repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 21,570 | closed | Add setters by type of args to TrainingArguments | # What does this PR do?
This PR introduces setters that group some of the `TrainingArguments` by type, to make this class easier to use. This was suggested in #20831
If the idea is of interest, I'll create more of those to cover all training arguments. | 02-10-2023 17:55:38 | 02-10-2023 17:55:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@MKhalusova this is a way to split the documentation of training arguments (currently 96 :grimacing: ) by splitting them in groups, could you have a quick look?<|||||>I really like the idea of splitting 96 arguments into reasonable chunks. Looking at the examples, I am not sure how it would look when you use more than one group.
Is it going to look something like this?
```
args = TrainingArguments("working_dir")
args = args.set_training(learning_rate=1e-4, batch_size=32)
args = args.set_logging(strategy="steps", steps=100)
args = args.set_evaluate(strategy="steps", steps=100)
```
<|||||>Yup, or chained directly if that's more your style:
```py
args = (
TrainingArguments("working_dir")
.set_training(learning_rate=1e-4, batch_size=32)
.set_logging(strategy="steps", steps=100)
.set_evaluate(strategy="steps", steps=100)
)
```<|||||>I think this should make the docs much more readable and might make the code more readable as well. |
transformers | 21,569 | closed | improving contributing tests section | # What does this PR do?
Adds info about the `RUN_PT_FLAX_CROSS_TESTS` environment variable used during testing. It took me some time to figure this out while testing flax models :sweat_smile: so I figured it might be helpful to future :hugs: contributors.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 02-10-2023 16:54:45 | 02-10-2023 16:54:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Marvelous, thanks again!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21569). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,568 | closed | Incorrect right stride when sequence end falls within right stride of penultimate chunk | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.15
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.7.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @sanchit-gandhi @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The easiest way to see this problem is to use a Whisper ASR model and debug into the `chunk_iter` method of `transformers/src/transformers/pipelines/automatic_speech_recognition.py` with an audio file that is of a specific length. An example length that illustrates this problem is about 87 seconds in length. The specific file I was testing with had 1,385,995 samples at 16kHz.
### Expected behavior
The Whisper model requires chunks of 30 seconds in length. I tested using the default stride of 5 seconds on each side. So a chunk is `30*16,000=480,000` samples. The stride width is normally `5*16,000=80,000` samples. But the first chunk doesn't have a left stride and the final chunk doesn't have a right stride, so for this file we would expect the following chunk sizes and strides (a small sketch of this arithmetic is included after the two lists below)...
* 480,000; 0; 80,000
* 480,000; 80,000; 80,000
* 480,000; 80,000; 80,000
* 425,995; 80,000; **25,995**
* 105,995; 80,000; 0
But what we actually get is
* 480,000; 0; 80,000
* 480,000; 80,000; 80,000
* 480,000; 80,000; 80,000
* 425,995; 80,000; **80,000**
* 105,995; 80,000; 0
This means that the fourth (penultimate) chunk is processed as if it has a full right stride when in fact it shouldn't.
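For reference, a small standalone sketch of the arithmetic expected above (this is not the pipeline code; the helper name and structure are only for illustration):
```python
# Illustrative helper: compute (chunk_len, left_stride, right_stride) the way the
# issue expects them, for a 1,385,995-sample file at 16 kHz.
def expected_chunks(n, chunk_len=480_000, stride=80_000):
    step = chunk_len - 2 * stride            # 320,000 "new" samples per chunk
    chunks = []
    for start in range(0, n, step):
        end = min(start + chunk_len, n)
        left = 0 if start == 0 else stride
        last = start + step >= n             # no later chunk will start
        core_end = start + chunk_len - stride  # where the right stride would normally begin
        right = 0 if last else max(0, min(stride, end - core_end))
        chunks.append((end - start, left, right))
    return chunks

print(expected_chunks(1_385_995))
# [(480000, 0, 80000), (480000, 80000, 80000), (480000, 80000, 80000),
#  (425995, 80000, 25995), (105995, 80000, 0)]
```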
| 02-10-2023 16:50:04 | 02-10-2023 16:50:04 | > if it has a full right stride when in fact it shouldn't.
The striding is designed in such a way that right stride should *always* be full, or if there's not enough data, no right stride at all and you are the last item.
There could still be a bug like you describe.<|||||>1,385,995
Using chunk_stride `(480, 80, 80)` means we're going to effectively use only 320k samples for each chunk so we're going to move only 320k samples within the audio at each step (except first and last which just use more audio)
First chunk:
(480, 0, 80), starting at offset 0 within samples
(480, 80, 80), starting at offset 320 within samples (80 left stride, means we're effectively using data from 400)
(480, 80, 80), starting at offset 640 within samples
(425, 80, 0), starting at offset 960 within samples (finishing at 1 385k)
The 4th one should be the last one, since the last sample to consider is outside the range of the audio, so it is the last part of the audio.
That is what should happen I think.<|||||>I'm seeing this:
```
(480000, 80000, 80000)
(480000, 80000, 80000)
(425000, 80000, 80000)
(105000, 80000, 0)
```
So there definitely seems to be an issue.
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py#L66
This line could be the issue, but I'm not fresh enough right now to be sure that my current reasoning is solid. |
transformers | 21,567 | closed | GPT NeoX Input docstring includes token_type_ids but GPTNeoXModel does not have token_type_ids as an input to forward | ### System Info
n/a
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Go to https://huggingface.co/docs/transformers/main/en/model_doc/gpt_neox#transformers.GPTNeoXModel and see that forward does not include token_type_id but it is shown in the docstring
### Expected behavior
Docstring should match forward | 02-10-2023 16:24:14 | 02-10-2023 16:24:14 | also applies to position_ids |
transformers | 21,566 | closed | Goodbye to Blip-2 doctests | # What does this PR do?
Blip-2 causes GPU OOM in the doctests, and potentially causes some subsequent tests to fail (as the GPU is not in a good state afterwards, though I didn't verify thoroughly).
This PR tries to avoid this situation, even though the added lines are not really meant to be in a doc for users. | 02-10-2023 16:20:44 | 02-10-2023 16:20:44 | > I don't think the doctests should run on a 2.7b-parameter model, that's not a good use of our resources. So the doctest should be skipped if there is not a smaller checkpoint available.
It could be run with FP16 + the changes in this PR. In terms of resources, the machine is always on, so I don't see the waste. Could you elaborate a bit more regarding your thoughts on the resources (while we can make it fit into the T4 GPUs eventually)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I agree! Will skip<|||||>cc @NielsRogge for information<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21566). All of your documentation changes will be reflected on that endpoint.
transformers | 21,565 | closed | Final cleanup of TOKENIZER_FOR_DOC | # What does this PR do?
This PR removes a last instance of `TOKENIZER_FOR_DOC` used in TFSpeech2Text and also cleans up the model templates to match the rest of the library. | 02-10-2023 15:47:32 | 02-10-2023 15:47:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,564 | closed | Remove Niels from templates | # What does this PR do?
As @NielsRogge will be leaving the open-source team on Monday, this PR updates the templates to remove him from the persons to tag. Thanks for all your help @NielsRogge ! | 02-10-2023 15:29:52 | 02-10-2023 15:29:52 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 21,563 | closed | Add EfficientNet | # What does this PR do?
Adds EfficientNet to transformers, see [original paper](https://arxiv.org/abs/1905.11946) and [repo](https://github.com/keras-team/keras/blob/v2.11.0/keras/applications/efficientnet.py). EfficientNet will be used as the vision encoder of the upcoming ALIGN PR.
- [x] Upload converted checkpoints for image classification
- [x] Create model cards
- [x] Update integration tests
Note: Failing tests are unrelated to this PR, will rebase to the main branch after the failing tests are fixed.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 02-10-2023 14:23:48 | 02-10-2023 14:23:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hmmm related to #17387, where the mode used is actually EfficientNet.... So it is multimodal, but the processing is on the audio part. Should probably have better synched on this one! <|||||>@ArthurZucker aha, I wasn't aware of Trillson indeed, the final phase of the blocks and the top are different but we could have saved some of the work |
transformers | 21,562 | closed | [Variant] Make sure variant files are not incorrectly deleted | # What does this PR do?
As discussed offline with @sgugger and @ArthurZucker this PR makes sure that only sharded weight files can be deleted to prevent accidentally deleting variants of `pytorch_model.bin` or `model.safetensors` .
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-10-2023 13:10:49 | 02-10-2023 13:10:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated<|||||>> this
Might be good to keep in mind as soon as we need "variant" for TF as well |
transformers | 21,561 | closed | Generate: Fix flaky indexing error in `test_constrained_beam_search_generate_dict_output` | # What does this PR do?
Fixes a random indexing error in `test_constrained_beam_search_generate_dict_output` -- For models without integer inputs (like whisper), we were requesting to force a random token between 3 and 99 (inclusive), but the vocab size was also 99.
This means that when the test attempted to force a `99`, the test would fail with an indexing error. | 02-10-2023 11:50:58 | 02-10-2023 11:50:58 | > For models without integer inputs (like whisper), we were requesting to force a random token between 3 and 99
Sorry, I lack a bit of context. `models without integer inputs`, but in this failed test, the `input_ids` is still passed to the model, and the model uses it (and gets the index error)? I am a bit confused by reading it literally. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh `input_ids` is a poor variable name in this test (and in many others). It holds a dummy input with the format expected by the model -- for `Whisper`, it is a tensor of type fp32. See below what the debugger prints for this test on `Whisper`

The touched variables (`min_id` and `max_id`) are used to force a random token (selected between those ids) at generate time. These were the indices triggering the indexing error. Since `generate` always outputs tokens, regardless of the model, the if/else I removed didn't make much sense :) |
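To make the off-by-one concrete, here is a toy illustration (not the actual test code) of why forcing id 99 fails when the vocabulary size is 99:
```python
import torch

vocab_size = 99
logits = torch.randn(1, vocab_size)          # valid token ids are 0 .. vocab_size - 1

try:
    logits[0, 99]                            # forcing id == vocab_size indexes out of range
except IndexError as err:
    print("IndexError:", err)

forced = torch.randint(3, vocab_size, (1,)).item()  # upper bound is exclusive: 3 .. vocab_size - 1
print("forcing token", forced, "->", logits[0, forced].item())
```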
transformers | 21,560 | closed | training with native pytorch bf16 fsdp gives runtime error. | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to train my bert model with native pytorch fsdp and bf16 mixed precision.
When enabling bf16, transformers would give me the following error:
`File "/data/root/anaconda3/envs/pt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data/root/anaconda3/envs/pt/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 363, in forward
context_layer = torch.matmul(attention_probs, value_layer)
RuntimeError: expected scalar type BFloat16 but found Float
`
Below is my full py code
```
import os
import time
import functools
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from transformers import BertModel, BertConfig, BertLayer
from torchvision.models.vision_transformer import Encoder, EncoderBlock, MLPBlock
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import (
transformer_auto_wrap_policy,
size_based_auto_wrap_policy,
enable_wrap,
wrap,
)
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
apply_activation_checkpointing,
checkpoint_wrapper as checkpoint_wrapper_pytorch,
CheckpointImpl,
)
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def train(data, model, rank, world_size, optimizer):
model.train()
ddp_loss = torch.zeros(2).to(rank)
data = data.to(rank)
print("data: ", data.shape)
for _ in range(8):
optimizer.zero_grad()
output = model(data)
loss = output[0].mean()
if rank == 0: print(loss)
loss.backward()
optimizer.step()
ddp_loss[0] += loss.item()
ddp_loss[1] += len(data)
dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM)
def fsdp_main(rank, world_size):
setup(rank, world_size)
torch.cuda.set_device(rank)
bs = 64
model = BertModel(BertConfig())
data = torch.ones((bs, 128)).int()
attn_layers = [BertLayer]
auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls = { *attn_layers }
)
dtype = torch.bfloat16
mp_policy = MixedPrecision(
param_dtype = dtype, reduce_dtype=dtype, buffer_dtype=dtype
)
# mp_policy = None
model = FSDP(model.to(rank), device_id=rank,
mixed_precision=mp_policy,
auto_wrap_policy=auto_wrap_policy)
# model = DDP(model.to(rank))
if rank == 0:
print(f"{model}")
optimizer = optim.Adadelta(model.parameters(), lr=1e-2)
train(data, model, rank, world_size, optimizer)
dist.destroy_process_group()
if __name__ == '__main__':
WORLD_SIZE = torch.cuda.device_count()
# fsdp_main(0, WORLD_SIZE, args, )
mp.spawn(fsdp_main,
args=(WORLD_SIZE, ),
nprocs=WORLD_SIZE,
join=True)
```
You can launch the script with `python run.py`. Only pytorch and transformers are needed. I also found a similar report in the [pytorch repo](https://github.com/pytorch/pytorch/issues/75676). It looks like something is wrong with the transformers `get_extended_attention_mask` implementation. Wondering if there will be any quick fix to this.
Many thanks!
### Expected behavior
Transformers should work with pytorch native bf16 | 02-10-2023 11:43:44 | 02-10-2023 11:43:44 | I think the issue is just that you never do `model.half()` can you try this? <|||||>Hi @ArthurZucker
I tried running `model.half()` with `dtype = torch.float16`, and it works fine.
But when `dtype = 'bfloat16'`, I still have to manually convert the extended attention mask output to bf16
<|||||>Ok, as a minimal reproducing script here is what is working:
```python
from transformers import BertModel, BertTokenizer
import torch
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("this is a test input", return_tensors = "pt")
model = model.cuda().to(torch.bfloat16)
print(model.dtype) # whether we can use it or not
model(**{k:v.cuda() for k,v in inputs.items()})
```
Also works with default config:
```python
model = BertModel(BertConfig())
model = model.cuda().to(torch.bfloat16)
print(model.dtype) # whether we can use it or not
model(**{k:v.cuda() for k,v in inputs.items()})
```
As I am not familiar with FSDP and its internals, I need a full traceback error (because the `get_extended_attention_mask` uses `self.dtype` by default if no `dtype` is provided). Otherwise it might not be entirely related to transformers <|||||>cc @pacman100 for FSDP :-) <|||||>Hello @YamamotoSayaka, does using the context manager `torch.cuda.amp.autocast(dtype=torch.bfloat16)` solve the issue? I tried it and it seems to solve this issue. Just want a confirmation from your end.
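For reference, a minimal sketch of the autocast workaround suggested above (illustrative only -- the training-step name and setup are assumptions, and it has not been tested against the exact FSDP configuration in this issue):
```python
import torch

def train_step(model, data, optimizer):
    # keep the (FSDP-wrapped) model in fp32 and let autocast run the forward pass in bf16
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(dtype=torch.bfloat16):
        output = model(data)
        loss = output[0].mean()
    loss.backward()          # gradients land in the fp32 parameters
    optimizer.step()
    return loss
```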
transformers | 21,559 | closed | The batch_size in OPTModel limits the training performance with Pytorch FSDP | ### System Info
When I use transformers' OPTModel to load the opt-13b model for training with PyTorch FSDP, I found that the whole training is limited by the batch size. Although FSDP can offload parameters to CPU memory to reduce the pressure on GPU memory, the batch size drives the size of the intermediate tensors in the forward pass, so GPU memory overflows when those tensors are materialized on the GPU.
### Who can help?
@sgugger @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Training code
```python
import os
import argparse
import functools
import torch
from itertools import chain
import torch.nn as nn
import torch.optim as optim
from transformers import (
OPTForCausalLM,
AutoTokenizer,
default_data_collator,
)
from transformers.models.opt.modeling_opt import OPTDecoderLayer, OPTAttention
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import StepLR
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.distributed.fsdp import (
MixedPrecision,
FullyShardedDataParallel as FSDP
)
from torch.distributed.fsdp.fully_sharded_data_parallel import (
CPUOffload,
)
from torch.distributed.fsdp.wrap import (
size_based_auto_wrap_policy,
transformer_auto_wrap_policy,
)
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
checkpoint_wrapper,
)
def getDataset():
raw_datasets = load_dataset("wikitext", "wikitext-2-v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b")
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=1,
remove_columns=column_names,
load_from_cache_file=False,
desc="Running tokenizer on dataset",
)
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {
k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= 1024:
total_length = (total_length // 1024) * 1024
# Split by chunks of max_len.
result = {
k: [t[i: i + 1024]
for i in range(0, total_length, 1024)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=1,
load_from_cache_file=False,
desc=f"Grouping texts in chunks of {1024}",
)
return lm_datasets["train"]
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def train(args, model, rank, world_size, train_loader, optimizer, epoch):
model.train()
ddp_loss = torch.zeros(2).to(rank)
for batch_idx, batch in enumerate(train_loader):
input_ids = batch["input_ids"].to(rank)
attention_mask = batch["attention_mask"].to(rank)
labels = batch["labels"].to(rank)
print(rank, "start forward", batch_idx, " *"*10)
outputs = model(input_ids=input_ids,
attention_mask=attention_mask, labels=labels)
optimizer.zero_grad()
loss = outputs.loss
print(rank, "start backward", batch_idx, " *"*10)
loss.backward()
optimizer.step()
ddp_loss[0] += loss.item()
ddp_loss[1] += len(input_ids)
if rank == 0:
print(batch_idx, " *"*10)
dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM)
if rank == 0:
print('Train Epoch: {} \tLoss: {:.6f}'.format(
epoch, ddp_loss[0] / ddp_loss[1]))
def fsdp_main(rank, world_size, args):
setup(rank, world_size)
train_dataset = getDataset()
train_loader = DataLoader(
train_dataset, collate_fn=default_data_collator,
batch_size=101, num_workers=1
)
my_auto_wrap_policy = functools.partial(
size_based_auto_wrap_policy, min_num_params=100000
)
# my_auto_wrap_policy = functools.partial(
# transformer_auto_wrap_policy, transformer_layer_cls={
# OPTDecoderLayer, OPTAttention, nn.LayerNorm, nn.Linear}
# )
torch.cuda.set_device(rank)
init_start_event = torch.cuda.Event(enable_timing=True)
init_end_event = torch.cuda.Event(enable_timing=True)
if rank == 0:
print("*"*10+"loading to cpu"+"*"*10)
model = OPTForCausalLM.from_pretrained("facebook/opt-13b")
model = checkpoint_wrapper(model, offload_to_cpu=True)
model = FSDP(model,
cpu_offload=CPUOffload(offload_params=True),
auto_wrap_policy=my_auto_wrap_policy,
mixed_precision=MixedPrecision(param_dtype=torch.float16,
reduce_dtype=torch.float16,
buffer_dtype=torch.float16,
keep_low_precision_grads=True)
)
if rank == 0:
print("*"*10+"print the fsdp model"+"*"*10)
print(model)
print_file = open("./model", 'w')
print(model, file=print_file)
print()
optimizer = optim.Adam(model.parameters(), lr=args.lr)
# optimizer = optim.SGD(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
init_start_event.record()
for epoch in range(1, args.epochs + 1):
train(args, model, rank, world_size, train_loader,
optimizer, epoch)
scheduler.step()
init_end_event.record()
if rank == 0:
print(
f"CUDA event elapsed time: {init_start_event.elapsed_time(init_end_event) / 1000}sec")
print(f"{model}")
cleanup()
if __name__ == '__main__':
# Training settings
parser = argparse.ArgumentParser(description='PyTorch OPT Example')
parser.add_argument('--batch-size', type=int, default=1, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--epochs', type=int, default=1, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=0.001, metavar='LR',
help='learning rate (default: 0.001)')
parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
help='Learning rate step gamma (default: 0.7)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
args = parser.parse_args()
torch.manual_seed(args.seed)
WORLD_SIZE = torch.cuda.device_count()
mp.spawn(fsdp_main,
args=(WORLD_SIZE, args),
nprocs=WORLD_SIZE,
join=True)
```
### Expected behavior
The shape of attn_weights is (bsz=100, self.num_heads=40, tgt_len=1024, src_len=1024). Even though its data type is fp16, its size has reached close to 8GB, which directly leads to GPU memory overflow.
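A quick back-of-the-envelope check of that figure:
```python
# Size of a single fp16 attention-weight tensor with the shape quoted above.
bsz, num_heads, tgt_len, src_len = 100, 40, 1024, 1024
size_bytes = bsz * num_heads * tgt_len * src_len * 2   # 2 bytes per fp16 element
print(f"{size_bytes / 1024**3:.2f} GiB")                # ~7.81 GiB for one attention matrix
```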

| 02-10-2023 09:37:23 | 02-10-2023 09:37:23 | The bsz is actually the input batch_size.<|||||>Seems like the crash happens during the forward pass, and is not directly related to the transformers library.
This does not seem like a bug but rather a discussion, feel free to ask on [the forum](https://discuss.huggingface.co/). |
transformers | 21,558 | closed | GenerationConfig bug on Transformers >4.26.0 by using Whisper and model.generate() | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0
- Python version: 3.9.13
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have fine-tuned a Whisper model, and when using `model.generate(...)` for inference with the new transformers version 4.26.0 I get an error (it does not happen in 4.25.0).
My error is related to **GenerationConfig**, something that was changed in 4.26.0.
```
pred_ids = model.generate(
encoder_outputs=model.get_encoder()(data_batch['input_features'].to(device), attention_mask=None),
attention_mask=None,
)
```
### **Error trace:**
`(\n File \"/mydir/lib/python3.9/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\n return func(*args, **kwargs)\n File \"/mydir/lib/python3.9/site-packages/transformers/generation/utils.py\", line 1183, in generate\n if self.generation_config._from_model_config:\n File \"/mydir/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1185, in __getattr__\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\nAttributeError: 'WhisperForConditionalGeneration' object has no attribute 'generation_config'\n"`
### **Error location:**
Line 1174-1187
https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py
### **Possible solution:**
By adding following line before `model.generate(...)` I don't have any error
`model.generation_config = GenerationConfig.from_model_config(model.config)` | 02-10-2023 09:36:37 | 02-10-2023 09:36:37 | cc'ing our generation expert @gante <|||||>Hi @javilonso! There is a high likelihood that you are seeing an error due to an unfortunate combination of events, and I'd like to pin that down (e.g. a model trained in v4.25 + loading without `from_pretrained`).
Can you share a reproducible script? <|||||>Thanks @gante for checking the issue!
You are right, I am not using `from_pretrained` to load the model.
Instead, I'm using MLFlow hub, so the way to load models is `mlflow.pytorch.load_model(....)`.
By having a look at the MLFlow source code, they save and load the model via **torch**, so they don't use HuggingFace's `from_pretrained`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
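For anyone hitting the same thing, a sketch that pulls the thread's workaround together (the mlflow model URI and the dummy input are illustrative assumptions, not values from this issue):
```python
import torch
import mlflow.pytorch
from transformers import GenerationConfig

# hypothetical registry URI -- replace with your own fine-tuned Whisper model
model = mlflow.pytorch.load_model("models:/my-whisper-finetune/1")

# models loaded outside from_pretrained() may miss generation_config on transformers >= 4.26
model.generation_config = GenerationConfig.from_model_config(model.config)

input_features = torch.zeros(1, 80, 3000)   # dummy log-mel features with Whisper's expected shape
pred_ids = model.generate(
    encoder_outputs=model.get_encoder()(input_features, attention_mask=None),
    attention_mask=None,
)
```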
transformers | 21,556 | closed | [deepspeed] Bittensor dataset causes hanging | ### System Info
transformers version: 4.27.0.dev0
Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Python version: 3.10.6
Huggingface_hub version: 0.12.0
PyTorch version (GPU?): 1.12.0+cu113 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: yes, via deepspeed
Using distributed or parallel set-up in script?: yes, via deepspeed
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've been trying to use the Trainer with deepspeed using the following guide: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/deepspeed#trainer-deepspeed-integration
Below is my python code:
```
#!/usr/bin/env python
# coding=utf-8
# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for sequence to sequence.
"""
# You can also adapt this script on your own sequence to sequence task. Pointers for this are left as comments.
import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
import evaluate
import transformers
from transformers import (
AutoConfig,
AutoTokenizer,
HfArgumentParser,
M2M100Tokenizer,
MBart50Tokenizer,
MBart50TokenizerFast,
MBartTokenizer,
MBartTokenizerFast,
Trainer,
TrainingArguments,
AutoModelForCausalLM,
default_data_collator,
set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version
import bittensor
from itertools import chain
from tqdm.auto import tqdm
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.27.0.dev0")
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/translation/requirements.txt")
logger = logging.getLogger(__name__)
# A list of all multilingual tokenizer which require src_lang and tgt_lang attributes.
MULTILINGUAL_TOKENIZERS = [MBartTokenizer, MBartTokenizerFast, MBart50Tokenizer, MBart50TokenizerFast, M2M100Tokenizer]
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `huggingface-cli login` (necessary to use this script "
"with private models)."
)
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
source_lang: str = field(default=None, metadata={"help": "Source language id for translation."})
target_lang: str = field(default=None, metadata={"help": "Target language id for translation."})
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a jsonlines)."})
validation_file: Optional[str] = field(
default=None,
metadata={
"help": "An optional input evaluation data file to evaluate the metrics (sacrebleu) on a jsonlines file."
},
)
test_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input test data file to evaluate the metrics (sacrebleu) on a jsonlines file."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
max_source_length: Optional[int] = field(
default=1024,
metadata={
"help": (
"The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
max_target_length: Optional[int] = field(
default=128,
metadata={
"help": (
"The maximum total sequence length for target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
val_max_target_length: Optional[int] = field(
default=None,
metadata={
"help": (
"The maximum total sequence length for validation target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded. Will default to `max_target_length`."
"This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
"during ``evaluate`` and ``predict``."
)
},
)
pad_to_max_length: bool = field(
default=False,
metadata={
"help": (
"Whether to pad all samples to model maximum sentence length. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch. More "
"efficient on GPU but very bad for TPU."
)
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
)
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
)
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
)
},
)
num_beams: Optional[int] = field(
default=None,
metadata={
"help": (
"Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
"which is used during ``evaluate`` and ``predict``."
)
},
)
ignore_pad_token_for_loss: bool = field(
default=True,
metadata={
"help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
},
)
source_prefix: Optional[str] = field(
default=None, metadata={"help": "A prefix to add before every source text (useful for T5 models)."}
)
forced_bos_token: Optional[str] = field(
default=None,
metadata={
"help": (
"The token to force as the first generated token after the :obj:`decoder_start_token_id`.Useful for"
" multilingual models like :doc:`mBART <../model_doc/mbart>` where the first generated token needs to"
" be the target language token.(Usually it is the target language token)"
)
},
)
def __post_init__(self):
if self.dataset_name is None and self.train_file is None and self.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
# accepting both json and jsonl file extensions, as
# many jsonlines files actually have a .json extension
valid_extensions = ["json", "jsonl"]
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in valid_extensions, "`train_file` should be a jsonlines file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in valid_extensions, "`validation_file` should be a jsonlines file."
if self.val_max_target_length is None:
self.val_max_target_length = self.max_target_length
def load_raw_datasets(name: str, confName: str) -> DatasetDict:
if name == "bittensor":
dataset = bittensor.dataset(
no_tokenizer=True,
# batch_size=cfg.training.train_batch_size,
# block_size=cfg.dataset.block_size,
)
dataloader = dataset.dataloader(1000)
bittensor_dataset = {"text": []}
for batch in tqdm(dataloader, desc="Loading data from bittensor IPFS"):
bittensor_dataset["text"].extend(batch)
raw_datasets = Dataset.from_dict(bittensor_dataset)
dataset.close() # Avoid leaving threadqueue running.
return raw_datasets
if os.path.exists(name):
data_files = {"text": name}
dataset_args = {}
extension = os.path.splitext(name)[-1].lstrip(".")
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = True
raw_datasets = load_dataset(
extension, data_files=data_files, **dataset_args)
raw_datasets = raw_datasets["text"]
else:
raw_datasets = load_dataset(name, confName)
return raw_datasets
def load_model_and_tokenizer(model_args: ModelArguments):
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# tokenizer.pad_token = cfg.tokenizer.pad_token
if tokenizer.pad_token is None and tokenizer.eos_token is not None:
tokenizer.pad_token = tokenizer.eos_token
# model = AutoModelForCausalLM.from_pretrained(
# name,
# from_tf=bool(".ckpt" in name),
# config=config,
# )
# model.to('cuda')
# model.resize_token_embeddings(len(tokenizer))
# We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
# on a small vocab and want a smaller embedding size, remove this test.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
return tokenizer, model
def preprocess(blockSize, tokenizer, raw_datasets):
# First we tokenize all the texts.
column_names = raw_datasets.column_names
text_column_name = "text" if "text" in column_names else column_names["train"][0]
if True is True:
pad = False
else:
pad = "max_length"
def group_texts(examples):
# print(examples)
# Concatenate all texts.
concatenated_examples = {
k: list(chain(*examples[k])) for k in examples.keys()}
# print(concatenated_examples)
total_length = len(concatenated_examples[list(examples.keys())[0]])
if total_length >= blockSize:
total_length = (
total_length // blockSize
) * blockSize
# Split by chunks of max_len.
result = {
k: [
t[i: i + blockSize]
for i in range(0, total_length, blockSize)
]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
def tokenize_fn(examples):
# result = tokenizer(
# examples[text_column_name],
# padding=pad,
# truncation=True,
# max_length=cfg.dataset.block_size,
# )
# result["labels"] = result["input_ids"].copy()
# return result
return tokenizer(examples[text_column_name])
tokenized_datasets = raw_datasets.map(
tokenize_fn,
batched=True,
remove_columns=text_column_name,
load_from_cache_file=not False,
desc="Running tokenizer on dataset",
)
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=None,
load_from_cache_file=not False,
desc=f"Grouping texts in chunks of {blockSize}",
)
return lm_datasets
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
# information sent is the one passed as arguments along with your Python/PyTorch versions.
send_example_telemetry("run_translation", model_args, data_args)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
logger.info(f"Training/evaluation parameters {training_args}")
tokenizer, model = load_model_and_tokenizer(model_args)
dataset = load_raw_datasets("bittensor", None)
# dataset = load_raw_datasets("wikitext", "wikitext-2-raw-v1")
tokenized_datasets = preprocess(2, tokenizer, dataset)
if "train" not in tokenized_datasets.column_names:
tokenized_datasets = tokenized_datasets.train_test_split(
test_size=5 / 100
)
tokenized_datasets_test_valid = tokenized_datasets["test"].train_test_split(
test_size=0.5
)
tokenized_datasets["test"] = tokenized_datasets_test_valid["train"]
tokenized_datasets["validation"] = tokenized_datasets_test_valid["test"]
train_dataset = tokenized_datasets["train"]
eval_dataset = tokenized_datasets["validation"]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
# tokenizer=tokenizer,
# compute_metrics=compute_metrics,
)
trainer.train()
if __name__ == "__main__":
    main()
```
The JSON config I'm using for deepspeed is:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "none",
"pin_memory": false
},
"offload_param": {
"device": "none",
"pin_memory": false
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
My program works great when using a Hugging Face dataset; however, when I try using the bittensor dataset the program always just hangs early on, either while training or while evaluating, with nothing obvious appearing in the logs. Any ideas? Is there anything I can do to determine what is causing the hanging? Thanks.
### Expected behavior
The program runs to completion without hanging or at the very least, indicates what is causing the hanging. | 02-10-2023 07:53:08 | 02-10-2023 07:53:08 | Your report is almost perfect, @benproton - but how do I run your program?
Please show me the cmd args for both - a good and the hanging run, since until I can reproduce it I can't help you.
Also were you able to use `py-spy` that I linked to yesterday https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-hanging-solutions.md to see where the program hangs? It is surprisingly easy to use.<|||||>Sure my bad:
Hanging run with bittensor:
`deepspeed examples/pytorch/translation/run-text-gen.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir=EleutherAI/gpt-neo-1.3B-aB --evaluation_strategy steps --eval_steps 50 --num_train_epochs 4 --dataset_name bittensor --save_strategy no --warmup_steps 20 --lr_scheduler_type cosine_with_restarts --weight_decay 0.2 --learning_rate 0.00006`
In the code I gave I've actually hard-coded the bittensor data set i.e.:
`dataset = load_raw_datasets("bittensor", None)`
For a successful run you can comment that line and uncomment this line below:
`# dataset = load_raw_datasets("wikitext", "wikitext-2-raw-v1")`
Many thanks<|||||>Thank you for the remaining information I needed.
Using it I was successful at reproducing the hanging with just 2 gpus and while reducing your setup to a much smaller setup:
```
deepspeed run-text-gen.py --deepspeed ds_config_zero3.json \
--model_name_or_path EleutherAI/gpt-neo-125m \
--output_dir=EleutherAI/gpt-neo-1.3B-aB --evaluation_strategy steps \
--eval_steps 50 --num_train_epochs 4 --dataset_name bittensor --save_strategy \
no --warmup_steps 20 --lr_scheduler_type cosine_with_restarts --weight_decay \
0.2 --learning_rate 0.00006 --per_device_train_batch_size 1 \
--per_device_eval_batch_size 1
```
happened during eval:
```
[INFO|trainer.py:1683] 2023-02-10 19:14:47,595 >> ***** Running training *****
[INFO|trainer.py:1684] 2023-02-10 19:14:47,595 >> Num examples = 6441
[INFO|trainer.py:1685] 2023-02-10 19:14:47,595 >> Num Epochs = 4
[INFO|trainer.py:1686] 2023-02-10 19:14:47,595 >> Instantaneous batch size per device = 1
[INFO|trainer.py:1687] 2023-02-10 19:14:47,595 >> Total train batch size (w. parallel, distributed & accumulation) = 2
[INFO|trainer.py:1688] 2023-02-10 19:14:47,595 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1689] 2023-02-10 19:14:47,595 >> Total optimization steps = 12884
[INFO|trainer.py:1690] 2023-02-10 19:14:47,596 >> Number of trainable parameters = 0
0%|β | 50/12884 [00:15<58:55, 3.63it/s][INFO|trainer.py:3011] 2023-02-10 19:15:02,688 >> ***** Running Evaluation *****
[INFO|trainer.py:3013] 2023-02-10 19:15:02,688 >> Num examples = 170
[INFO|trainer.py:3016] 2023-02-10 19:15:02,688 >> Batch size = 1
60%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 51/85 [00:05<00:03, 9.22it/s]
```
getting `py-spy` traces:
```
Thread 246134 (active): "MainThread"
_prepare_input (transformers/trainer.py:2518)
<dictcomp> (transformers/trainer.py:2508)
_prepare_input (transformers/trainer.py:2508)
_prepare_inputs (transformers/trainer.py:2526)
prediction_step (transformers/trainer.py:3273)
evaluation_loop (transformers/trainer.py:3056)
evaluate (transformers/trainer.py:2875)
_maybe_log_save_evaluate (transformers/trainer.py:2180)
_inner_training_loop (transformers/trainer.py:1920)
train (transformers/trainer.py:1576)
main (run-text-gen.py:461)
<module> (run-text-gen.py:465)
Thread 246240 (idle): "Thread-1"
wait (threading.py:306)
wait (threading.py:558)
run (tqdm/_monitor.py:60)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246551 (idle): "Thread-3"
wait (threading.py:306)
wait (threading.py:558)
run (tqdm/_monitor.py:60)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246733 (idle): "Thread-4"
wait (threading.py:306)
get (queue.py:179)
run (tensorboard/summary/writer/event_file_writer.py:227)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246735 (idle): "APScheduler"
wait (threading.py:306)
wait (threading.py:558)
_main_loop (apscheduler/schedulers/blocking.py:30)
run (threading.py:870)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246739 (idle)
Thread 246740 (idle)
Thread 246793 (idle): "ThreadPoolExecutor-2_0"
_worker (concurrent/futures/thread.py:78)
run (threading.py:870)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246135 (idle): "MainThread"
backward (torch/autograd/__init__.py:197)
backward (torch/_tensor.py:488)
backward (deepspeed/runtime/fp16/loss_scaler.py:51)
backward (deepspeed/runtime/zero/stage3.py:2096)
wrapped_fn (deepspeed/utils/nvtx.py:9)
backward (deepspeed/runtime/engine.py:1968)
wrapped_fn (deepspeed/utils/nvtx.py:9)
training_step (transformers/trainer.py:2604)
_inner_training_loop (transformers/trainer.py:1843)
train (transformers/trainer.py:1576)
main (run-text-gen.py:461)
<module> (run-text-gen.py:465)
Thread 246251 (idle): "Thread-1"
wait (threading.py:306)
wait (threading.py:558)
run (tqdm/_monitor.py:60)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246533 (idle): "Thread-3"
wait (threading.py:306)
wait (threading.py:558)
run (tqdm/_monitor.py:60)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 246741 (idle)
Thread 246742 (active)
synchronize (torch/cuda/streams.py:219)
__reduce_and_partition_ipg_grads (deepspeed/runtime/zero/stage3.py:1153)
decorate_context (torch/autograd/grad_mode.py:27)
wrapped_fn (deepspeed/utils/nvtx.py:9)
reduce_independent_p_g_buckets_and_remove_grads (deepspeed/runtime/zero/stage3.py:1102)
reduce_ready_partitions_and_remove_grads (deepspeed/runtime/zero/stage3.py:1342)
reduce_partition_and_remove_grads (deepspeed/runtime/zero/stage3.py:1066)
wrapped_fn (deepspeed/utils/nvtx.py:9)
```
So this is some sort of gpu synchronization issue.
As you can see one of the processes is in training, while the other is in eval stages which is very wrong and doesn't make much sense.
I have never seen this sort of hanging.
I will try to analyse it when I get a chance and get back to you, @benproton
<|||||>ok, the hanging has nothing to do with deepspeed, you can set it to `"stage": 0,` which completely disables deepspeed and it still hangs in eval. This looks more like some kind of network issue where it blocks on fetching new data but no new data comes.
`bittensor` appears to have issues itself, as when you kill the main process it continues running multiple python processes.
so please feel free to close the issue, since your hanging isn't an issue in Transformers or Deepspeed.
And I gave you information on how to debug this on your own if you'd like to proceed with figuring it out.<|||||>Ok, got it, @stas00 thank you for investigating. Happy to close the issue but out of interest, here is the original script I've been trying to adapt to deepspeed/Trainer: https://github.com/opentensor/clm_model_tuning/blob/320145eb796dd1c28916245a294d9bfa8578bf5a/finetune_using_clm.py
If there's anything glaring you can see I'm getting wrong or if there's any additional pointers you can think of I'd be truly grateful. Thanks again.<|||||>I'm not sure you understand what the problem is - the issue isn't on your side - the issue is on the bittensor side - you're not using a normal dataset, but a network-based one - so it predownloads some data at the start and then later it blocks not giving any data to the evaluator or trainer.
```
Thread 246134 (active): "MainThread"
_prepare_input (transformers/trainer.py:2518)
<dictcomp> (transformers/trainer.py:2508)
_prepare_input (transformers/trainer.py:2508)
_prepare_inputs (transformers/trainer.py:2526)
prediction_step (transformers/trainer.py:3273)
evaluation_loop (transformers/trainer.py:3056)
```
it is waiting for the worker to feed it data.
That's also why it's inconsistent and appears to hang in a different place every time;
so it is possible that if you wait for a long time it'll eventually unblock.
The simplest test is to just use the dataloader alone in a short script and drain it fast iterating over the batches - I'm 99% sure it'll hang too. I think it's a good experiment to run.
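A minimal drain test could look something like this (a sketch — `build_dataloader()` stands in for however the training script constructs its network-backed dataloader, since I'm not assuming anything about the bittensor API here):
```python
import time
from torch.utils.data import DataLoader


def drain(dataloader: DataLoader, max_batches: int = 1000) -> None:
    """Iterate over the dataloader as fast as possible and report progress."""
    start = time.time()
    seen = 0
    for batch in dataloader:
        seen += 1
        if seen % 50 == 0:
            print(f"{seen} batches after {time.time() - start:.1f}s")
        if seen >= max_batches:
            break
    print(f"drained {seen} batches without hanging")


# drain(build_dataloader())  # hypothetical: build the dataloader exactly as the training script does
```
If this stalls on its own, the problem is entirely in the data source and not in the Trainer.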
> here is the original script
Accelerate supports deepspeed - same setup, so you can use the original script with deepspeed no problem. https://huggingface.co/docs/accelerate/usage_guides/deepspeed<|||||>Ahhhh... finally got it now thanks. I'll look into getting it locally. I learn something new with each of your comments, appreciated man. |
transformers | 21,555 | closed | Finetuning RAG-Getting git.exc.InvalidGitRepositoryError | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.27
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
For fine-tuning, I ran the command below:
!python "finetune_rag.py" \
--data_dir "RAGData" \
--output_dir "output_ft" \
--model_name_or_path facebook/rag-token-base \
--model_type rag_token \
--distributed_retriever pytorch \
--gpus 2 \
--do_train \
--do_predict
Getting the following error:
The tokenizer class you load from this checkpoint is 'RagTokenizer'.
The class this function is called from is 'BartTokenizerFast'.
Traceback (most recent call last):
File "/workspace/finetune_rag.py", line 649, in <module>
main(args)
File "/workspace/finetune_rag.py", line 586, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "/workspace/finetune_rag.py", line 157, in __init__
save_git_info(self.hparams.output_dir)
File "/workspace/utils_rag.py", line 145, in save_git_info
repo_infos = get_git_info()
File "/workspace/utils_rag.py", line 161, in get_git_info
repo = git.Repo(path)
File "/opt/conda/lib/python3.10/site-packages/git/repo/base.py", line 282, in __init__
self.working_dir: Optional[PathLike] = self._working_tree_dir or self.common_dir
File "/opt/conda/lib/python3.10/site-packages/git/repo/base.py", line 363, in common_dir
raise InvalidGitRepositoryError()
git.exc.InvalidGitRepositoryError
Please help.
### Who can help?
@lhoestq
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For fine-tuning, I ran the command below in a Jupyter notebook:
!python "finetune_rag.py" \
--data_dir "RAGData" \
--output_dir "output_ft" \
--model_name_or_path facebook/rag-token-base \
--model_type rag_token \
--distributed_retriever pytorch \
--gpus 2 \
--do_train \
--do_predict
### Expected behavior
Finetuning of RAG | 02-10-2023 06:24:25 | 02-10-2023 06:24:25 | Hi @harithareddy84 π It seems like you have a `git`-related issue, not a `transformers`-related issue. I'm afraid we won't be able to help -- try searching for the error message :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,554 | closed | shift_tokens_right function in XforConditionalGeneration classes | Hi there,
I am using BART for summarization and I noticed an issue with the `DataCollator` class when using it with multiple batches. The class uses the `prepare_decoder_input_ids_from_labels` function to create the `decoder_input_ids` for the labels (target sequence), which shifts the tokens one to the right and appends the `decoder_start_token_id` at the start of the shifted tensor. This works well when the batch size is 1, but it seems to be buggy when using multiple batches.
For example, I observed that the second example in my batch (which is padded with -100 in the labels tensor) has two `decoder_input_ids`, one to the left and one to the right. I would like to understand if this implementation is correct, and if so, what is the reasoning behind shifting the longer example while appending the `decoder_start_token_id` to the shorter example while keeping it at the end of the target sequence?
Here's an example with `batch_size` of 2:
(Examples are truncated for better visualization and with the `decoder_input_ids`s as `**2**`)
```
pdb>> decoder_input_ids
tensor([[ **2**, 0, 133, 6741, 31275, 2633, 41, 13433, 9, 5,
3425, 6411, 9, 10, 6063, 797, 2187, 4, 50267, ...
0, 133, 6063, 40, 45],
[ **2**, 0, 133, 27913, 39322, 31275, 3373, 1437, 103, 12720,
8, 8047, ... , 7, 33, 10, 2200, 12628,
50267, 0, 2264, 8089, 5, 6063, 197, 283, 11, 4,
50267, **2**, 1, 1, 1]], device='cuda:0')
```
P.S.:
Given the reasoning behind shifting operation in decoding time, the second example makes more sense to me; as the first "2" denotes the start of the sequence and makes the decoder to predict the first word of the target sequence, and the last "2" is again predicted in the second example to denote the end of the sequence. However, I do not see any second "2" in the first example. Will the decoder be able to take care of both examples? | 02-10-2023 04:44:13 | 02-10-2023 04:44:13 | cc @ArthurZucker and @younesbelkada <|||||>Hey for BART, the `eos_token_id=2`, which is why you have one at the end of a sequence, and a the beginning (for training purposes). Does that make sense?
|
transformers | 21,553 | closed | Trainer has no metric calculation and output in evaluation stage after using newest version | ### System Info
Transformers version 4.22.0 (older one) 4.26.1 (new one with issues)
Ubuntu 22.04
Pytorch version 1.13.1 with only CPU as compute platform
Python 3.9.0
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the code which is slightly modified from [official code for decision transformer](https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb).
```
import os
import random
from dataclasses import dataclass
import numpy as np
import torch
from datasets import load_dataset
from transformers import DecisionTransformerConfig, DecisionTransformerModel, Trainer, TrainingArguments
dataset = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")
@dataclass
class DecisionTransformerGymDataCollator:
return_tensors: str = "pt"
max_len: int = 20 #subsets of the episode we use for training
state_dim: int = 17 # size of state space
act_dim: int = 6 # size of action space
max_ep_len: int = 1000 # max episode length in the dataset
scale: float = 1000.0 # normalization of rewards/returns
state_mean: np.array = None # to store state means
state_std: np.array = None # to store state stds
p_sample: np.array = None # a distribution to take account trajectory lengths
n_traj: int = 0 # to store the number of trajectories in the dataset
def __init__(self, dataset) -> None:
self.act_dim = len(dataset[0]["actions"][0])
self.state_dim = len(dataset[0]["observations"][0])
self.dataset = dataset
# calculate dataset stats for normalization of states
states = []
traj_lens = []
for obs in dataset["observations"]:
states.extend(obs)
traj_lens.append(len(obs))
self.n_traj = len(traj_lens)
states = np.vstack(states)
self.state_mean, self.state_std = np.mean(states, axis=0), np.std(states, axis=0) + 1e-6
traj_lens = np.array(traj_lens)
self.p_sample = traj_lens / sum(traj_lens)
def _discount_cumsum(self, x, gamma):
discount_cumsum = np.zeros_like(x)
discount_cumsum[-1] = x[-1]
for t in reversed(range(x.shape[0] - 1)):
discount_cumsum[t] = x[t] + gamma * discount_cumsum[t + 1]
return discount_cumsum
def __call__(self, features):
batch_size = len(features)
# this is a bit of a hack to be able to sample of a non-uniform distribution
batch_inds = np.random.choice(
np.arange(self.n_traj),
size=batch_size,
replace=True,
p=self.p_sample, # reweights so we sample according to timesteps
)
# a batch of dataset features
s, a, r, d, rtg, timesteps, mask = [], [], [], [], [], [], []
for ind in batch_inds:
# for feature in features:
feature = self.dataset[int(ind)]
si = random.randint(0, len(feature["rewards"]) - 1)
# get sequences from dataset
s.append(np.array(feature["observations"][si : si + self.max_len]).reshape(1, -1, self.state_dim))
a.append(np.array(feature["actions"][si : si + self.max_len]).reshape(1, -1, self.act_dim))
r.append(np.array(feature["rewards"][si : si + self.max_len]).reshape(1, -1, 1))
d.append(np.array(feature["dones"][si : si + self.max_len]).reshape(1, -1))
timesteps.append(np.arange(si, si + s[-1].shape[1]).reshape(1, -1))
timesteps[-1][timesteps[-1] >= self.max_ep_len] = self.max_ep_len - 1 # padding cutoff
rtg.append(
self._discount_cumsum(np.array(feature["rewards"][si:]), gamma=1.0)[
: s[-1].shape[1] # TODO check the +1 removed here
].reshape(1, -1, 1)
)
if rtg[-1].shape[1] < s[-1].shape[1]:
print("if true")
rtg[-1] = np.concatenate([rtg[-1], np.zeros((1, 1, 1))], axis=1)
# padding and state + reward normalization
tlen = s[-1].shape[1]
s[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, self.state_dim)), s[-1]], axis=1)
s[-1] = (s[-1] - self.state_mean) / self.state_std
a[-1] = np.concatenate(
[np.ones((1, self.max_len - tlen, self.act_dim)) * -10.0, a[-1]],
axis=1,
)
r[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), r[-1]], axis=1)
d[-1] = np.concatenate([np.ones((1, self.max_len - tlen)) * 2, d[-1]], axis=1)
rtg[-1] = np.concatenate([np.zeros((1, self.max_len - tlen, 1)), rtg[-1]], axis=1) / self.scale
timesteps[-1] = np.concatenate([np.zeros((1, self.max_len - tlen)), timesteps[-1]], axis=1)
mask.append(np.concatenate([np.zeros((1, self.max_len - tlen)), np.ones((1, tlen))], axis=1))
s = torch.from_numpy(np.concatenate(s, axis=0)).float()
a = torch.from_numpy(np.concatenate(a, axis=0)).float()
r = torch.from_numpy(np.concatenate(r, axis=0)).float()
d = torch.from_numpy(np.concatenate(d, axis=0))
rtg = torch.from_numpy(np.concatenate(rtg, axis=0)).float()
timesteps = torch.from_numpy(np.concatenate(timesteps, axis=0)).long()
mask = torch.from_numpy(np.concatenate(mask, axis=0)).float()
return {
"states": s,
"actions": a,
"rewards": r,
"returns_to_go": rtg,
"timesteps": timesteps,
"attention_mask": mask,
}
class TrainableDT(DecisionTransformerModel):
def __init__(self, config):
super().__init__(config)
def forward(self, **kwargs):
output = super().forward(**kwargs)
# add the DT loss
action_preds = output[1]
action_targets = kwargs["actions"]
attention_mask = kwargs["attention_mask"]
act_dim = action_preds.shape[2]
action_preds = action_preds.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]
action_targets = action_targets.reshape(-1, act_dim)[attention_mask.reshape(-1) > 0]
loss = torch.mean((action_preds - action_targets) ** 2)
return {"loss": loss}
def original_forward(self, **kwargs):
return super().forward(**kwargs)
def cal_metric(p):
return {'Fake metric': 50}
collator = DecisionTransformerGymDataCollator(dataset["train"])
config = DecisionTransformerConfig(state_dim=collator.state_dim, act_dim=collator.act_dim)
model = TrainableDT(config)
training_args = TrainingArguments(
output_dir="output/",
remove_unused_columns=False,
num_train_epochs=2,
per_device_train_batch_size=64,
per_device_eval_batch_size=64,
evaluation_strategy="epoch",
save_strategy="epoch",
logging_strategy="epoch",
learning_rate=1e-4,
weight_decay=1e-4,
warmup_ratio=0.1,
optim="adamw_torch",
max_grad_norm=0.25,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset=dataset["train"],
compute_metrics=cal_metric,
data_collator=collator,
)
trainer.train()
```
### Expected behavior
The code was slightly modified from official code as I added evaluation settings as well as my own metric function in the arguments of Trainer.
If I used the old version of transformers which is 4.22.0, then the output after the evaluation contains both eval_loss and my own metrics:

On the other hand, if I used the newest version 4.26.1, then no eval_loss and my own metric were outputed:

| 02-10-2023 02:14:49 | 02-10-2023 02:14:49 | This results from a change in the way we detect wether there are labels in the inputs passed (which was wrong before and caused some bugs with other models). It looks like they are named `"actions"` in your case, so just pass along `label_names=["actions"]` to your TrainingArguments and it should work.<|||||>Thanks a lot for helping me out! So now the labels need to be detected before metrics are calculated in evaluation stage. |
transformers | 21,552 | closed | Update albert.mdx | Co-authored-by: JuheonChu <[email protected]>
Co-authored-by: mollerup23 <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-10-2023 01:14:19 | 02-10-2023 01:14:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21552). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,557 | closed | Transformers `pipeline` documentation is missing text2text_generation pipeline | **Is your feature request related to a problem? Please describe.**
I had the question, "How can I use the new Seq2Seq model I've trained in a Transformers pipeline?" There's pipeline tasks for summarization, generation, etc, but nothing listed on this page for how to use the model I've trained. (I'm training a T5 model.) https://huggingface.co/docs/transformers/v4.26.1/en/quicktour#pipeline
I discovered by digging into the source code that the pipeline I want is [text2text_generation](https://github.com/huggingface/transformers/blob/ea55bd86b9a452c87c5383afc707ab7d710a3043/src/transformers/pipelines/__init__.py#L272-L278).
**Describe the solution you'd like**
It'd be great if this pipeline were listed on https://huggingface.co/docs/transformers/v4.26.1/en/quicktour#pipeline
Is it intentionally omitted or is that just an oversight?
**Describe alternatives you've considered**
Digging into the source code solved my problem.
**Additional context**
NA
| 02-10-2023 00:36:56 | 02-10-2023 00:36:56 | cc @stevhliu and @MKhalusova <|||||>Thanks for the feedback! I left the `text2textgeneration` pipeline out intentionally because I didn't want to make the table in the Quicktour super long with all the supported pipelines (I think we have 25 pipelines now!). I intended for it to give you a quick idea of some of the example tasks you can do with the pipeline, and if you're interested in seeing all the supported tasks, you can check out the [Pipeline API Reference](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines).
I think there are two things we can do to improve:
1. List all the pipelines in the table for completeness.
2. Make it super clear this isn't a comprehensive list of all the pipelines; for that, you should head over to the Pipeline API docs.<|||||>Thanks @stevhliu ! Useful to know that this was an intentional decision and that all the available pipelines are under the API reference.
If you went with (1), that might help with completeness, but could be hard to keep up-to-date, since documentarians would need to remember to update.
For the reason above, I like (2) more than (1). However, I would caveat it with the feedback that I think there's already too many items on the table currently. If you move forward with (2) by making it clear that more exist, I might recommend removing a few of them to trim it down.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closed in #21607. |
transformers | 21,551 | closed | Add _mp_fn to run_mae.py for XLA testing | Testing MAE ViT on TPU https://github.com/GoogleCloudPlatform/ml-testing-accelerators/pull/771
Based on update required to `run_glue.py` in https://github.com/huggingface/transformers/pull/4146 | 02-09-2023 22:04:20 | 02-09-2023 22:04:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,550 | closed | No metric log calculation and output after using newest version. | Hello,
I met a problem that with the same code, the Trainer in huggingface was not calculating metrics using the compute_metric function and outputed the metric logs.
When I using the older version (4.22.0) Trainer the output is like:

However with new version of Trainer (4.26.1), there're only few outputs:

No loss and metrics define by my own function. It's really confusing and I don't konw how to solve it. Please help. | 02-09-2023 22:00:23 | 02-09-2023 22:00:23 | We can't magically help you without knowing the code you are running. Please follow the issue template.<|||||>Write in template in [new issue](https://github.com/huggingface/transformers/issues/21553). |
transformers | 21,549 | closed | SageMaker Sharded Data Parallel Support for Trainer | # What does this PR do?
This PR adds support for SageMaker Sharded Data Parallel with SMP version >= 1.15.
We mainly follow Deepspeed's checkpointing logic in our integration.
When sharded data parallel is enabled, we have special checkpointing logic.
We do not save the full model by default (with `save_model`) as it is an expensive synchronization action like Deepspeed. Instead, the user can enable full model saving by adding an environment variable called `HF_TRAINER_SMP_SDP_SAVE_FULL_MODEL` in their user script.
SageMaker Model Parallel saves these SDP partial checkpoints in a folder with a "_partial" appended to the input tag.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-09-2023 21:04:13 | 02-09-2023 21:04:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21549). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,548 | closed | Adding type hints to call() functions in this file | Co-authored by @katiele47
# What does this PR do?
Added type hints to the TFMarian model by adding types to Marian model call inputs. We added these types to each call() function according to the 'MARIAN_INPUTS_DOCSTRING' documentation contained within this file.
Fixes #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Issue #16059](https://github.com/huggingface/transformers/issues/16059)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
Tagging @Rocketknight1 (author of the issue)
| 02-09-2023 19:15:28 | 02-09-2023 19:15:28 | Hi @mollerup23! This looks good but there are a few issues mostly caused by your fork of transformers being a little out of date. Can you try [syncing your main branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork) and then rebasing your PR branch onto main, followed by a force push? If you need me to supply specific git commands for you, let me know! <|||||>> Hi @mollerup23! This looks good but there are a few issues mostly caused by your fork of transformers being a little out of date. Can you try [syncing your main branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork) and then rebasing your PR branch onto main, followed by a force push? If you need me to supply specific git commands for you, let me know!
Hi @Rocketknight1,
Thanks for the review/response! I will work on syncing up the PR. Also, our PR failed a lot of the CircleCI tests related to using `make` to format/lint the code properly. However, whenever it says I don't have the proper Jupyter dependencies installed to run these commands. Do you know how I can fix this?<|||||>Hi @mollerup23, if you can do the rebase I can run any fixup commands needed for you!<|||||>> Hi @mollerup23, if you can do the rebase I can run any fixup commands needed for you!
OK, sounds good. Will work on that ASAP.<|||||>@Rocketknight1, I think I was able to force push the synced changes. Are you able to confirm this on your end?<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,547 | closed | Added with torch.no_grad() to XLM-Roberta integration test | # What does this PR do?
Fixes #14642
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @LysandreJik | 02-09-2023 19:12:40 | 02-09-2023 19:12:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,546 | closed | Fix inclusion of non py files in package | # What does this PR do?
This PR fixes the inclusion of non py files in the transformer package. The setup was missing `include_package_data=True`. | 02-09-2023 19:12:20 | 02-09-2023 19:12:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21546). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,545 | closed | Generate: baamboozle TF export | # What does this PR do?
WIP
cc @mfuntowicz | 02-09-2023 18:43:30 | 02-09-2023 18:43:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21545). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,544 | closed | Added with torch.no_grad() to Camembert integration test | # What does this PR do?
Fixes #14642
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh Please let me know if there are any problems! | 02-09-2023 18:42:00 | 02-09-2023 18:42:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,543 | closed | Remove more unused attributes in config classes | # What does this PR do?
β οΈ Need to ping a few model importer to double check
Remove more unused attributes in config classes | 02-09-2023 18:04:03 | 02-09-2023 18:04:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,542 | closed | Fix from_pretrained API with config and state_dict | # What does this PR do?
The `from_pretrained` method documents its possible to pass along `pretrained_model_name_or_path=None` as long the user provides a `config` and a `state_dict`. This API was not tested and broke, this PR fixes that and adds a test.
Fixes #21526 | 02-09-2023 16:47:28 | 02-09-2023 16:47:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,541 | closed | Cannot import name 'BioGptModel' from 'transformers' | ### System Info
https://huggingface.co/docs/transformers/model_doc/biogpt
Cannot import name 'BioGptModel' from 'transformers'
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import BioGptModel, BioGptConfig
### Expected behavior
.. | 02-09-2023 15:45:13 | 02-09-2023 15:45:13 | Please follow the issue template. You likely need to upgrade your Transformers version (`pip install transformers --upgrade`) but we can't know without more information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,540 | closed | [Don't merge] Check workflow on fork 2 | # What does this PR do?
[Don't merge] Check workflow on fork 2 | 02-09-2023 14:59:21 | 02-09-2023 14:59:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21540). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,539 | closed | [Don't merge] Check workflow on fork | # What does this PR do?
[Don't merge] Check workflow on fork | 02-09-2023 14:53:47 | 02-09-2023 14:53:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21539). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,538 | closed | Deepspeed Stage3 using trainer and base DONUT model results in RecursionError. | ### System Info
- Running on AzureML Standard_NC6S_V3 with curated environment: AzureML-ACPT-pytorch-1.12-py39-cuda11.6-gpu
- `transformers` version: 4.26.0
- Platform: Linux-5.0.0-1036-azure-x86_64-with-glibc2.31
- Python version: 3.9.15
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Through trainer
- Using distributed or parallel set-up in script?: Through deepspeed/trainer
### Who can help?
I am using a base DONUT model,
The error only happens with Deepspeed stage3: @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am fine-tuning a DONUT based model on an Azure Standard_NC6S_V3 (1 x V100 (16GB)) using AzureML. Below is a minimal example to reproduce the recursion error.
```python
# Train script
import transformers
from transformers import (
DonutProcessor,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
VisionEncoderDecoderModel,
)
from PIL import Image
import datasets
base_model = "naver-clova-ix/donut-base"
image_size = { "width": 680, "height": 960 }
def main():
# Main
training_args = Seq2SeqTrainingArguments(
output_dir='./output',
num_train_epochs=1,
per_device_train_batch_size=2,
fp16=True,
deepspeed='deepspeed_config.json',
)
model = VisionEncoderDecoderModel.from_pretrained(base_model)
processor = DonutProcessor.from_pretrained(base_model)
# Resize image size in model/processor
processor.image_processor.size = image_size
model.config.encoder.image_size = tuple(processor.image_processor.size.values())[::-1]
model.config.hidden_size = model.config.encoder.hidden_size # Deepspeed needs this fix
# Generate bogus dataset
image = Image.new('RGB', (image_size['width'], image_size['height']))
text = '{"great_key": "great_value"}'
N = 16
data = [{'image': image, 'text': text} for _ in range(N)]
dataset = datasets.Dataset.from_list(data)
# Tokenize bogus dataset
def tokenize(example, processor):
pixel_values = processor(
example["image"],
random_padding=True,
return_tensors="pt",
).pixel_values.squeeze()
input_ids = processor.tokenizer( # type: ignore
example["text"],
add_special_tokens=False,
max_length=512,
padding="max_length",
truncation=True,
return_tensors="pt",
)["input_ids"].squeeze(0)
labels = input_ids.clone()
return {
"pixel_values": pixel_values,
"labels": labels,
"target_sequence": example["text"],
}
input_dataset = dataset.map(
lambda x: tokenize(x, processor),
remove_columns=['image', 'text'],
)
# Train
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=input_dataset,
)
trainer.remove_callback(transformers.integrations.AzureMLCallback)
trainer.train()
if __name__ == "__main__":
main()
```
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"train_batch_size": "auto",
"fp16": {
"enabled": "auto"
}
}
```
Probably not relevant but here the submit job script.
```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
from azure.ai.ml import command
compute_name = ""
environment_name = ""
ml_client = MLClient.from_config(
credential=DefaultAzureCredential(),
path='/',
)
environment = ml_client.environments.get(environment_name, label="latest")
fail_job = command(
code='./fail_train',
command="transformers-cli env && deepspeed --num_gpus 1 failure_train_script.py",
compute=compute_name,
environment=environment,
)
job = ml_client.jobs.create_or_update(
fail_job,
experiment_name="testing",
)
```
### Expected behavior
When using Deepspeed stage2 all is working but for large images I get OOM on the V100 16GB GPU. Therefore, I want to try Deepspeed stage3 but this results in the maximum recursion error.
From what I have read, the recursion error is due to deepspeed's zero initialisation, however these bits are a bit hidden when using trainer and I am not sure where to look. I am more than happy to investigate but I definitely need some guidance (-:
I expect training to start with hopefully some memory savings such that I can train a DONUT based model on V100 or smaller GPU. | 02-09-2023 12:33:37 | 02-09-2023 12:33:37 | Hi @dennisbakhuis
In a bit we will move this to https://github.com/microsoft/DeepSpeed/issues as this is not the integration problem.
As I discovered this recently when trying to build a multi-modal model based on 2 pre-trained models you can only use `zero.Init` once. If you use it again it breaks (infinite recursion) in `deepspeed.initialize`.
p.s. I edited the OP to remove other maintainers since this ticket is mine ;)
But let's try to unravel it here first and I think I have a workaround for you as well.<|||||>Meanwhile the workaround I did is this: as one of the models was much smaller than the other I initialized the smaller one w/o `zero.Init` and the other normally w/ `zero.Init` and it worked. Is this a similar situation here, and the processor is much smaller than the cv model?
I could rig up a workaround for you. But your situation is different - you have 2 separate models. Let me think.
ok, `processor` is not a model, so it shouldn't even be called under `zero.Init` in the first place. this is interesting!<|||||>The minimal repro is just this:
```
from transformers import DonutProcessor, VisionEncoderDecoderModel
import torch
import deepspeed
from transformers.deepspeed import HfDeepSpeedConfig
ds_config = dict(train_batch_size=1, zero_optimization=dict(stage=3))
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
deepspeed_engine, *_ = deepspeed.initialize(model=model, config_params=ds_config)
```
```
$ deepspeed --num_gpus 1 test.py
Traceback (most recent call last):
File "test.py", line 13, in <module>
deepspeed_engine, *_ = deepspeed.initialize(model=model, config_params=ds_config)
File "/mnt/nvme0/code/github/00optimize/DeepSpeed-optim-grad-accessors/deepspeed/__init__.py", line 125, in initialize
engine = DeepSpeedEngine(args=args,
File "/mnt/nvme0/code/github/00optimize/DeepSpeed-optim-grad-accessors/deepspeed/runtime/zero/partition_parameters.py", line 350, in wrapper
if not hasattr(module, "_ds_child_entered"):
File "/mnt/nvme0/code/github/00optimize/DeepSpeed-optim-grad-accessors/deepspeed/runtime/engine.py", line 490, in __getattr__
[...]
if name in dir(self):
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2026, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/nvme0/code/github/00optimize/DeepSpeed-optim-grad-accessors/deepspeed/runtime/engine.py", line 490, in __getattr__
if name in dir(self):
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2026, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/nvme0/code/github/00optimize/DeepSpeed-optim-grad-accessors/deepspeed/runtime/engine.py", line 490, in __getattr__
if name in dir(self):
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2024, in __dir__
module_attrs = dir(self.__class__)
RecursionError: maximum recursion depth exceeded while calling a Python object
```
The initial thought that 2 `from_pretrained` caused it - isn't the case, the problem is somewhere in the `from_pretrained` of this model.<|||||>The cause proved to be 2 `from_config` calls each invoking `zero.Init` context internally.
https://github.com/huggingface/transformers/blob/97d3390fc8edb210fcf0aad6a079406b018655b9/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L191-L195
<|||||>BTW, do you have enough cpu memory to load this model?
In this case a temp hack would be very simple, just disable the `zero.Init` contexts directly:
```
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
index c9d304f25..c2e530275 100644
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -1085,7 +1085,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
if torch_dtype is not None:
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
- if is_deepspeed_zero3_enabled():
+ if 0: # is_deepspeed_zero3_enabled():
import deepspeed
logger.info("Detected DeepSpeed ZeRO-3: activating zero.init() for this model")
@@ -2453,7 +2453,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
# Instantiate model.
init_contexts = [no_init_weights(_enable=_fast_init)]
- if is_deepspeed_zero3_enabled():
+ if 0: #is_deepspeed_zero3_enabled():
import deepspeed
logger.info("Detected DeepSpeed ZeRO-3: activating zero.init() for this model")
```
This should unblock you. Let me know if it does.
The accelerate deepspeed integration has separated zero3 and `zero.Init`, which was a smart move, so with a single flag you can disable `zero.Init, while still using zero3. When designing the HF Trainer integration I made a wrong assumption that someone wanting to use zero3 would always want to use `zero.Init` but as you can see there are rare cases when it's the case.
Meanwhile I will try to reduce this to a simple test case that we can present to the Deepspeed team to make it work.<|||||>OK, I reduced the problem to this repro:
```
import torch
import deepspeed
ds_config = dict(train_batch_size=1, zero_optimization=dict(stage=3))
class MyModel(torch.nn.Module):
def __init__(self, m1):
super().__init__()
self.m1 = m1
with deepspeed.zero.Init(config_dict_or_path=ds_config):
with deepspeed.zero.Init(config_dict_or_path=ds_config):
m1 = torch.nn.Linear(1,1)
model = MyModel(m1, m1)
deepspeed_engine, *_ = deepspeed.initialize(model=model, config_params=ds_config)
```<|||||>OK, I filed the report here: https://github.com/microsoft/DeepSpeed/issues/2811
<|||||>Hi @stas00,
Thanks for the elaborate answers and way of thought.
Let me rephrase from what I understood: `deepspeed.zero.Init` should only be called once. This is something I have seen mentioned in other issues in the Deepspeed repo as well. As we have an encoder + decoder, we practically have two models, which each do a `deepspeed.zero.init` during the `.from_config` method.
What is unclear to me is who to "blame" (in a positive sense (-;). If we are only suppose to call `deepspeed.zero.init` once, something in transformers should be fixed, while if nested `deepspeed.zero.init` should be allowed (as in your minimal example), Deepspeed needs a fix.
Just thinking out loud.
I will try your suggested *hacky* fix and will report later.
<|||||>> deepspeed.zero.Init should only be called once
at the moment, yes
> What is unclear to me is who to "blame" (in a positive sense (-;). ...
If you read my bug report https://github.com/microsoft/DeepSpeed/issues/2811 it already asks your exact questions:
And there is a 2nd problem that will emerge if the first one is fixed, see: https://github.com/microsoft/DeepSpeed/issues/2812 - I discovered it some months back but also yesterday when I was hoping to give you a simpler hack - specifically in the diff I shared disabling zero.Init only for `from_config`. I have some hacky ideas to solve it, but not yet an elegant solution.
I will ponder meanwhile how we can fix this on the integration side. This should be totally doable, just need to find an elegant way of doing that.
Mind you, composed models is a new thing, so a new need calls for a new solution.<|||||>I can confirm that with the hacky solution as shown in https://github.com/huggingface/transformers/issues/21538#issuecomment-1425138938, the recursion error is gone. It took a bit longer as I had to patch the files from within a container on Azure, something I do not do every day.
Unfortunately you were also right that I still get an OOM on the 16GB V100 from Azure. I was hoping that with offloading parameters I would possibly fit. I will try to fiddle a bit with the Deepspeed parameters but I guess I have to use gradient accumulation.<|||||>Thank you for doing the experiment, Dennis. Glad to hear it worked.
The Deepspeed team are actively working on resolving these 2 issues: https://github.com/microsoft/DeepSpeed/issues/2811, https://github.com/microsoft/DeepSpeed/issues/2812 so hopefully we should have a working solution soon, which would require just updating your deepspeed version.<|||||>how about this one https://github.com/microsoft/DeepSpeed/issues/2637 . It seems the only option is disable zero.init with accelerate. <|||||>This issue is being addressed in:
- https://github.com/microsoft/DeepSpeed/issues/2811
- https://github.com/microsoft/DeepSpeed/issues/2812
which I think should resolve the leak as well. The Deepspeed team are actively working on solving both. <|||||>Actually @tohtana has just created a PR that is supposed to fix both issues: https://github.com/microsoft/DeepSpeed/pull/2989
I will be able to try it probably tomorrow, but please go ahead and try it and send any yay/nay feedback to that PR if you do. Thank you!<|||||>> Actually @tohtana has just created a PR that is supposed to fix both issues: [microsoft/DeepSpeed#2989](https://github.com/microsoft/DeepSpeed/pull/2989)
>
> I will be able to try it probably tomorrow, but please go ahead and try it and send any yay/nay feedback to that PR if you do. Thank you!
I will, thanks. if i get the result, i will update result here <|||||>https://github.com/microsoft/DeepSpeed/issues/2637 still exists with https://github.com/microsoft/DeepSpeed/pull/2989
my setting is here https://github.com/huggingface/peft/issues/161<|||||>@dennisbakhuis, please try with this PR https://github.com/microsoft/DeepSpeed/pull/2989 - I tested and your repro now works.
you will also need to add to `ds_config.json` top-level (this is an unrelated change)
```
"zero_force_ds_cpu_optimizer": false,
```
Please let me know if it works for you.<|||||>Thank you for testing https://github.com/microsoft/DeepSpeed/pull/2989, @dumpmemory - sorry to hear it didn't resolve the leak - perhaps file a new issue in DS, as the one I posted I couldn't provide a repro script as it was part of the complex system, but perhaps you could. That should help a lot with solving it.<|||||>> Thank you for testing [microsoft/DeepSpeed#2989](https://github.com/microsoft/DeepSpeed/pull/2989), @dumpmemory - sorry to hear it didn't resolve the leak - perhaps file a new issue in DS, as the one I posted I couldn't provide a repro script as it was part of the complex system, but perhaps you could. That should help a lot with solving it.
thanks for your response. i had posted an issue there. thanks again. <|||||>https://github.com/microsoft/DeepSpeed/pull/2989 has been merged, so closing this Issue.
To verify the solution please use deepspeed@master until the next release (0.9.1? is made) |
transformers | 21,537 | closed | Tag tests as slow β | # What does this PR do?
I've tagged as slow/made speedup changes to tests that our push CI was complained about for the past few days β | 02-09-2023 12:25:08 | 02-09-2023 12:25:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,536 | closed | warnings when converting WhisperModel to onnx format | ### System Info
transformers 4.25.1
torch 1.13.1+cu116
platform Windows 11
### Who can help?
@sanchit-gandhi
### Information
When converting WhisperDecoder to onnx format. onnx.export will generate few warnings:
```
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:753: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:74: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:79: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if past_key_values_length > 0:
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:200: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:207: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
```
### Reproduction
convert WhisperDecoder model to onnx format using `onnx.export`
### Expected behavior
No warning. | 02-09-2023 12:15:09 | 02-09-2023 12:15:09 | cc @mht-sharma @fxmarty
(this might be a more appropriate issue for the `optimum` repo: https://github.com/huggingface/optimum)<|||||>@ling0322 The ONNX export from transformers is deprecated and the ONNX export is now actively supported in Optimum.
The usage should be very similar, you can refer to the [documentation](https://huggingface.co/docs/optimum/v1.6.3/en/exporters/onnx/usage_guides/export_a_model) (version 1.6.3).
Feel free to open an issue in Optimum if you encounter an issue with the Whisper export!<|||||>FYI
Exporting via optimum results in same warnings<|||||>@frankiedrake Thank you for notifying. The only warning that is worth investigating is
```
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
d:\run\whisper\transformers\models\whisper\modeling_whisper.py:79: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
```
I'll have a look shortly. |
transformers | 21,535 | closed | Add BioGptForSequenceClassification | # What does this PR do?
Fixes #21530
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@NielsRogge
| 02-09-2023 11:41:55 | 02-09-2023 11:41:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21535). All of your documentation changes will be reflected on that endpoint.<|||||>Hi, it seems the CI didn't run properly. Could you push an empty commit to trigger it?<|||||>What Niels forgot to say is that there seems to be an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) and then push some commit to your PR?<|||||>> What Niels forgot to say is that there seems to be an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) and then push some commit to your PR?
I tried but it didn't work π
<|||||>Hi @GuillemGSubies. If the issue is still open, i would like to contribute it.
Thanks<|||||>@upjabir I don't understand why the pipelines fail, you have any idea? Otherwise you can open another PR and I'll close this one<|||||>@GuillemGSubies currently I don't have any idea about the issue. But let me try it, so I can get a clear picture<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,534 | closed | wave2vec2 with feat_extract_norm == "group" normalizes over channels not token which causes issue when padding | ### System Info
Any GPU/CPU system
### Who can help?
_No response_
### Information
While there is support for padding, I believe it affects the accuracy (even when running with/without padding on just one sample). My central claim is that the first [normalization](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L336) is done over the channels and not the tokens, and thus the mean and variance values change when the sequence is padded.
I used code similar to the given [example](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.forward.example), where in the processor I used padding="max_length", truncation=True, max_length=1182340.
Note that with small enough padding, for instance max_length=100000, you won't see any issue.
Am I missing something? Can anyone help?
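For illustration, here is a minimal sketch of the effect (a toy conv + per-channel GroupNorm over time, not the actual Wav2Vec2 feature extractor):
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv1d(1, 8, kernel_size=10, stride=5)                  # stand-in for the first conv layer
norm = nn.GroupNorm(num_groups=8, num_channels=8, affine=False)   # one group per channel, i.e. per-channel stats over time

x = torch.randn(1, 1, 400)
x_padded = torch.cat([x, torch.zeros(1, 1, 400)], dim=-1)         # zero-pad the raw waveform

out = norm(conv(x))
out_padded = norm(conv(x_padded))[..., : out.shape[-1]]           # same frames, different normalization statistics
print(torch.allclose(out, out_padded, atol=1e-4))                 # False: the padded frames shift each channel's mean/var
```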
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the example code with and without padding
# With padding:
```python
from transformers import AutoProcessor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt", padding="max_length", truncation=True,max_length=1182340)
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```
**output: ISTE COIS THE COL OF T I CLASES AND WE RLITO O HIS GOSPLE**
# Without padding:
```python
from transformers import AutoProcessor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```
**output: 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'**
### Expected behavior
I expect that adding won't affect the output of the model | 02-09-2023 11:34:30 | 02-09-2023 11:34:30 | cc @sanchit-gandhi <|||||>Hey @itayhubara!
Thanks for the nice write-up π€ Edited your OP to format the reproducible code as a markdown codesnippet!
The phenomenon you're describing is due to a bug in the original Wav2Vec2 implementation, where layer-norm was applied **after** the attention layer (instead of **before** as it should have been). This was a bug that was in the original `fairseq` codebase for the wav2vec2-base model. The model was trained and release with this bug, and thus it was copied over to `transformers` when the model was added.
What's interesting is that the wav2vec2-large model applies layer-norm in the correct way! Layer-norm is applied **before** the attention layer.
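Schematically, the two placements look like this (a toy sketch, not the actual Wav2Vec2 encoder blocks):
```python
import torch.nn as nn

class PostLNBlock(nn.Module):  # wav2vec2-base style: layer-norm after the residual add
    def __init__(self, d, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, x):
        return self.norm(x + self.attn(x, x, x)[0])

class PreLNBlock(nn.Module):  # wav2vec2-large style: layer-norm before the attention
    def __init__(self, d, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, x):
        h = self.norm(x)
        return x + self.attn(h, h, h)[0]
```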
We differentiate between applying layer-norm before/after the attention layer with the config parameter [`do_stable_layer_norm`](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2Config.do_stable_layer_norm).
If `False` (like the base model), we apply layer-norm after the attention layer: [wav2vec2-base-960h/config.json#L43](https://huggingface.co/facebook/wav2vec2-base-960h/blob/22aad52d435eb6dbaf354bdad9b0da84ce7d6156/config.json#L43)
If `True` (like the large model), we apply layer-norm correctly before the attention layer: [wav2vec2-large-960h-lv60-self/config.json#L43](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/54074b1c16f4de6a5ad59affb4caa8f2ea03a119/config.json#L43)
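A quick way to check the flag for the two checkpoints discussed here (assumes network access to the Hub):
```python
from transformers import Wav2Vec2Config

print(Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base-960h").do_stable_layer_norm)             # False
print(Wav2Vec2Config.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").do_stable_layer_norm)  # True
```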
Running the code snippet with the large model (which has correct layer-norm) and placing the model on the GPU (if available) gives the correct output, even with extreme padding:
```python
from transformers import AutoProcessor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt",padding="max_length", truncation=True,max_length=1182340)
inputs = {key: value.to(device) for key, value in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```
**Print Output:**
```
MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL
```<|||||>Hi @sanchit-gandhi, thanks for the response!
Unfortunately, we are still seeing the accuracy issue with padding with the large model. Please find the code snippet below. We modified your code snippet for the dataset we are using, librispeech test clean.
WER=0.051 when an input sequence is padded with 60000 tokens, whereas WER=0.076 with no padding.
Can you please take a look at our example?
```
from transformers import AutoProcessor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
from jiwer import wer
dataset = load_dataset("librispeech_asr", "clean", split="test")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs=[571]
for i in inputs:
inputs_long_pad = processor(dataset[i]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt",padding="max_length", truncation=True,max_length=len(dataset[i]["audio"]["array"])+60000)
inputs_long_pad = {key: value.to(device) for key, value in inputs_long_pad.items()}
actual=dataset[i]['text']
with torch.no_grad():
logits_long_pad = model(**inputs_long_pad).logits
predicted_ids_long_pad = torch.argmax(logits_long_pad, dim=-1)
transcription_long_pad = processor.batch_decode(predicted_ids_long_pad)
wer_long_pad = wer(actual,transcription_long_pad[0])
inputs_short_pad = processor(dataset[i]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt",padding="max_length", truncation=True,max_length=len(dataset[i]["audio"]["array"]))
inputs_short_pad = {key: value.to(device) for key, value in inputs_short_pad.items()}
with torch.no_grad():
logits_short_pad = model(**inputs_short_pad).logits
predicted_ids_short_pad = torch.argmax(logits_short_pad, dim=-1)
transcription_short_pad = processor.batch_decode(predicted_ids_short_pad)
wer_short_pad = wer(actual,transcription_short_pad[0])
print("long pad: ", transcription_long_pad[0], ", long pad WER: ", wer_long_pad)
print("no pad: ", transcription_short_pad[0], ", no pad WER: ", wer_short_pad)
print("ground truth: ", actual)
```
Outputs from the run:
```
long pad: FOR MANY THEN THIS BOOK HAS BEEN A SOURCE OF FASCINATION SURELY ONE OF THE MOST INFLUENTIAL NOVELS EVER WRITTEN AN INSPIRATION FOR SUCH SCIENTISTS AND DISCOVERERS AS ENGINEER SIMON LAKE OCEANOGRAPHER WILLIAM BB POLAR TRAVELLER SIR ERNEST SHACKLETON , long pad WER: 0.05128205128205128
no pad: FOR MANY THEN THIS BOOK HAS BEEN A SOURCE OF FASCINATION SURELY ONE OF THE MOST INFLUENTIAL NOVELS EVER WRITTEN AN INSPIRATION FOR SUCH SCIENTISTS AND DISCOVERERS AS ENGINEER SIMON LAKE OCEANOGRAPHER WILLIAM B B POLAR TRAVELLER SIR ERNEST SHACKLETON , no pad WER: 0.07692307692307693
ground truth: FOR MANY THEN THIS BOOK HAS BEEN A SOURCE OF FASCINATION SURELY ONE OF THE MOST INFLUENTIAL NOVELS EVER WRITTEN AN INSPIRATION FOR SUCH SCIENTISTS AND DISCOVERERS AS ENGINEER SIMON LAKE OCEANOGRAPHER WILLIAM BEEBE POLAR TRAVELER SIR ERNEST SHACKLETON
```<|||||>Hey @schoi-habana!
I think what we're seeing in this particular example is the effect of numerical precision on our model outputs. The padding mask is not perfect: we set the attention values to a very large negative number for the padded inputs, however they are not entirely masked to minus infinity (due to numerical precision constraints).
Essentially, we set the attention mask for padded inputs to the most negative value permitted by our dtype:
https://github.com/huggingface/transformers/blob/a8eb4f79f946c5785f0e91b356ce328248916a05/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L763
So in our attention computation, the padded attention values are masked **as much as possible**. This is as good as we can get for padded inputs. Here, we are bounded by numerical precision, and can't go down further. The workaround would be to perform un-batched inference, but here you pay the penalty of slower inference time.
We can see that the transcription is by-and-large correct, we've just got an extra space in a name:
* No pad: `BB`
* Pad: `B B`
* Ground truth: `BEEBE`
The extra space is giving an additional insertion error for the padding case. But we can see that the transcription is pretty much identical in all other aspects.
If we evaluate the model on the full LibriSpeech test-clean corpus, we find that the results are the same to within numerical precision:
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest", sampling_rate=16000)
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER no pad:", wer(result["text"], result["transcription"]))
def map_to_pred_with_pad(batch):
audio = batch["audio"]["array"]
max_length = len(audio) + 60000
inputs = processor(audio, return_tensors="pt", padding="max_length", max_length=max_length, sampling_rate=16000)
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
batch["transcription"] = transcription
return batch
pad_result = librispeech_eval.map(map_to_pred_with_pad, remove_columns=["audio"])
print("WER with pad:", wer(pad_result["text"], pad_result["transcription"]))
```
**Print Output:**
```
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2620/2620 [01:53<00:00, 23.12ex/s]
WER no pad: 0.018620663420572125
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2620/2620 [02:31<00:00, 17.29ex/s]
WER with pad: 0.018620663420572125
```<|||||>@sanchit-gandhi thanks we were able to reproduce the results with the wav2vec2-large model. Any plan to fix the bug in the wav2vec2-base model and publish the trained model?<|||||>Hey @schoi-habana! Very glad to hear that! Unfortunately, the bug in the wav2vec2-base model in the HuggingFace code was somewhat deliberate and is not something we can really change. What we're trying to provide with Transformers is an easy-to-use codebase that's built on-top of the official weights/code. Transformers seeks to match the _official_ Facebook implementation of Wav2Vec2 as closely as possible. Since the official weights have the bug, our easy-to-use codebase also has to have the bug, such that our implementation gets the same results as the official one. So we can't really change this for the base model! If Facebook release a variant of the base model that has the bug fixed, you can be sure we'll also adapt the code to accommodate and host the weights on the HF Hub! But this seems quite unlikely.
The padding situation you've described seems very extreme! Is it really affecting your results quite drastically? From the codesnippet you shared it seems to be quite localised to one/two samples in the dataset no?
I think at this point we need to view it as a flaw with the model rather than a flaw with the codebase! I guess the options are either:
1. Group your audio samples such that they have similar lengths and minimise padding (see https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.group_by_length)
2. Use the large checkpoint since it doesn't have the padding bug
3. Acknowledge that the base checkpoint has the flaw with padding and accept the small degradation to WER! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving this closed since the 'bug' is baked into the official implementation and thus propagated onto the π€ Transformers implementation. |
transformers | 21,533 | closed | [WIP] Not a real PR, just gauging things on the CI. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-09-2023 10:51:34 | 02-09-2023 10:51:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,532 | closed | Add FocalNet | # What does this PR do?
This PR adds FocalNet to the library.
To do:
- [x] remove print_values
- [x] Upload checkpoints
- [x] Update model cards
- [x] Update integration tests
- [x] Transfer checkpoints | 02-09-2023 10:27:25 | 02-09-2023 10:27:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>CC @NielsRogge
I cleaned up the config dependent arguments a bit, uploaded the models and edited the model cards. @amyeroberts @sgugger could you take another look?
FocalNet is the backbone of X-Decoder so it'd be great it to merge the PR soon. |
transformers | 21,531 | closed | Fix ClearML Integration to run in ClearML pipelines and external Tasks. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi! We've had reports from users of ClearML that the integration doesn't work when running inside our pipelines module. This is because a ClearML task was already running with the pipelinecontroller and so the integration would brick on this.
This is a simple fix that looks if an existing task is already found, if so, it reports to that task, if not, it creates a new one.
This fix will also allow users to run a Task.init command themselves and connect their own configurations (which can later be automatically overridden by ClearML automation).
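A hedged sketch of the pattern described above (names and values are illustrative, not the exact code in this diff):
```python
from clearml import Task

task = Task.current_task()  # reuse the task started by a pipeline controller or by the user's own Task.init
if task is None:
    task = Task.init(project_name="huggingface", task_name="trainer-run")
task.connect({"learning_rate": 5e-5})  # connected values can later be overridden by ClearML automation
```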
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
Pinging @sgugger on this according to the list of people responsible for the different modules :) | 02-09-2023 09:30:43 | 02-09-2023 09:30:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,530 | closed | BioGPTForSequenceClassification | ### Feature request
It would be nice to have this available.
### Motivation
I'm studying some biomedical classification datasets and I would like to try BioGPT with them.
### Your contribution
I could send a PR if you want. I guess it shouldn't be too hard, I could look at another model trained with Casual LM and implement `BioGPTForSequenceClassification` like that one, am I right? | 02-09-2023 07:59:56 | 02-09-2023 07:59:56 | Sure, see this PR as an example: https://github.com/huggingface/transformers/pull/18123<|||||>Ty so much, I'll give it a try.<|||||>@GuillemGSubies There has been no activity on your PR #21535 since a while. I would like to fix the issue and complete the PR. Should I proceed? Are you still working on this issue? <|||||>You can take it, I'm not working on it right now<|||||>Hello @awinml , were you able to resolve this issue? If not I'd be interested in working on this problem.<|||||>@jprivera44 Yeah, I am still working on it. Most of the work is done. There's just one test left to be fixed. I will mostly fix it up by the end of this week.<|||||>@awinml are u still working on this issue??
<|||||>@sushmanthreddy As mentioned in https://github.com/huggingface/transformers/issues/21530#issuecomment-1504765529, I am still working on the issue. There is only one failing test left to be fixed. The PR is still under review, hence the inactivity. |
transformers | 21,529 | closed | Fix missing unfinished_sequences | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/21461#issuecomment-1423684619
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante @ydshieh
| 02-09-2023 07:33:47 | 02-09-2023 07:33:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @tokestermw. I confirmed the 2 failed doc-tests pass now with the changes in this PR β€οΈ .
I will leave @gante to double check the logic. |
transformers | 21,528 | closed | fix typo in run_speech_recognition_ctc.py | There should be `# limitations under the License` line at the end of the documentation section.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-09-2023 07:33:03 | 02-09-2023 07:33:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,527 | closed | Fix stuff related to the causal_mask in CodeGen. | 1. Line 613, `_keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias", r"h\.\d+\.attn\.bias"]` => `_keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.causal_mask"]` to load correctly from CodeGen checkpoint without `causal_mask`.
2. Line 152, `causal_mask = self.causal_mask[:, :, key_length - query_length : key_length, :key_length]` => `causal_mask = self.causal_mask[:, :, key_length - query_length : key_length, :key_length].bool()` to avoid a user warning like `UserWarning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead.`.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-09-2023 05:21:21 | 02-09-2023 05:21:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker and @younesbelkada <|||||>Following up on my comment, here is a script I ran:
```python
import torch
import torch.nn as nn
class DummyModelUint(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
self.register_buffer("attention_mask", torch.ones(10, 10, dtype=torch.uint8))
def forward(self, x):
return self.linear(x)
class DummyModelBool(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
self.register_buffer("attention_mask", torch.ones(10, 10, dtype=torch.bool))
def forward(self, x):
return self.linear(x)
model_uint8 = DummyModelUint()
state_dict = model_uint8.state_dict()
model_bool = DummyModelBool()
model_bool.load_state_dict(state_dict)
print(model_bool.attention_mask.dtype)
>>> torch.bool
```
So even if you save `uint8` buffers on the state dict, if you load it through a model that has `torch.bool` buffer it will overwrite it in `torch.bool`. So I don't think the first fix is needed if we merge #21384 !
However your second fix might make sense, just need more clarification ;) <|||||>@younesbelkada
Thanks for your review.
1. Removing `[r"h\.\d+\.attn\.masked_bias", r"h\.\d+\.attn\.bias"]` basically due to there are not `masked_bias` or `bias` in `attn`. I conjecture the reason why they leave these two terms here is that they copy part of the code from GPTJ (https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py#L723).
2. Introducing `.bool()` due to it will raise a warning when later execute the `torch.where` (https://github.com/huggingface/transformers/blob/main/src/transformers/models/codegen/modeling_codegen.py#L165). However, it seems that this will be addressed in another PR.
<|||||>2. The causal mask should not be a `bool`, no? I might not be looking at the correct line in the code, but the only bool is going to be the attention mask that we feed the network, which is then used to create the causal mask by multiplying with `torch.finfo(...).min` (so something like -1e-9).<|||||>I agree that we should leave fix 2 to PR https://github.com/huggingface/transformers/pull/21384.
transformers | 21,526 | closed | transformers 4.26.0: Error of `from_pretrained(pretrained_model_name_or_path=None, ...)` | ### System Info
Env:
```shell
ubuntu 20.04
gcc --version 9.4.0
python 3.8.16
torch 1.7.1+cu110
torch-cluster 1.5.9
torch-geometric 2.0.4
torch-scatter 2.0.7
torch-sparse 0.6.9
torch-spline-conv 1.2.1
torchvision 0.8.2+cu110
transformers 4.26.0
```
### Who can help?
@gante @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, when I run the following code with transformers 4.26.0, the program crashes with error:
```python
model_class = ... # self- definition model class
model = model_class.from_pretrained(
pretrained_model_name_or_path=None,
config=model_config,
state_dict=checkpoint
)
>>>
model = model_class.from_pretrained(
File "/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2478, in from_pretrained
) = cls._load_pretrained_model(
File "/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2718, in _load_pretrained_model
folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
TypeError: 'NoneType' object is not subscriptable
python-BaseException
```
### Issues:
```python
# In /lib/python3.8/site-packages/transformers/modeling_utils.py, L2111 (from_pretrained):
def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs):
    ...
    if pretrained_model_name_or_path is not None:
        ...
    else:
        resolved_archive_file = None

# In /lib/python3.8/site-packages/transformers/modeling_utils.py, L2718 (_load_pretrained_model, signature abridged):
def _load_pretrained_model():
    ...
    folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
```
As said in [from_pretrained API](https://huggingface.co/docs/transformers/main_classes/model#transformers.PreTrainedModel.from_pretrained.pretrained_model_name_or_path), the param `pretrained_model_name_or_path` can be None if you are both providing the configuration and state dictionary (resp. with keyword arguments `config` and `state_dict`).
### Solutions
Besides, when I run the same code with `transformers 4.12.0 + python 3.6`, the program runs fine.
For transformers 4.26.0, the above issue can be solved by adding the following check to https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/modeling_utils.py#L2718
```python
if resolved_archive_file is not None:
folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
``` | 02-09-2023 05:17:33 | 02-09-2023 05:17:33 | The fix might be a bit more involved to make sure it doesn't break anything else, but this looks like the essence of it. A test should also be added for this API to make sure it doesn't break again. Will try to have a look later today or tomorrow/<|||||>Thank you for your efforts!
I have modified the code for `transformers 4.26.0` and tested it on my custom model. There were no further errors and the program didn't break anything else. |
transformers | 21,525 | closed | Generate: make TF `.generate()` signature == PT `.generate()` signature | # What does this PR do?
TF generation test addition PR 3 (out of ???).
In an effort to move generation integration tests to be framework-agnostic, I'll be adding low-hanging fruit to TF. This PR adds framework-agnostic tests for the signature of `.generate()`, which means that the TF signature had to be updated π
Please note that this change is backward compatible -- regardless of whether the user was passing `.generate(input_ids)`, `.generate(input_ids=input_ids)`, or (e.g.) `.generate(pixel_values=pixel_values)`, no change on their end is needed. | 02-08-2023 20:38:34 | 02-08-2023 20:38:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,524 | closed | [from_pretrained] extend `torch_dtype="auto"` to look up `config.torch_dtype` first, expand docs | 1. As discussed at https://huggingface.slack.com/archives/C01N44FJDHT/p1675880091973589 this PR proposes to expand the `from_pretrained` documentation of how `torch_dtype`'s different values impact the model loading.
2. next `torch_dtype="auto"` was expanded to first look up `config.torch_dtype` and if it's not there try to derive it from weights.
3. fixed a bug that was polluting the config object when `torch_dtype="auto"`
4. Tests were expanded to test the new functionality and the bug.
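A quick usage sketch of the lookup order in point 2 (the checkpoint is just an example):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2", torch_dtype="auto")
print(model.dtype)  # config.torch_dtype when the config sets it, otherwise the dtype of the checkpoint weights
```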
| 02-08-2023 19:30:51 | 02-08-2023 19:30:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,523 | closed | ConvBERT self-attention throws errors whenever head_ratio is not 2 | ### System Info
Here is what the command says, but multiple responses are incorrect
- `transformers` version: 4.25.1
- Platform: macOS-10.16-x86_64-i386-64bit (Really 13.1 M1)
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed (It is installed)
- JaxLib version: not installed (It is installed)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @arthr
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import jax
import numpy as np
import torch
from transformers import ConvBertLayer, ConvBertConfig
from transformers.models.convbert.modeling_convbert import ConvBertSelfAttention
test_config = ConvBertConfig()
test_config.attention_probs_dropout_prob = 0.0
test_config.hidden_dropout_prob = 0.0
test_config.head_ratio = 4
torch_conv_attn = ConvBertSelfAttention(test_config)
input_embed = torch.tensor(np.array(jax.random.normal(jax.random.PRNGKey(42),(3,10,test_config.embedding_size))))
torch_conv_attn(input_embed)
#RuntimeError: shape '[3, 10, 768]' is invalid for input of size 11520
```
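As a back-of-the-envelope check of where the 11520 in the error comes from (plain arithmetic mirroring the numbers above, not library code):
```python
hidden_size, num_heads, head_ratio = 768, 12, 4  # ConvBertConfig defaults with head_ratio overridden
head_size = hidden_size // num_heads             # 64
new_num_heads = num_heads // head_ratio          # 3
per_branch = new_num_heads * head_size           # 192 from self-attention, 192 from the conv branch
concat_size = 2 * per_branch                     # 384 actually produced by the concatenation
print(3 * 10 * concat_size)                      # 11520, while the final view expects [3, 10, 768]
```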
### Expected behavior
I am testing out ConvBertSelfAttention for comparisons to a similar implementation in JAX. Head_ratio is allowed to be any integer that divides into num_heads and hidden_size, but I run into errors whenever it is set to anything other than 2.
I believe the problem stems from the fact that the embedding size is reduced by a factor of head_ratio, but then only multiplied by a factor of 2 in the final concatenation step. However, this seems to be the same behavior as in the original code https://github.com/yitu-opensource/ConvBert/blob/master/model/modeling.py, so I am not sure how ConvBERT is supposed to handle head_ratio != 2 in the first place. | 02-08-2023 18:06:58 | 02-08-2023 18:06:58 | cc @ArthurZucker <|||||>Thanks for reporting. Opening a PR to fix it! |
transformers | 21,522 | closed | Make `_LazyAutoMapping` a bit less lazy πͺ π₯ | # What does this PR do?
Current `_LazyAutoMapping` is a bit too lazy, and potentially to give silent failure/error. For example, tests being skipped by mistake. One explicit example is
https://github.com/huggingface/transformers/blob/c26b1503f22feb20d37588d4243bd444d297bc13/tests/test_pipeline_common.py#L86
(so far in a not-yet merged PR), which was previously
```python
if model_mapping is None or len(model_mapping) == 0:
```
and the tests were all skipped, and I am like π .
As a former mathematician, it's hard for me to accept length being 0 in this case π’ . I decide to make it less lazy!
## Results
### Before this PR
```python
from transformers import TF_MODEL_MAPPING
print(len(TF_MODEL_MAPPING)) # 0
```
### After this PR
```python
from transformers import TF_MODEL_MAPPING
print(len(TF_MODEL_MAPPING)) # 60
``` | 02-08-2023 18:06:45 | 02-08-2023 18:06:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failing tests are irrelevant. |
transformers | 21,521 | closed | lr_scheduler not updated when auto_find_batch_size set to True and batch_size decays | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Issue:
When setting `auto_find_batch_size=True` in `TrainingArguments`, if the batch_size decays because batches can't fit in memory, the learning_rate scheduler won't get updated.
In my case, I finetune bert-base-cased on a custom dataset using:
```
training_args = TrainingArguments(
per_device_train_batch_size=512,
auto_find_batch_size=True,
num_train_epochs=3
)
```
My batch_size decays three times, and the learning rate decays to zero before the end of the first training epoch.
# Trace:
Line 1162 of `trainer.py` is `if self.lr_scheduler is None:`. On its first call it evaluates to True, but when the batch_size decays the line is reached again and this time evaluates to False, which prevents the lr_scheduler from being updated on line 1163.
I think we could replace it by:
`if self.args.optimizers[1] is None:`
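For illustration, this is how a stale linear schedule ends up training with `lr == 0` once the real number of steps exceeds the `num_training_steps` it was built with (a toy sketch, not Trainer code):
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=100)
for _ in range(100):   # the step count the scheduler was built for
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # [0.0] -- any additional (real) training steps run with a zero learning rate
```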
### Expected behavior
If `max_steps` changes because the batch_size decays, the lr_scheduler should be updated to reflect this change.
Using the default linear lr_scheduler, I expect the learning rate to go from its initial value at the beginning of training to zero at the end of training. | 02-08-2023 17:04:38 | 02-08-2023 17:04:38 | cc @muellerzr <|||||>@muellerzr I would like to pick up this issue and fix it, Looking to write a failing testcase for this bug, Any pointers ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It's been a while @muellerzr , could you have a look?<|||||>@raghavanone @thomas-schillaci could you try building from main and seeing if that fixes the issue? I think https://github.com/huggingface/transformers/pull/24521 fixed this<|||||>I'm not sure this fixes the issue, I'm going to comment on the PR directly<|||||>Same here. I built it from main.<|||||>just ran into this issue today, can confirm that it exists<|||||>Hi all, try reinstalling from main, #24758 should have fixed it this time<|||||>Hello @muellerzr, it works for me, thank you for the fix! |
transformers | 21,520 | closed | T5EncoderModel Runtime Error: Scalar type Half but found Float | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Y (V100-DGXS; Driver Version: 525.85.12 ; CUDA Version: 12.0 )
- Using distributed or parallel set-up in script?: N
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Created a new conda env from python==3.8
2. Installed the latest dependencies via pip without version tags: `pip install numpy torch transformers accelerate tqdm sentencepiece`
3. Ran the following script:
```
import torch
from transformers import T5EncoderModel, T5Tokenizer
transformer_name = "Rostlab/prot_t5_xl_half_uniref50-enc"
prott5_dir = "checkpoints/prott5"
device = "cuda"
model = T5EncoderModel.from_pretrained(transformer_name, torch_dtype=torch.float16, cache_dir=prott5_dir, output_attentions=False)
model = model.to(device)
model = model.eval()
tokenizer = T5Tokenizer.from_pretrained(transformer_name, do_lower_case=False, cache_dir=prott5_dir)
token_encoding = tokenizer.batch_encode_plus(["SEQVENCE", "SEQQENCE"], add_special_tokens=True, padding='longest')
input_ids = torch.tensor(token_encoding['input_ids']).to(device)
attention_mask = torch.tensor(token_encoding['attention_mask']).to(device)
model(input_ids, attention_mask=attention_mask)
```
4. Changing `input_ids` and `attention_mask` to directly consume the dict's items (e.g. `input_ids = token_encoding['input_ids']`) leads to an `AttributeError`.
### Expected behavior
Expected: Successful execution. No output. Worked previously with CUDA 11.3
Returned:
```
Traceback (most recent call last):
File "test.py", line 21, in <module>
model(input_ids, attention_mask=attention_mask)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1858, in forward
encoder_outputs = self.encoder(
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1052, in forward
layer_outputs = layer_module(
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 736, in forward
hidden_states = self.layer[-1](hidden_states)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 333, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 292, in forward
hidden_states = self.wo(hidden_states)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
```
Alternate error when assigning values directly from dictionary object:
```
Traceback (most recent call last):
File "test.py", line 21, in <module>
model(input_ids, attention_mask=attention_mask)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1858, in forward
encoder_outputs = self.encoder(
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/cdallago/miniconda3/envs/ppih/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 943, in forward
input_shape = input_ids.size()
AttributeError: 'list' object has no attribute 'size'
```
Suspected: torch incompatibility. | 02-08-2023 16:58:39 | 02-08-2023 16:58:39 | Hi @sacdallago
This is an issue that has been fixed in https://github.com/huggingface/transformers/pull/21281 , if you try to run the script on the `main` branch it should work. `pip install --upgrade pip+https://github.com/huggingface/transformers.git`
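With that installed, a minimal half-precision inference sketch for this checkpoint (reusing the `model`, `tokenizer` and `device` objects from your reproduction) could look like:
```python
inputs = tokenizer(["SEQVENCE", "SEQQENCE"], add_special_tokens=True, padding="longest", return_tensors="pt").to(device)
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden), fp16
```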
Also I see that your inference script might be not optimized, please take a look at `flan-t5` model card on how to run an inference using `T5` models: https://huggingface.co/google/flan-t5-xxl (especially on how you deal with the input)<|||||>1. Thanks for the quick fix. It worked, although there was a small typo in the command `pip -> git`, i.e.: ` pip install --upgrade git+https://github.com/huggingface/transformers.git`
2. Thanks for the suggestion, I'll look into it.
Closing for now as the original issue has been fixed |
transformers | 21,519 | closed | Update OPT conversion script to work for OPT-IML | null | 02-08-2023 16:47:54 | 02-08-2023 16:47:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,518 | closed | Add: document question answering task guide | This PR adds a task guide on document question answering.
The example is largely based on @NielsRogge 's [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb) with some inspuration from [this blog post](https://www.philschmid.de/fine-tuning-layoutlm#3-fine-tune-and-evaluate-layoutlm)
A Colab notebook to test the code examples is [here](https://colab.research.google.com/drive/1vIc-YoGbWdNOBGSb6mpkw9myJ4nr2vcA?usp=sharing). | 02-08-2023 16:08:42 | 02-08-2023 16:08:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@MKhalusova is it okay if I reviewed the latest changes on Monday? I am currently on travel. <|||||>> @MKhalusova is it okay if I reviewed the latest changes on Monday? I am currently on travel.
Absolutely! |
transformers | 21,517 | closed | no more dummies for speech processors | # What does this PR do?
The processor classes don't need to check for the existence of sentencepiece and speech, as the tokenizer and feature extractors already do so.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-08-2023 15:52:37 | 02-08-2023 15:52:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failure is flaky, so this is safe to merge. |
transformers | 21,516 | closed | π₯Rework pipeline testing by removing `PipelineTestCaseMeta` π | # What does this PR do?
β οΈ Before merge: We need to add pipeline test classes to `PipelineTesterMixin` and make sure all tests pass. β οΈ
- The pipeline testing now is managed by a new class `PipelineTesterMixin` (similar to `ModelTesterMixin`)
- no more `metaclass=PipelineTestCaseMeta`
- In each model testing file (only PyTorch/TensorFlow), we change
```
class ViTModelTest(ModelTesterMixin, unittest.TestCase)
```
to
```
class ViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
```
- There is no need of `TFPipelineTesterMixin`, the framework is automatically detected
- The directory and files `tests/pipelines/test_pipelines_xxx` and class like `XXXPipelineTests` should exist there:
- These now contain **only** _non-model-specific_ pipeline tests like `test_small_model_pt`, `test_small_model_tf` etc.
- Their class attributes `model_mapping` and `tf_model_mapping`, as well as methods `get_test_pipeline` and `run_pipeline_test` are used in `PipelineTesterMixin`:
- I believe these are better left in current files
- (if really necessary, a clean-up could be done in a separate PR. I want to keep the scope of current PR minimal)
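For intuition only, the per-task dispatch performed by the mixin looks roughly like this (a simplified sketch with illustrative names, not the actual implementation; the real mixin is combined with `unittest.TestCase`, which provides `skipTest`):
```python
class PipelineTesterMixinSketch:
    # Illustrative only: in the PR, `pipeline_model_mapping` is derived per model test class.
    pipeline_model_mapping = {}

    def run_pipeline_test(self, task):
        framework = "tf" if type(self).__name__.startswith("TF") else "pt"
        if task not in self.pipeline_model_mapping:
            self.skipTest(f"No model architecture under framework {framework} is found for the task `{task}`.")
        # ... otherwise load the tiny checkpoint from the Hub and run the shared checks for `task` ...
```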
## (Preview of the) Results
Running the following command
```python
python3 -m pytest --pdb -v tests/models/bert tests/models/vit -k "test_pipeline_text_classification or test_pipeline_image_classification"
```
will give
```bash
tests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_image_classification SKIPPED (Test is skipped: No model architecture under framework pt with the configuration class `BertConfig` is found for the task `image-classification`.) [ 12%]
tests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text_classification PASSED [ 25%]
tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_pipeline_image_classification SKIPPED (Test is skipped: No model architecture under framework tf with the configuration class `BertConfig` is found for the task `image-classification`.) [ 37%]
tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_pipeline_text_classification PASSED [ 50%]
tests/models/vit/test_modeling_tf_vit.py::TFViTModelTest::test_pipeline_image_classification PASSED [ 62%]
tests/models/vit/test_modeling_tf_vit.py::TFViTModelTest::test_pipeline_text_classification SKIPPED (Test is skipped: No model architecture under framework tf with the configuration class `ViTConfig` is found for the task `text-classification`.) [ 75%]
tests/models/vit/test_modeling_vit.py::ViTModelTest::test_pipeline_image_classification PASSED [ 87%]
tests/models/vit/test_modeling_vit.py::ViTModelTest::test_pipeline_text_classification SKIPPED (Test is skipped: No model architecture under framework pt with the configuration class `ViTConfig` is found for the task `text-classification`.)
``` | 02-08-2023 15:30:25 | 02-08-2023 15:30:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I requested @LysandreJik for review as this is one topic we discussed several times.<|||||>Note that tests are then run under "regular" tests of the model, so the pipeline tests as a separate job seem irrelevant after this PR (not necessarily a problem, just stating that we will need to update the tests creation).<|||||>> Note that tests are then run under "regular" tests of the model, so the pipeline tests as a separate job seem irrelevant after this PR (not necessarily a problem, just stating that we will need to update the tests creation).
@sgugger No, the current pipeline tests (the remaining test methods) and **their CI jobs should be kept there**, for the other test methods in the existing `XXXPipelineTests` classes, like this
https://github.com/huggingface/transformers/blob/c26b1503f22feb20d37588d4243bd444d297bc13/tests/pipelines/test_pipelines_image_classification.py#L115
The only pipeline tests being moved to modeling tests is
https://github.com/huggingface/transformers/blob/c26b1503f22feb20d37588d4243bd444d297bc13/tests/pipelines/test_pipelines_image_classification.py#L60
(which are executed with the dynamically generated tests with the meta class)
Tests like `test_small_model_pt` don't use the meta class, and they are bound to a specific model/repo, which is not suitable for the modeling testing workflow.<|||||>Ok, I'm briefly going to share what we discussed internally:
- I am not too big of a fan of skips being permanently OK, as this PR would make tests like `BertModel::test_pipeline_audio_classification`. If Bert isn't supposed to do audio-classification, we shouldn't have a test for it.
- I don't like moving tests from `tests/pipelines/` to `tests/models/`; it makes the testing harder to reason about imo. For instance, I broke tests on purpose and still had a green test: https://app.circleci.com/jobs/github/huggingface/transformers/694859. Either the runner should run those tests, or the tests should be brought back to `tests/pipelines`.
Another argument is that the `PipelineMixin` actually needs to read `tests/pipelines`. This is putting some coupling between `test/models` and `test/pipelines`. The dependency should be reversed IMO (the pipelines depend on the models, not the other way around).
- There's a ton of skipping happening in the tests, which we could maybe take the opportunity to remove to get hard errors: `AutoTokenizer.from_pretrained("hf-internal-testing/")` not working should be an automatic error (if the tokenizer is supposed to exist, which we should be able to know from the task under test). We can do things progressively, since the migration exhibited actual missing tests (which is really good!). But we should focus on removing the skips as much as possible so we can build some trust that the testing is correct. (It's really easy to miss tests that are skipped too much, whereas it's much harder to miss hard failures.)
Overall I think the core of the test changes is about moving away from custom hard-to-reason-about logic in how we create the tiny models, to make them all easily accessible on the hub. Not as much about `TestMeta` vs `TestMixin` <|||||>@Narsil
Could you tell us (me) which test(s) (the test names) you tried to break intentionally but still got a green CI?
> I broke tests on purpose and still had a green test: https://app.circleci.com/jobs/github/huggingface/transformers/694859. Either the runner should run those tests, or the tests should be brought back to tests/pipelines.<|||||>> I don't like moving tests from tests/pipelines/ to tests/models/ it makes the testing harder to reason about imo.
I don't like having a non-canonical model be tested in the pipeline. It also makes the testing harder to reason about. It's completely fine if the pipeline tests GPT-2, but having a change in some random generative model impact the test_pipeline_text_generation, and then put some custom logic there to skip it based on the model name is not a good design either.<|||||>> having a change in some random generative model impact the test_pipeline_text_generation,
If some behavior is modified in such a way that it breaks existing code, then IMO it is justified for it to break in the pipelines. Pipelines are users of models; if there is a breaking change in a model and it's seen by the pipeline tests, imho it means the tests are doing their job correctly.
> and then put some custom logic there to skip it based on the model name is not a good design either.
That's the core issue. I'm not advocating for this either.<|||||>I'm curious about your solution to remove code like [this](https://github.com/huggingface/transformers/blob/b31cee67277d25fdb3dbd4619a4aa59d71a29b56/tests/pipelines/test_pipelines_common.py#L96) without moving the tester to a model common tester then. Also the TestMeta has been criticized as too hard to use by almost all model contributors. The core of the problem is not just the tiny model creation.<|||||>> I'm curious about your solution to remove code like [this](https://github.com/huggingface/transformers/blob/b31cee67277d25fdb3dbd4619a4aa59d71a29b56/tests/pipelines/test_pipelines_common.py#L96) without moving the tester to a model common tester then.
This is exactly my point. If RocBert cannot work with FillMask, it most likely means that `RocBertForMaskedLM` is not respecting an invariant of `ForMaskedLM` the pipeline is relying on.
Actually it seems to be working ok now, so maybe this condition is not needed for `fill-mask`. https://huggingface.co/weiweishi/roc-bert-base-zh
What the pipeline does ?
```
model_inputs = tokenizer(somestring_with_mask)
model_outputs = model(**model_inputs)
outputs = somefunction(model_outputs["logits"], model_outputs["input_ids"])
```
> TestMeta has been criticized as too hard to use by almost all model contributors.
Logic was indeed pretty hard to follow, but I think the issue is the complexity of the logic, not the fact that it's meta vs something else.
With the addition of tiny tests, most tests should become equivalent to
```python
pipe = pipeline(model="hf-internal-testing/my-tiny-modelForXX")
out = pipe("some string with [MASK]")
self.assertEqual(out, [{"class": "bear"}])
```
at least in principle.
If they could be spelled out in test code I would find that awesome (since explicit tests are always easier to understand)
For completeness' sake, `feature-extraction` is a good example where the opposite is True. `XXXModel` do not have any invariant to uphold, because the inputs are a bunch of tensors with no specific shape or name (since we moved away from pure NLP), so it's hard for the pipeline to build upon it (and that's why this pipeline is broken for all non-nlp models).
> TestMeta has been criticized as too hard to use by almost all model contributors.
Part of the reason of its existence was to ensure forward compatibility (so if you implement ForXX, you have a working pipeline).
We could by all means remove this. But @LysandreJik added it because we needed some kind of way of detecting non functional pipelines early rather than later (because fixing the model once it's been published almost always requires a breaking change).
Maybe if we had lower level tests on the `ModelForXX` it could become more explicit what's missing ? Like you need to be able to do:
```python
input_ids = torch.zeros(3, 10).long()
outputs = model(input_ids=input_ids)
assertExists(outputs.logits)
```
of some kind ?
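To make that concrete, a minimal sketch of such an invariant check could be (illustrative only, this helper does not exist in the test suite):
```python
import torch


def check_masked_lm_invariant(model, vocab_size):
    # Invariant sketch: a `ForMaskedLM` model fed plain `input_ids` should expose
    # logits of shape (batch_size, sequence_length, vocab_size).
    input_ids = torch.zeros(3, 10, dtype=torch.long)
    outputs = model(input_ids=input_ids)
    assert hasattr(outputs, "logits")
    assert outputs.logits.shape == (3, 10, vocab_size)
```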
And just as a last note: in the `ModelTester` I have seen countless times (mostly in the tokenizer part of the tests, since those are the ones I look at the most) contributors override the required test and just skip it because it was too hard to implement.
If you're allowed to do that, it's definitely more convenient, but you're also almost certainly introducing bugs somewhere else.
https://github.com/huggingface/transformers/blob/main/tests/models/gpt2/test_tokenization_gpt2.py#L128
https://github.com/huggingface/transformers/blob/main/tests/models/gpt2/test_tokenization_gpt2.py#L250
https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_tokenization_clip.py#L183
https://github.com/huggingface/transformers/blob/main/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L264
https://github.com/huggingface/transformers/blob/main/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L817
https://github.com/huggingface/transformers/blob/main/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L1715
And all of these bugs are silent, since the test exists and are green, yet do not test anything actually.
Final note: My stance is about what we're going to lose and why I'm against it. Now, I am not a core maintainer, and contributor happiness is not something I can easily weigh when making a good tradeoff in this particular decision (nor core maintainers' happiness). This is definitely an important factor in the decision to be made.
Most of the failing tests I have seen where I have reviewed the code were actual broken invariants, rarely a pipeline issue (and when it was, it was definitely for the better in the pipeline, by forcing the use of cleaner code), so I view these tests as providing value in preventing bugs. Maybe the proposed change is still the way to go, maybe there are other alternatives.
I also view some of these tests as painful because I sometimes have a hard time fixing them. But 99% of the time, I view these tests as helping rather than hurting in the long run.<|||||>@Narsil Just 2 points to your comments
> This is exactly my point. If RocBert cannot work with FillMask, it most likely means that RocBertForMaskedLM is not respecting an invariant of ForMaskedLM the pipeline is relying on.
Sometimes the issue is not really at the pipeline level. It could be a simple **tiny** model's tokenizer or processor issue, like the vocabulary size not being correctly set or something similar (as you know, the creation of tiny models has a lot of potential pitfalls). And not all these kinds of things are easy to figure out. Moreover, we still have the slow tokenizers, which were not tested before (they are tested now, but a few failures are currently skipped - these will be checked in the next steps).
> And all of these bugs are silent, since the test exists and are green, yet do not test anything actually.
@SaulLu and I had a short conversation about using `self.skipTest` instead of silently returning. This is something we can improve.<|||||>Thank you for the feedback, @LysandreJik. Here are my comments, questions, and final thoughts after reading it.
## Updates: 2023/02/13
I have moved forward with some of @LysandreJik's suggestions, so the comments after this **Update** section are no longer relevant (unless you still want to read them).
What has been implemented:
- Adding a class attribute `pipeline_model_mapping`, which should be specified with an explicit task-to-model mapping for pipeline testing.
https://github.com/huggingface/transformers/blob/636e90c721f682f9df637915e6b3ee0ca278ff34/tests/models/bert/test_modeling_bert.py#L448-L461
- Rely on a json file (newly added to `transformers`) containing the information about tiny models on the Hub.
https://github.com/huggingface/transformers/blob/636e90c721f682f9df637915e6b3ee0ca278ff34/tests/test_pipeline_common.py#L52-L54
- So we don't need to access this summary file on the Hub
- The advantage of this file is we know explicitly which (tiny) model/tokenizer/processor are available on the Hub, so we don't need to `try ...from_pretrained(...) ... except ... skipTest`.
- This list needs to be updated regularly (mostly for newly added model code)
- Add a new _magic_ method to disable irrelevant tests
https://github.com/huggingface/transformers/blob/636e90c721f682f9df637915e6b3ee0ca278ff34/tests/test_pipeline_common.py#L74-L82
- This is done by setting some pipeline test methods to `None` at model test class declaration time
https://github.com/huggingface/transformers/blob/636e90c721f682f9df637915e6b3ee0ca278ff34/tests/models/bert/test_modeling_bert.py#L461
- This is to avoid some tests to be even collected: for example `test_pipeline_audio_classification` for `bert` is really irrelevant.
- So such tests won't be run and then being skipped at runtime.
- We also have a cleaner test reports.
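For readers who don't want to open the links: the new class attribute is roughly of this shape (a shortened illustration, see the linked `bert` test for the full mapping):
```python
from transformers import BertForMaskedLM, BertForSequenceClassification, BertModel, is_torch_available

pipeline_model_mapping = (
    {
        "feature-extraction": BertModel,
        "fill-mask": BertForMaskedLM,
        "text-classification": BertForSequenceClassification,
    }
    if is_torch_available()
    else {}
)
```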
## Comments
> a very explicit way to show which models should be supported by pipelines and which should not.
- These mappings are (eventually) derived from our exposed `XXX_MAPPING`. If we have issues at the pipeline testing level (regarding architectures for specific tasks), it also implies that:
  - our mappings have some issues
- (these are publicly exposed objects, but I am not sure if these are widely used)
- Auto might be affected
- pipeline classes (under `src/transformers/pipelines`) are potentially affected
  - the tiny model creation is (potentially) affected (it uses a lot of mappings)
Therefore, from this viewpoint, **we should instead put the focus on the correctness of these mappings** (with any extensive suite of tests we need/want), and rely on these mappings (which is already the case in many places, even if not all of them are in the testing suite), instead of trying to develop extra mappings (for example, the suggested dict attribute) under each model test class.
**If we do identify any missing case, a fix/update of the `XXX_MAPPING` mappings has much more benefit for `transformers`**, while a fix on a particular dict attribute of a model test class just benefits that model test case.
Having new mappings also increases the chance of having inconsistent mappings (for example, the one given by `XXX_MAPPING` (when restricted to a configuration class) and the one given by `pipelines_to_test`).
> remove the magic identification of tokenizers/processors linked to an external file on the Hub
- That repo was necessary when we decided to use tiny models on the Hub while we were still using the metaclass. As you can see at this line
https://github.com/huggingface/transformers/blob/ab81397c8cbe0ba74c75c7dfa4a8787ce6916ce7/tests/test_pipeline_common.py#L34
during the dynamic test generation, all 8 processes (from `pytest -n 8`) will try to collect the whole set of tests. Without this file, all 8 processes try to access Hub repos for all the involved model architectures, which leads to connection errors; then not all processes collect the same number of tests, which leads to a really bad error message and state (during test collection), even before the test suite can run.
Now, without the metaclass, the tests are collected without the need to access the Hub. The access to the Hub only happens while the tests are actually run (distributed across the 8 processes). **We no longer have the issue, and we can remove the usage of this external file on the Hub.** So I will remove it and make the necessary changes.
> and gain readability, as well as ease of contribution.
- When contributors work on (new) pipeline testing, they will mainly deal with this method
https://github.com/huggingface/transformers/blob/ab81397c8cbe0ba74c75c7dfa4a8787ce6916ce7/tests/test_pipeline_common.py#L154
I am a bit unsure whether they will need to deal with or look at the `run_task_tests` logic, but you core maintainers and @Narsil have more experience regarding the user experience of pipeline testing (so far with the metaclass).
## Questions
> I would remove the magic creation of models
- Could you point me to where **the magic creation of models** you mention happens? The model is obtained by
https://github.com/huggingface/transformers/blob/ab81397c8cbe0ba74c75c7dfa4a8787ce6916ce7/tests/test_pipeline_common.py#L188
from the tiny models that are created offline and uploaded to the Hub.
## Final thought
I have provided my viewpoints, which are a bit different, but I do see that the arguments in the suggestions are valid from other angles.
I am open to the suggestion of being more explicit regarding which pipelines should be tested for a specific model class, if you still think that has more benefits after reading this long comment :-).
<|||||>## Extra words for pipeline testing - regarding tiny models on the Hub
I am not sure if I communicated this clearly enough, but we need some improvements in how tiny models on the Hub are used for pipeline testing:
- We will need to run the creation more frequently, either on a regular basis (automatically) or manually when a new model is added to the library
- we won't be able to always get the checkpoints, or the tokenizers/processors
- even if we can create all the files, it's still possible we will encounter some issues when using them in pipeline testing
So at this moment, when a contributor works on a model, say `NewModel`, and adds something like the following in `NewModelTest`
```python
pipelines_to_test = {
'sequence-classification': NewModelForSequenceClassification,
'token-classification': NewModelForTokenClassification,
}
```
it's still possible the tiny model checkpoint(s) are not on the Hub yet. Forcing users to run the tiny model creation would be too hard, and even if they run it, they won't be able to upload to `https://huggingface.co/hf-internal-testing`.
So it seems to me we can only run and upload those tiny checkpoints afterward.
<|||||>> either in a regular basis (automatically)
This seems the best approach imo, since it would mean our tools allow for it to happen. When it works, usually it's just a matter of choosing a small config (n_layers smaller + hidden_smaller + vocab_size smaller for NLP for instance).
But you're right that it's not always easily possible (I tend to think it's a modeling issue)<|||||>Hi @LysandreJik
I have added all tasks to `PipelineTesterMixin`, updated all model test classes to inherit from `PipelineTesterMixin`, and defined their `pipeline_model_mapping`.
- In each modeling test file, I add `PipelineTesterMixin` to only one test class
- to avoid the tests being duplicated in classes like `BartModelTest` and `BartStandaloneDecoderModelTest`
- For transparency, I use an automatic way to obtain and add such mappings from current `...PipelineTests` classes to the modeling test classes
```python
class TextGenerationPipelineTests(unittest.TestCase):
model_mapping = MODEL_FOR_CAUSAL_LM_MAPPING
tf_model_mapping = TF_MODEL_FOR_CAUSAL_LM_MAPPING
```
If everything is fine for you, I am going to merge π₯
(`test_tf` is successfully completed - circleci has some issue to report back here)<|||||>Hello @sgugger . Last week, I was about to merge this PR, but I felt something might be wrong and found out that
**tests_fetcher.py won't work well along with this PR**
In short, if I change a file named `src/transformers/pipelines/text_classification.py` in a commit, and run
```bash
python utils/tests_fetcher.py --diff_with_last_commit
```
it won't trigger those modeling test files that contain the text classification (added via `pipeline_model_mapping` in this PR).
See the bash output shown at the end.
I haven't yet looked at the code in `tests_fetcher.py`, **but I am wondering**:
- if you have ideas of how we should adapt `tests_fetcher.py` to this new situation?
- and if you think it might be easier for you to take over this PR and make the necessary change regarding this final part of test fetching?
- (I am not asking you to do it. I can work on it with your help/idea - it might take longer for me however)
### bash output
```bash
### MODIFIED FILES ###
- src/transformers/pipelines/text_classification.py
### IMPACTED FILES ###
- src/transformers/__init__.py
- src/transformers/commands/run.py
- src/transformers/commands/serving.py
- src/transformers/commands/train.py
- src/transformers/commands/transformers_cli.py
- src/transformers/convert_graph_to_onnx.py
- src/transformers/pipelines/__init__.py
- src/transformers/pipelines/text_classification.py
### TEST TO RUN ###
- tests/onnx/test_onnx.py
- tests/pipelines/test_pipelines_text_classification.py
- tests/utils/test_cli.py
```<|||||>For now, I think the easiest way is to add a special return statement [here](https://github.com/huggingface/transformers/blob/2d506ea4c4980a4cab43c2940d9836ddfd629524/utils/tests_fetcher.py#L432) where you put something like `tests/models/modeling_*.py` on top of the pipeline test to force-trigger all common model tests in those instances. I'll rework it when I dive into the test_fetcher v2 (normally later this week!)<|||||>@sgugger Thank you for the suggestion (as well as the good willing to work on test fetcher v2).
**I had to modify a bit more than what you suggested**. **The wild card** `_*` such as `pipelines/test_pipelines_*.py` (already on `main`) and `models/.../test_modeling_*.py` **won't work**, as this statement
https://github.com/huggingface/transformers/blob/6ca844582c2ec12309e054484a419d3d467d0f8c/utils/tests_fetcher.py#L581
will remove them. I have run the command with a change to `pipelines/__init__.py` against `main`: **we do get all pipeline test files being collected, but this is not due to the wild card, but the dependency between the modules**
I therefore changed them to explicit test files using `glob`, to avoid further usage of wild cards (and the risk of missing test files to run - even though the above example is fine).
[Here](https://github.com/huggingface/transformers/pull/21516/commits/23c875f919176240e4b03b010a89f4d14562a56d) is the change in the last commit regarding test fetcher.
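(For illustration, the explicit expansion is along these lines - a sketch, not the exact code in `tests_fetcher.py`:)
```python
from glob import glob

# Enumerate the model test files explicitly instead of keeping a `test_modeling_*.py` wild card,
# so the later filtering step cannot drop the entry.
model_test_files = sorted(glob("tests/models/**/test_modeling_*.py", recursive=True))
```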
<|||||>Oh good catch! Change LGTM! |
transformers | 21,515 | open | add MaxViT [TF] | ### Model description
MaxViT: Multi-Axis Vision Transformer is one of the nice papers of late 2022, and it was also published in [ECCV 2022](https://arxiv.org/pdf/2204.01697v4.pdf) by Google AI.
* This paper introduces a new attention module called "multi-axis attention" which consists of blocked local and sparse global attention for efficient and scalable spatial interactions on arbitrary input resolutions.
* It demonstrates superior performance on various vision tasks including image classification, object detection, and so on.
I think it would be nice to have it on Hugging Face, and I would be happy to contribute it.
cc: @alara @NielsRogge
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and Weights:
* TF1: https://github.com/google-research/maxvit
* TF2: https://github.com/leondgarse/keras_cv_attention_models | 02-08-2023 13:19:30 | 02-08-2023 13:19:30 | are you working on this? @awsaf49 <|||||>Yes |
transformers | 21,514 | closed | [Pipelines] Problems with an image-to-text fine-tuned model | I have fine-tuned the [microsoft/git-base](https://huggingface.co/microsoft/git-base) model on image captioning ([Colab Notebook](https://colab.research.google.com/drive/1sLiSY2ixv52yqlw9EUW6BPwCq9XPLc9R?usp=sharing)).
I am trying to use the model with π€ Pipelines:
```py
from transformers import pipeline
captioner = pipeline(model="sayakpaul/git-base-pokemon", tokenizer=processor.tokenizer, feature_extractor=processor)
captioner("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png")
```
It only spits:
```
[{'generated_text': 'https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png'}]
```
If you check the Colab Notebook, you will notice that it works okay when the inference is performed explicitly i.e., without pipelines. Is it because the architecture [tagged](https://huggingface.co/sayakpaul/git-base-pokemon/blob/main/config.json#L4) with the model is `GitForCausalLM`?
Also note that on the [model repo](https://huggingface.co/sayakpaul/git-base-pokemon), there is a tag "Image To Text" WHICH I HAVE MANUALLY ADDED to see if that has any effect. By default, the model gets tagged as a text generation model.
@Narsil is it out of scope to support this model in an image to text generation pipeline? | 02-08-2023 10:22:22 | 02-08-2023 10:22:22 | I'm not well versed with `Git` as a model.
Pipelines are usually agnostic to actual models. As long as model X is `AutoModelForVision2Seq` it should work out of the box.
If the architecture is different, we can discuss what's done and how to implement.
The rule of thumb:
- Most models should get supported out of the box through sheer consistency with other models (pipeline being agnostic).
- The more popular a model, the more likely we add support (Who would have guessed)
- A pipeline is defined by I/O without parameters; as long as the model can provide them, we can support it.
- Extra parameters are to be added based on the task at hand, not on what models do. (For instance `timestamps` in audio recognition is useful to do automated subtitling; it doesn't matter how models actually implement it)<|||||>Fair enough!
I guess the main reason the pipeline is acting weird could be that the model is loaded into `AutoModelForCasualLM`. Looking at the [source code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/modeling_git.py), it's not implemented for `AutoModelForVision2Seq` I think. <|||||>Yes that's exactly it. In the absence of tags the hub will check the config and assign a pipeline based on architecture format `ForXX`, just like the pipeline does.<|||||>Do you have a sample script to make it work for captionning ?<|||||>> Do you have a sample script to make it work for captionning ?
If you check the Colab Notebook I linked above, you will see it at the end (the inference section). <|||||>The pipeline currently only supports classes that are instances of `VisionEncoderDecoderModel`.
`GitForCausalLM` isn't supported, as that would require extending the image-to-text pipeline.<|||||>Seems to me that the colab does pretty much what the pipeline does:
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/image_to_text.py#L114
Any reason not to implement `ForVision2Seq` ? <|||||>Yeah that is what my understanding is as well. Maybe @NielsRogge can provide more on
> Any reason not to implement ForVision2Seq ?<|||||>> Any reason not to implement ForVision2Seq ?
The image-to-text pipeline currently only supports the `MODEL_FOR_VISION_2_SEQ_MAPPING` as seen [here](https://github.com/huggingface/transformers/blob/5b67ab9924cf7587b39b59eb0bf0abd3d099e8b9/src/transformers/pipelines/image_to_text.py#L23) (hence, the `AutoModelForVision2Seq` class), however GIT is a special model that is part of the `MODEL_FOR_CAUSAL_LM_MAPPING`. The reason it's only defined in this mapping is because `GitForCausalLM` can technically also be used as a text-only generative model like GPT-2.
But in practice, `GitForCausalLM` is only used for image (and video)-to-text use cases. It is a custom model but has the same API as the `AutoModelForVision2Seq` class (it takes `pixel_values` as input to the `generate` method). So it wouldn't make sense to add the MODEL_FOR_CAUSAL_LM_MAPPING to this pipeline, rather it probably makes sense to have GitForCausalLM as a standalone class in this pipeline, but I'm not sure this is feasible/allowed.<|||||>> It is a custom model but has the same API as the AutoModelForVision2Seq class
So make it `ForVision2Seq`, no ? As long as it upholds the invariant (signature + return type) then by definition it's ok to add it...<|||||>I've added BLIP and BLIP-2 to the ForVision2Seq mapping, making them usable with the image-to-text pipeline: https://github.com/huggingface/transformers/pull/21802.
However, GIT can't be added out-of-the-box, due to `GitForCausalLM` not having `pixel_values` as the first argument in its forward call, but rather `input_ids` (unlike models like `VisionEncoderDecoderModel`, `BlipForConditionalGeneration`, etc). <|||||>What are those `input_ids` corresponding to ?
If they are `LongTensor` like regular input ids, where did the image go ? Does it need a combination or not ?<|||||>GIT is a bit special in the sense that it can be viewed as a GPT-2 model, taking `input_ids` as input (the text prompt), and one can optionally provide `pixel_values` to condition the model on an image.
* If you only provide `pixel_values`, then it will caption the image.
* If you only provide `input_ids`, then it will behave like GPT-2.<|||||>Shouldn't we implement `GitForVision2Seq` in the first case ? It's a classic `encoder-decoder` case, correct ?<|||||>No `GitForCausalLM` should ideally be added directly to the Vision2Seq mapping. It's a decoder-only Transformer. The only reason we can't add it directly is that it doesn't take `pixel_values` as first keyword argument, which is what the image-to-text pipeline expects.<|||||>IT seems `input_ids` is **not** necessary: https://colab.research.google.com/drive/1sLiSY2ixv52yqlw9EUW6BPwCq9XPLc9R#scrollTo=b3XKBvEcU_PR&line=2&uniqifier=1
No ?
If it's a regular decoder for the text, then the `decoder_input_ids` should automatically be set by `generate` making `GitForVision2Seq` possible. No ?<|||||>Yes correct, `input_ids` is an optional argument, which can serve as text prompt. If you don't provide one, the model will start generating text from the BOS token with id = 1.
But the problem is [here](https://github.com/huggingface/transformers/blob/0c7f93f5f118b080bc4eb7ae91ea35432db80708/src/transformers/pipelines/image_to_text.py#L114). The inputs, prepared using the image processor, will be `pixel_values`, but GitForCausalLM has `input_ids` as first keyword argument. <|||||>Oh this code is already not looking pretty, there could be a way to make it better.
But we could always add
```python
class GitForVision2Seq(GitForCausalLM):
    def forward(self, pixel_values, **kwargs):
        return super().forward(pixel_values=pixel_values, **kwargs)
```
for instance? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have removed the Good first issue label as there is no clear plan explaining to a beginner what to do to solve this issue. Please add this if you want to re-put that label.<|||||>If @Narsil agrees, we can add the `GitForVision2Seq` class to make GIT be supported by the pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>To fix this issue, one can:
* add `GitForVision2Seq` as explained above to transformers/models/git/modeling_git.py
* add this class to the MODEL_FOR_VISION_2_SEQ_MAPPING [here](https://github.com/huggingface/transformers/blob/eb5b5ce64153e6fb12319cc8dcd8e1d7a81fd273/src/transformers/models/auto/modeling_auto.py#L526)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This was fixed by #23362 |
transformers | 21,513 | closed | Fixing backward compatibility `image_processor` in pipeline. | # What does this PR do?
Fixes #20851
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 02-08-2023 10:21:05 | 02-08-2023 10:21:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,512 | closed | [Tasks] Adds image captioning | This PR adds a guide on image captioning introducing a brand new section for tasks: Multimodal :)
The idea was originated by @NielsRogge and the PR builds on top of his great work [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb).
An interactive Colab Notebook that was used to develop the task guide is available [here](https://colab.research.google.com/drive/1sLiSY2ixv52yqlw9EUW6BPwCq9XPLc9R?usp=sharing).
Not bad quality, eh?

Generated caption:
> a drawing of a pink and blue pokemon | 02-08-2023 10:06:26 | 02-08-2023 10:06:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,511 | closed | [Doc] Minor URL fixes in PyTorch Text Classification Readme | Hi,
this PR introduces some minor url fixes in the PyTorch text classification readme. | 02-08-2023 09:31:08 | 02-08-2023 09:31:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,510 | closed | BLIP computes loss no matter what | ### System Info
transformers-4.27.0.dev0
### Who can help?
@younesbelkada
### Reproduction
Currently, `BlipForConditionalGeneration` will always return a loss, even when the user hasn't provided any labels:
```
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForConditionalGeneration
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.keys())
```
which prints:
```
odict_keys(['loss', 'decoder_logits', 'image_embeds', 'last_hidden_state'])
```
And what's also weird is that this loss is NaN:
```
print(outputs.loss)
```
returns
```
tensor(nan, grad_fn=<AddBackward0>)
```
### Expected behavior
Returning a loss in case the user doesn't provide `labels` isn't really compliant with the design of all models in the Transformers library, which only return a loss in case the user provides labels.
Hence I'm advocating to remove [the following lines of code](https://github.com/huggingface/transformers/blob/5b67ab9924cf7587b39b59eb0bf0abd3d099e8b9/src/transformers/models/blip/modeling_blip.py#L1008-L1009). Similar to models like `T5ForConditionalGeneration`, `BartForConditionalGeneration`, it is the user's responsibility to create the appropriate labels.
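For clarity, the convention I'm advocating for looks roughly like this (a sketch of the intended behaviour, not BLIP's exact code):
```python
from typing import Optional

import torch
from torch import nn


def lm_loss(logits: torch.Tensor, labels: Optional[torch.Tensor]) -> Optional[torch.Tensor]:
    # Sketch: mirror the T5/BART convention - no labels, no loss.
    if labels is None:
        return None
    shifted_logits = logits[:, :-1, :].contiguous()
    shifted_labels = labels[:, 1:].contiguous()
    loss_fct = nn.CrossEntropyLoss()
    return loss_fct(shifted_logits.view(-1, shifted_logits.size(-1)), shifted_labels.view(-1))
```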
Moreover, I'm not sure why [the following lines of code](https://github.com/huggingface/transformers/blob/5b67ab9924cf7587b39b59eb0bf0abd3d099e8b9/src/transformers/models/blip/modeling_blip.py#L1005-L1006) are included, as they don't seem necessary and also lead to unexpected behaviour. Again here, models like T5 and BART don't set the `input_ids` automatically to the decoder start token ID in case the user doesn't provide `input_ids`. It's the user's responsibility to provide `input_ids`. Instead, those lines should also simply be removed IMO. | 02-08-2023 09:03:20 | 02-08-2023 09:03:20 | i would like to work on this can you assign me |
transformers | 21,509 | closed | Errors when using FSDP's cpu-offload to train the OPT model | ### System Info
### Environment
ubuntu 20.0.4
python 3.9.10
torch=1.12.0+cu113
transformers==4.26.0
### Who can help?
@sgugger @ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### command
The command uses [run_clm.py](https://github.com/huggingface/transformers/blob/v4.26.0/examples/pytorch/language-modeling/run_clm.py):
```bash
torchrun --nproc_per_node=4 run_clm.py \
  --model_name_or_path "facebook/opt-1.3b" \
  --per_device_train_batch_size 1 \
  --output_dir "/workspace/workfile/Downloads/output" \
  --overwrite_output_dir --fp16 --do_train \
  --max_train_samples 500 --num_train_epochs 1 \
  --dataset_name "wikitext" --dataset_config "wikitext-2-v1" \
  --fsdp "full_shard offload auto_wrap" \
  --fsdp_transformer_layer_cls_to_wrap "OPTDecoderLayer" \
  --optim "adamw_torch" --block_size 1024
```
### Expected behavior
Traceback (most recent call last):
File "/workspace/workfile/Projects/fsdp_test/run_clm.py", line 608, in <module>
main()
File "/workspace/workfile/Projects/fsdp_test/run_clm.py", line 558, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 1543, in train
return inner_training_loop(
File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 1791, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 2539, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 2571, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2280, in forward
outputs = self.module(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
return self.module(*inputs, **kwinputs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 932, in forward
outputs = self.model.decoder(
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 697, in forward
layer_outputs = decoder_layer(
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2244, in forward
self._lazy_init()
File "/usr/local/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 1454, in _lazy_init
self._init_param_attributes(p)
File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 1527, in _init_param_attributes
assert p.device == torch.device("cpu"), (
AssertionError: Expected param to be on CPU when cpu_offloading is enabled. If CPU offloading is enabled correctly, you may be accidentally moving the model to CUDA after FSDP initialization. | 02-08-2023 08:35:08 | 02-08-2023 08:35:08 | cc @pacman100 <|||||>Hello @young-chao,
Could you please update the torch version to 1.13 and check if that fixes the issue?
<|||||>Also, cpu offload seems to hang indefinitely with mixed_precision, will raise issue with pytorch repo on this. FSDP CPU Offload without mixed precision is working on 1.13 and later versions <|||||>> Also, cpu offload seems to hang indefinitely with mixed_precision, will raise issue with pytorch repo on this. FSDP CPU Offload without mixed precision is working on 1.13 and later versions
Thank you for your reply! I updated the torch version to 1.13 and this error disappeared.<|||||>Hello @young-chao, happy that the issue is resolved. Please feel free to close the issue. |
transformers | 21,508 | closed | [Discuss] Handling images when the number of dimensions is 2 | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- TensorFlow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
Ccing @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue thread is rather for discussing how to best handle images when the number of dimensions is two. When this is the case any image processor (subclass of `BaseImageProcessor`) will error out, complaining that this number is invalid (which it should).
I think there are two simple ways to handle this:
* We can either discard these images.
* Or we can handle it like so:
```python
from PIL import Image
import numpy as np
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np.expand_dims(np_image, -1), 3)
return Image.fromarray(tiled_image)
else:
return Image.fromarray(np_image)
```
I was working through the [Semantic Segmentation task guide](https://huggingface.co/docs/transformers/tasks/semantic_segmentation). Depending on the shuffling, there would be cases when one would hit this situation. The task guide uses the [Scene Parse 150 dataset](https://huggingface.co/datasets/scene_parse_150).
In this case, the grayscale (image dimension is 2) image is not worth throwing out IMO:

Here's the corresponding segmentation map:

Here are my full data pipeline utilities in case someone runs into the same issue:
```python
from PIL import Image
import numpy as np
from datasets import load_dataset
from torchvision.transforms import ColorJitter
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np.expand_dims(np_image, -1), 3)
return Image.fromarray(tiled_image)
else:
return Image.fromarray(np_image)
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
def train_transforms(example_batch):
images = [jitter(handle_grayscale_image(x)) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
def val_transforms(example_batch):
images = [handle_grayscale_image(x) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
ds = load_dataset("scene_parse_150", split="train[:150]")
ds = ds.train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
```
### Expected behavior
Maybe suggest some potential alternatives in the error message thrown by the image processor?
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,507 | closed | Bump cryptography from 36.0.2 to 39.0.1 in /examples/research_projects/decision_transformer | Bumps [cryptography](https://github.com/pyca/cryptography) from 36.0.2 to 39.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>39.0.1 - 2023-02-07</p>
<pre><code>
* **SECURITY ISSUE** - Fixed a bug where ``Cipher.update_into`` accepted Python
buffer protocol objects, but allowed immutable buffers. **CVE-2023-23931**
* Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.0.8.
<p>.. _v39-0-0:</p>
<p>39.0.0 - 2023-01-01
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for OpenSSL 1.1.0 has been removed.
Users on older version of OpenSSL will need to upgrade.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for LibreSSL < 3.5. The new
minimum LibreSSL version is 3.5.0. Going forward our policy is to support
versions of LibreSSL that are available in versions of OpenBSD that are
still receiving security support.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Removed the <code>encode_point</code> and
<code>from_encoded_point</code> methods on
:class:<code>~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicNumbers</code>,
which had been deprecated for several years.
:meth:<code>~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey.public_bytes</code>
and
:meth:<code>~cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey.from_encoded_point</code>
should be used instead.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for using MD5 or SHA1 in
:class:<code>~cryptography.x509.CertificateBuilder</code>, other X.509 builders, and
PKCS7 has been removed.</li>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Dropped support for macOS 10.10 and 10.11, macOS
users must upgrade to 10.12 or newer.</li>
<li><strong>ANNOUNCEMENT:</strong> The next version of <code>cryptography</code> (40.0) will change
the way we link OpenSSL. This will only impact users who build
<code>cryptography</code> from source (i.e., not from a <code>wheel</code>), and specify their
own version of OpenSSL. For those users, the <code>CFLAGS</code>, <code>LDFLAGS</code>,
<code>INCLUDE</code>, <code>LIB</code>, and <code>CRYPTOGRAPHY_SUPPRESS_LINK_FLAGS</code> environment
variables will no longer be respected. Instead, users will need to
configure their builds <code>as documented here</code>_.</li>
<li>Added support for
:ref:<code>disabling the legacy provider in OpenSSL 3.0.x<legacy-provider></code>.</li>
<li>Added support for disabling RSA key validation checks when loading RSA
keys via
:func:<code>~cryptography.hazmat.primitives.serialization.load_pem_private_key</code>,
:func:<code>~cryptography.hazmat.primitives.serialization.load_der_private_key</code>,
and
:meth:<code>~cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateNumbers.private_key</code>.
This speeds up key loading but is :term:<code>unsafe</code> if you are loading potentially
attacker supplied keys.</li>
<li>Significantly improved performance for
:class:<code>~cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305</code></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/d6951dca25de45abd52da51b608055371fbcde4e"><code>d6951dc</code></a> changelog + security fix backport (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/8231">#8231</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/138da90c8450446b19619e3faa77b9da54c34be3"><code>138da90</code></a> workaround scapy bug in downstream tests (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/8218">#8218</a>) (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/8228">#8228</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/69527bc79095c9646d7e839121f0783477892ecc"><code>69527bc</code></a> bookworm is py311 now (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/8200">#8200</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/111deefb659b8d73c56d3ce89458f2df973d60e4"><code>111deef</code></a> backport main branch CI to 39.0.x (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/8153">#8153</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/338a65a7df74e189f6b5d1d3a6315ffa911b21c2"><code>338a65a</code></a> 39.0.0 version bump (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7954">#7954</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/84a3cd7abb16f594d8c315e8aedb4be02583bf6a"><code>84a3cd7</code></a> automatically download and upload circleci wheels (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7949">#7949</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/525c0b3d5d89eab7f953be5de5d2b75da1c816f8"><code>525c0b3</code></a> Type annotate release.py (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7951">#7951</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/46d2a94d1b574abf5b9e88f84fa7400a138c4edb"><code>46d2a94</code></a> Use the latest 3.10 release when wheel building (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7953">#7953</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f150dc15582c05b1b94cf08ed3b1fbc9c4f52267"><code>f150dc1</code></a> fix CI to work with ubuntu 22.04 (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7950">#7950</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/8867724b2b6db528d2900414ef86c122a1f5602a"><code>8867724</code></a> fix README for python3 (<a href="https://github-redirect.dependabot.com/pyca/cryptography/issues/7947">#7947</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pyca/cryptography/compare/36.0.2...39.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 02-08-2023 02:58:00 | 02-08-2023 02:58:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,506 | closed | A possible bug in the transformers logging file | ### System Info
In this file https://github.com/huggingface/transformers/blob/main/src/transformers/utils/logging.py there is a bug that I am not sure of so I wanted to discuss it here before sending a pull request in case I misunderstood anything.
In this function:
```
def remove_handler(handler: logging.Handler) -> None:
"""removes given handler from the HuggingFace Transformers's root logger."""
_configure_library_root_logger()
assert handler is not None and handler not in _get_library_root_logger().handlers
_get_library_root_logger().removeHandler(handler)
```
Shouldn't it be
```
assert handler is not None and handler in _get_library_root_logger().handlers
```
Because if the handler is asserted to not be in the `_get_library_root_logger().handlers`, how will it be removed from the `_get_library_root_logger`?
### Who can help?
@Phi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use this test case:
```
def test_remove_handler():
    # Create a sample handler
    handler = logging.StreamHandler()
    library_logger = _get_library_root_logger()
    library_logger.addHandler(handler)
    # Ensure the handler is in the library logger's handlers list
    assert handler in library_logger.handlers
    # Call remove_handler and pass in the handler as an argument
    remove_handler(handler)
    # Check that the handler is no longer in the library logger's handlers list
    assert handler not in library_logger.handlers
```
### Expected behavior
I expect that there will be no `AssertionError` on testing this test case because I simply added a handler then removed it. | 02-08-2023 01:24:33 | 02-08-2023 01:24:33 | cc @LysandreJik <|||||>Oh yeah great catch @bahgat-ahmed ! I think it was introduced by mistake in https://github.com/huggingface/transformers/pull/10633.
I think some changes need to be done in this module:
- This needs to be updated
- The assertions need to be updated to `ValueError`s.
Would you like to try your hand at it @bahgat-ahmed? Happy to guide you if you'd like.<|||||>Thanks a lot @LysandreJik for this, and for offering the guide! Definitely, happy to try my hand and help the community.
So I think this is the update that you want, right?
```
def remove_handler(handler: logging.Handler) -> None:
"""Removes given handler from the HuggingFace Transformer's root logger."""
_configure_library_root_logger()
if handler is None:
raise ValueError("Handler is None")
if handler not in _get_library_root_logger().handlers:
raise ValueError("Handler is not in the list of handlers")
_get_library_root_logger().removeHandler(handler)
```
If that is okay, tell me and I will make a Pull Request for this modification.
But do you see that it is better to also convert all `assertions` to `ValueErrors` in all the functions in this same file?
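For example, the analogous change to `add_handler` in the same module could look roughly like this (just a sketch mirroring the pattern above; the private helpers are the ones already used inside `logging.py`):
```python
def add_handler(handler: logging.Handler) -> None:
    """adds a handler to the HuggingFace Transformers's root logger."""
    _configure_library_root_logger()

    if handler is None:
        raise ValueError("handler must be a logging.Handler instance, not None")

    _get_library_root_logger().addHandler(handler)
```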
By the way, sorry for my late response. It is my first issue to open on GitHub :), so I thought that your answer to my issue will be sent as a notification on GitHub, but I didn't receive any notification. So, I have just checked now my issue and found your comments :)<|||||>That looks correct to me! If you want to update all assertions in the file, that would be welcome, but then I would split the PR in two: one PR focusing on the errors, and another one focusing on the initial bug you reported.
Does that sound good to you? Please ping me on the PR(s) and I'll be happy to review. Thanks!<|||||>Yes, sure sounds good, @LysandreJik . I can even send separate pull requests from my side. The first one for the bug, and the second one for updating all assertions. If that sounds good to you, just confirm to me :)
You're most welcome, @LysandreJik . And thank you for your support :)<|||||>> Yes, sure sounds good, @LysandreJik . I can even send separate pull requests from my side. The first one for the bug, and the second one for updating all assertions. If that sounds good to you, just confirm to me :)
>
> You're most welcome, @LysandreJik . And thank you for your support :)
The aim is to disable log output to the console because I want to implement an exe wrapper that does not display cmd on the command line, what should I do?

https://github.com/huggingface/diffusers/issues/2394<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,505 | closed | [tests] add missing `report_to none` | wandb keeps on breaking, ensuring it doesn't run in deepspeed tests | 02-07-2023 23:05:13 | 02-07-2023 23:05:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,504 | closed | [deepspeed] deal with models w/o `config.hidden_size` | This PR
1. extends the deepspeed integration config setup to models that use `config.hidden_sizes` instead of `config.hidden_size`
2. and while at it adds a check so that it only ever tries to figure out the hidden size if it's actually needed.
3. and if the model config has neither of the 2 attributes - a proper assert is used (a short sketch of the resulting lookup follows this list).
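A minimal sketch of the resulting lookup (illustrative names, not the exact integration code):
```python
def infer_hidden_size(config):
    # only needed when the DeepSpeed config still contains `auto` buffer sizes
    if hasattr(config, "hidden_size"):
        return config.hidden_size
    if hasattr(config, "hidden_sizes"):
        return max(config.hidden_sizes)  # e.g. configs that expose a list of sizes per stage
    raise ValueError(
        "The model config must define either `hidden_size` or `hidden_sizes` "
        "so that `auto` values in the DeepSpeed config can be filled in."
    )
```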
Fixes: https://github.com/huggingface/transformers/issues/21342 | 02-07-2023 22:44:40 | 02-07-2023 22:44:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc: @pacman100 - you may probably want to sync this special case with accelerate, right? |
transformers | 21,503 | closed | Wrap RemBert integration test forward passes with torch.no_grad() | # What does this PR do?
Fixes #14642. Wrapped the forward pass inside RemBert's integration test with `with torch.no_grad()` to ensure no gradients are computed during inference.
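The change boils down to wrapping the forward pass in `torch.no_grad()`; a self-contained illustration of the effect (with a dummy module, not the actual RemBert test):
```python
import torch
from torch import nn

model = nn.Linear(4, 2)
inputs = torch.randn(1, 4)

model.eval()
with torch.no_grad():  # no gradient bookkeeping during the forward pass
    outputs = model(inputs)

assert not outputs.requires_grad  # saves memory in integration tests
```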
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik Please let me know if this fix works. Thank you:) | 02-07-2023 22:29:08 | 02-07-2023 22:29:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,502 | closed | Replace input_values_processing with unpack_inputs | # What does this PR do?
Updates Hubert and Wav2Vec2 to use the repo-standard `unpack_inputs` decorator instead of `input_values_processing`. Previously, TensorFlow models used `input_values_processing`, a function which processed inputs ready to be passed to the TF layers. This was removed and replaced with `unpack_inputs`, see #16051. For some reason, wav2vec2 wasn't part of this update. [A bug was reported](https://github.com/huggingface/transformers/issues/16051#issuecomment-1067938180), but the conversation and details can't be found in the Discord.
This PR also updates any relevant tests:
* Adds `unittest.skip` decorator so tests are marked as skipped rather than passed
* Modifies batch sizes for tests that were previously skipped for OOM to make sure logic is fully tested.
* Adds back tests which shouldn't have been skipped. This caught the issue with loss not being calculated (see below)
**Note:** The loss calculation for `TFHubertForCTC` wasn't being calculated properly. Without the `unpack_inputs` decorator, when `model.fit` was being called, `labels` were passed into the model as part of the `input_values` dictionary. These were then processed and [part of the `inputs` dictionary](https://github.com/huggingface/transformers/blob/c35bb6de547f8839434c3d5772777c699e9595de/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1645). However, `labels` was always [taken from the function kwargs](https://github.com/huggingface/transformers/blob/c35bb6de547f8839434c3d5772777c699e9595de/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1677) and so loss calculation skipped. [This is why the previous test 'returned the wrong shape'](https://github.com/huggingface/transformers/blob/c35bb6de547f8839434c3d5772777c699e9595de/tests/models/hubert/test_modeling_tf_hubert.py#L328). The issue was resolved for `TFWav2Vec2ForCTC` in a previous PR #18014
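To illustrate why the decorator matters, here is a heavily simplified stand-in (not the real `unpack_inputs` from `modeling_tf_utils`, just the idea): when `model.fit` hands everything over as one dict, the decorator spreads it back into keyword arguments so `labels` actually reaches the signature and the loss branch runs.
```python
import functools


def unpack_inputs_sketch(func):
    @functools.wraps(func)
    def wrapper(self, inputs, **kwargs):
        if isinstance(inputs, dict):  # what model.fit passes in
            merged = {**inputs, **kwargs}
            return func(self, merged.pop("input_values"), **merged)
        return func(self, inputs, **kwargs)

    return wrapper


class TinyCTCModel:
    @unpack_inputs_sketch
    def call(self, input_values, labels=None):
        loss = None if labels is None else float(sum(labels))  # dummy loss
        return {"loss": loss, "logits": input_values}


print(TinyCTCModel().call({"input_values": [0.1, 0.2], "labels": [1, 2]}))  # loss is no longer None
```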
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 02-07-2023 22:01:27 | 02-07-2023 22:01:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,501 | closed | Fix import in Accelerate for find_exec_bs | # What does this PR do?
This seems to have slipped through the cracks of the reorg in Accelerate. Should fix the import error seen in the tests. | 02-07-2023 21:25:32 | 02-07-2023 21:25:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Abusing my admin right to merge this right now since all tests are green and this fixes a test failing on many other PRs :-) |
transformers | 21,500 | closed | Check for mapping/dict in distributed_concat function | # What does this PR do?
* Feature request for checking if input to the distributed_concat function is of type Mapping/Dict.
* The [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function currently checks for the List/Tuple type, but does not check for the Mapping/Dict type.
* The [nested_xla_mesh_reduce](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L171) & [smp_gather](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L1084) functions have an IF statement to check if the input is of type Mapping/Dict and then iterates over the items.
Fixes # (issue): https://github.com/huggingface/transformers/issues/21497
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? **Yes**
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. **Yes**. Link: https://github.com/huggingface/transformers/issues/21497
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? **No**
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @stas00 | 02-07-2023 21:08:14 | 02-07-2023 21:08:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sylvain, wrt failing test - it looks like `src/accelerate/utils/memory.py` is the new location of `find_executable_batch_size` - backcompat breakage in accelerate? need to require `accelerate>=0.16.0` in `setup.py` here?
```
tests/trainer/test_trainer_utils.py:432:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
function = <function TrainerUtilsTest.test_executable_batch_size.<locals>.mock_training_loop_function at 0x7fe748946290>
starting_batch_size = 64, auto_find_batch_size = True
def find_executable_batch_size(
    function: callable = None, starting_batch_size: int = 128, auto_find_batch_size: bool = False
):
    """
    Args:
    A basic decorator that will try to execute `function`. If it fails from exceptions related to out-of-memory or
    CUDNN, the batch size is cut in half and passed to `function` `function` must take in a `batch_size` parameter as
    its first argument.
        function (`callable`, *optional*)
            A function to wrap
        starting_batch_size (`int`, *optional*)
            The batch size to try and fit into memory
        auto_find_batch_size (`bool`, *optional*)
            If False, will just execute `function`
    """
    if function is None:
        return functools.partial(
            find_executable_batch_size,
            starting_batch_size=starting_batch_size,
            auto_find_batch_size=auto_find_batch_size,
        )
    if auto_find_batch_size:
        requires_backends(find_executable_batch_size, "accelerate")
        import accelerate.memory_utils as mem_utils
>       return mem_utils.find_executable_batch_size(function=function, starting_batch_size=starting_batch_size)
E AttributeError: module 'accelerate.memory_utils' has no attribute 'find_executable_batch_size'
src/transformers/trainer_utils.py:651: AttributeError
```<|||||>This is fixed in #21501 |
transformers | 21,499 | closed | Replace the frequent permutation generations with a pre-allocated permutation table | # What does this PR do?
This PR fixes a performance issue in the BigBird model. In BigBird, the random masks are regenerated each time, which results in a huge number of calls to `np.random.permutation`, each of which carries a significant overhead.
The function `bigbird_block_rand_mask` is only executed when the values of `from_seq_length` and `to_seq_length` are less than [4096](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py#L56) and the values of `from_block_size` and `to_block_size` are 64. With all the above limitations, the number of all possible permutations is less than 249,984. Since all the variables mentioned above are configured only once at the initialization stage, we generate all possible permutations at that stage and later only draw random indexes into the table with `numpy.random.randint` whenever a permutation is needed.
This optimization significantly reduces the computation on the CPU, and its speedup for the whole model is about 1.12X in TorchBench.
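The PR enumerates the exact set of permutations reachable under these size limits; as a simplified illustration of the same trade-off (not the PR's exact bookkeeping), one can pre-draw a pool of permutations once and afterwards only sample indices into it:
```python
import numpy as np


class PermutationPool:
    """Pre-draw permutations once; later "random permutations" become cheap index lookups."""

    def __init__(self, num_elements, pool_size=1024, seed=0):
        rng = np.random.default_rng(seed)
        self.pool = np.stack([rng.permutation(num_elements) for _ in range(pool_size)])

    def sample(self):
        # np.random.randint is far cheaper than generating a fresh permutation on every call
        return self.pool[np.random.randint(len(self.pool))]
```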
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-07-2023 19:44:48 | 02-07-2023 19:44:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21499). All of your documentation changes will be reflected on that endpoint.<|||||>@ArthurZucker Could you please also help me check the CI errors? In the first CI test 'add model like runner', the error is `src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py:271:21: F821 Undefined name permutations`. but this PR doesn't modify the code in bigbird_pegasus and I can't find the related imports etc. in bigbird_pegasus. <|||||>Sure! Seems like you need to run `make style` and `make fix-copies`.
If `make style` does not work, (meaning the tests for quality are still failing) then you have to do `pip install --upgrade -e ".[dev]". Basically to make sure that the `black` version you are using is the correct one<|||||>You also need to rebase on main ! <|||||>Okay! Now fixing this should only need `make fixup`. But to make sure you have the correct black version run `pip install -e .` should do the trick. Once you are ready for a final review, feel free to ping @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>sorry for the late update. will finish it this week. <|||||>Hi @ArthurZucker , I updated the docstrings. As for the fast function, I explained why it couldn't be replaced with `np.random.randint` in [this reply](https://github.com/huggingface/transformers/pull/21499#discussion_r1154818120). Could you double-check it? <|||||>Sorry for the delay, went in holidays. This seems like a big modification, so I think having performance stats would be better before merging (this adds a layer of complexity which we are not usually big fans of)><|||||>> Sorry for the delay, went in holidays. This seems like a big modification, so I think having performance stats would be better before merging (this adds a layer of complexity which we are not usually big fans of)>
Hi @ArthurZucker, thanks for your comments! I agree that a big project like huggingface should have high code quality requirements! no worries. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,498 | closed | Add XLM-V to Model Doc | Hi,
as discussed in #21330 it would be good to have an extra entry for the new XLM-V model in the Model Doc.
This PR adds it with some additional information about the model and conducted experiments with it. | 02-07-2023 19:37:24 | 02-07-2023 19:37:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>CI is failing, I'm going to read the Flan-T5 PR (https://github.com/huggingface/transformers/pull/19892) to see how it should be done!<|||||>You just need to add the model type (same as what you picked for the page in the doc) and name in the configuration_auto file. The PR you mention also does it :-) <|||||>Failures are unrelated to this PR so merging! |
transformers | 21,497 | closed | Distributed Concat (Nested Gather - HF Trainer): Support for Mapping/Dict | ### Feature request
* The request is with respect to the [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function call in the [nested gather](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer.py#L3182) function in the HuggingFace Trainer.
* The feature request: Addition of an additional **IF** statement to check if the **input (tensor)** is of type **Mapping/Dict**.
* The [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function currently checks for the **List/Tuple** type, but does not check for the **Mapping/Dict** type.
```
def distributed_concat(tensor: Any, num_total_examples: Optional[int] = None) -> Any:
    try:
        if isinstance(tensor, (tuple, list)):
            return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
        tensor = atleast_1d(tensor).contiguous()
        output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
        dist.all_gather(output_tensors, tensor)
        concat = torch.cat(output_tensors, dim=0)
        # truncate the dummy elements added by SequentialDistributedSampler
        if num_total_examples is not None:
            concat = concat[:num_total_examples]
        return concat
    except AssertionError:
        raise AssertionError("Not currently using distributed training")
```
### Motivation
* The [nested_xla_mesh_reduce](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L171) & [smp_gather](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L1084) functions have an **IF** statement to check if the input is of type **Mapping/Dict** and then iterates over the items. The code runs without throwing any errors.
* For example in [nested_xla_mesh_reduce](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L171):
```
def nested_xla_mesh_reduce(tensors, name):
    if is_torch_tpu_available():
        import torch_xla.core.xla_model as xm
        if isinstance(tensors, (list, tuple)):
            return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors))
        if isinstance(tensors, Mapping):
            return type(tensors)(
                {k: nested_xla_mesh_reduce(t, f"{name}_{i}") for i, (k, t) in enumerate(tensors.items())}
            )
        tensors = atleast_1d(tensors)
        return xm.mesh_reduce(name, tensors, torch.cat)
    else:
        raise ImportError("Torch xla must be installed to use `nested_xla_mesh_reduce`")
```
* The [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function currently checks for the **List/Tuple** type, but does not check for the **Mapping/Dict** type.
```
def distributed_concat(tensor: Any, num_total_examples: Optional[int] = None) -> Any:
    try:
        if isinstance(tensor, (tuple, list)):
            return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
        tensor = atleast_1d(tensor).contiguous()
        output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
        dist.all_gather(output_tensors, tensor)
        concat = torch.cat(output_tensors, dim=0)
        # truncate the dummy elements added by SequentialDistributedSampler
        if num_total_examples is not None:
            concat = concat[:num_total_examples]
        return concat
    except AssertionError:
        raise AssertionError("Not currently using distributed training")
```
* The same code and data (which contains labels of type **Mapping/Dict**) run without any errors when using torch/cuda/python. The nested_xla_mesh_reduce function is called and no errors are thrown.
* The same code however throws an error when I run it using deepspeed, since the [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function is called, and the items of the **Mapping/Dict** object are not appropriately gathered and converted.
* The fix seems to be simple, I currently got the code to work by adding the following code to the [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function:
```
if isinstance(tensor, Mapping):
    return type(tensor)(
        {k: distributed_concat(t, num_total_examples) for k, t in tensor.items()}
    )
```
### Your contribution
I could submit a **PR** with the following lines of code added to the [distributed_concat](https://github.com/huggingface/transformers/blob/b9af152efb748b1bff8f6fe0130e62ebb8e11a53/src/transformers/trainer_pt_utils.py#L188) function:
```
if isinstance(tensor, Mapping):
    return type(tensor)(
        {k: distributed_concat(t, num_total_examples) for k, t in tensor.items()}
    )
```
**The entire function would look something like this:**
```
def distributed_concat(tensor: Any, num_total_examples: Optional[int] = None) -> Any:
    try:
        if isinstance(tensor, (tuple, list)):
            return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
        if isinstance(tensor, Mapping):
            return type(tensor)(
                {k: distributed_concat(t, num_total_examples) for k, t in tensor.items()}
            )
        tensor = atleast_1d(tensor).contiguous()
        output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
        dist.all_gather(output_tensors, tensor)
        concat = torch.cat(output_tensors, dim=0)
        # truncate the dummy elements added by SequentialDistributedSampler
        if num_total_examples is not None:
            concat = concat[:num_total_examples]
        return concat
    except AssertionError:
        raise AssertionError("Not currently using distributed training")
``` | 02-07-2023 19:24:46 | 02-07-2023 19:24:46 | Happy to review a PR if you want to make one :-) <|||||>Sure, will do! Thanks!<|||||>I just raised a PR (https://github.com/huggingface/transformers/pull/21500). I haven't raised a PR before for HuggingFace. Let me know if I need to change or fix anything or if I forgot to do something. Thanks! |
transformers | 21,496 | closed | Replace inefficient torch.sqrt taking scalar input with numpy.sqrt | # What does this PR do?
This PR fixes a performance issue in the Reformer model. When Reformer runs on GPUs, there are two calls to `torch.sqrt` and `torch.rsqrt` on the scalar `self.attention_head_size`. The profiling results show that these calls trigger `aten::to`, `aten::copy`, `cudaStreamSynchronize`, etc., because `torch.sqrt` and `torch.rsqrt` only accept tensors as arguments, so the scalar first has to be converted to a one-element tensor. Replacing them with `numpy.sqrt` removes those extra operations. Here is an [example script](https://gist.github.com/FindHao/abbaa7a8a0b74173d9eb2233684bb6a9) to test the impacted function.
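As a sketch of the pattern (standalone toy example, not the actual attention code):
```python
import numpy as np
import torch

attention_head_size = 64
device = "cuda" if torch.cuda.is_available() else "cpu"
vectors = torch.randn(2, 8, attention_head_size, device=device)

# before: wraps the Python scalar in a one-element tensor, triggering aten::to / copy / sync on GPU
scaled_old = vectors * torch.rsqrt(torch.tensor(attention_head_size, dtype=vectors.dtype, device=device))

# after: the reciprocal square root of a plain scalar is computed once on the CPU
scaled_new = vectors * (1 / np.sqrt(attention_head_size))

assert torch.allclose(scaled_old, scaled_new)
```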
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-07-2023 19:23:41 | 02-07-2023 19:23:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for working on this. Can you add the improvements that you have in terms of performances? (this will be useful if we look back at this code to know the motivation).<|||||>> Thanks for working on this. Can you add the improvements that you have in terms of performances? (this will be useful if we look back at this code to know the motivation).
@ArthurZucker @sgugger Thanks for your approval! Since this function call doesn't take a big portion of the whole model, the speedup at the model level is trivial. But at the function level, function `_len_and_dim_norm` obtains 27X speedup and removes CPU-GPU synchronizations.<|||||>Thanks for the number! |
transformers | 21,495 | closed | Add inverse sqrt learning rate scheduler | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the original inverse sqrt learning rate scheduler from [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762).
It is argued that this scheduler achieves the best performance when scaling ViTs on indefinite training times ([Zhai et al. (2022)](https://arxiv.org/abs/2106.04560)).
This PR adds a `get_inverse_sqrt_schedule` function and also updates the test in `tests/optimization/test_optimization.py` and the docs.
The implementation is adapted from:
- [https://github.com/google-research/big_vision/blob/f071ce68852d56099437004fd70057597a95f6ef/big_vision/utils.py#L930](https://github.com/google-research/big_vision/blob/f071ce68852d56099437004fd70057597a95f6ef/big_vision/utils.py#L930)
Timescale equals the no. of warmup steps by default as in:
- [https://github.com/google-research/big_vision/blob/fd2d3bd2efc9d89ea959f16cd2f58ae8a495cd44/big_vision/configs/proj/clippo/train_clippo.py#L144](https://github.com/google-research/big_vision/blob/fd2d3bd2efc9d89ea959f16cd2f58ae8a495cd44/big_vision/configs/proj/clippo/train_clippo.py#L144)
- [https://github.com/google-research/big_vision/blob/6ff6d080d62c1f47e2e4eeb8b6474deb38dfe406/big_vision/configs/proj/scaling_laws/train_vit_g.py#L79](https://github.com/google-research/big_vision/blob/6ff6d080d62c1f47e2e4eeb8b6474deb38dfe406/big_vision/configs/proj/scaling_laws/train_vit_g.py#L79)
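For intuition, a simplified sketch of an inverse-square-root schedule with linear warmup (the merged implementation may differ in details such as how `timescale` is handled):
```python
from torch.optim.lr_scheduler import LambdaLR


def inverse_sqrt_lambda(num_warmup_steps, timescale=None):
    timescale = timescale if timescale is not None else num_warmup_steps

    def lr_lambda(current_step):
        if current_step < num_warmup_steps:
            return current_step / max(1, num_warmup_steps)  # linear warmup to the peak LR
        shift = timescale - num_warmup_steps
        return (timescale / (current_step + shift)) ** 0.5  # decay proportional to 1/sqrt(step)

    return lr_lambda


# usage: scheduler = LambdaLR(optimizer, inverse_sqrt_lambda(num_warmup_steps=1000))
```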
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-07-2023 17:38:39 | 02-07-2023 17:38:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,494 | closed | Epoch is zero is updated in epoch 2 instead of 1 twice in the Callback | ### System Info
Google Colab, transformers 4.26
### Who can help?
@sta
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import TrainerCallback, TrainingArguments,TrainerControl,TrainerState
from typing import Dict, Any
class DebugCallback(TrainerCallback):
    def on_train_begin(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs: Dict,
    ) -> None:
        print("training start")

    def on_evaluate(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs: Dict,
    ) -> None:
        print("valiidation")

    def on_epoch_begin(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs: Dict,
    ) -> None:
        self.state = state
        print("epoch", state.epoch)
        print("train")

    def on_epoch_end(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs: Dict,
    ) -> None:
        print("val")
        print("epoch", state.epoch)

    def on_train_end(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs: Dict,
    ) -> None:
        print("test")


dc = DebugCallback()
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=food["train"].remove_columns("id"),
    eval_dataset=food["test"].remove_columns("id"),
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
    callbacks=[dc],
)
```
### Expected behavior
Expected outcome:
```
epoch 0
epoch 1
epoch 2
```
Actual outcome:
```
training
epoch 0
train
val
epoch 0.992
valiidation
epoch 0
train
val
epoch 1.992
valiidation
epoch 1
train
```
So epoch 0 gets logged twice? | 02-07-2023 17:21:46 | 02-07-2023 17:21:46 | Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?<|||||>@jackapbutler go for it :)<|||||>Hey @franz101, I'm unable to reproduce the issue you've highlighted, do you have access to the `TrainingArguments` that you used for this experiment?
When I run the code I get the following expected output;
```bash
[I 230214 16:26:49 yo:27] training start
[I 230214 16:26:49 yo:46] train
[I 230214 16:26:49 yo:47] epoch 0
[I 230214 16:27:06 yo:56] val
[I 230214 16:27:06 yo:57] epoch 0.96
[I 230214 16:27:12 yo:36] valiidation
[I 230214 16:27:13 yo:46] train
[I 230214 16:27:13 yo:47] epoch 0.96
[I 230214 16:27:29 yo:56] val
[I 230214 16:27:29 yo:57] epoch 1.96
[I 230214 16:27:36 yo:36] valiidation
[I 230214 16:27:36 yo:46] train
[I 230214 16:27:36 yo:47] epoch 1.96
[I 230214 16:27:53 yo:56] val
[I 230214 16:27:53 yo:57] epoch 2.96
[I 230214 16:28:00 yo:36] valiidation
[I 230214 16:28:00 yo:66] test
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,493 | closed | Cleanup quality | # What does this PR do?
This PR cleans up a couple of things after the big #21480
- adds the ruff cache to gitignore just in case (that folders comes with its own auto-generated gitignore so shouldn't be necessary but better be safe)
- remove all mentions of flake8 and isort from the doc
- use the per file rules to enable back quality checks on all inits. | 02-07-2023 16:23:54 | 02-07-2023 16:23:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,492 | closed | :pen: fix typo in pytorch semantic segmentation readme | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
There is a minor typo (wrong variable name) in the script for creating a `DatasetDict` in the `examples/pytorch/semantic-segmentation/README.md`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 02-07-2023 13:51:55 | 02-07-2023 13:51:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,491 | closed | Whisper: Add functionality to decode without conditioning on previous text. | # What does this PR do?
Add whisper functionality to decode *without* conditioning on previous text. https://github.com/openai/whisper/blob/7858aa9c08d98f75575035ecd6481f462d66ca27/whisper/transcribe.py#L278
When condition_on_prev_text=False (in the config), the following happens:
Decoder attention is masked from the consecutive timestamps, such that each whisper segment within the chunk input does not attend to text in previous segments.
This has been shown to drastically reduce hallucination. It works for any batch_size, providing a rapid decoding strategy.
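A rough sketch of the masking idea (hypothetical helper, not the code in this PR; the handling of timestamp-token ids is simplified):
```python
import torch


def segment_causal_mask(token_ids, timestamp_begin):
    # Group decoded tokens into segments delimited by timestamp tokens
    # (ids >= timestamp_begin) and allow attention only within a segment,
    # on top of the usual causal constraint.
    seq_len = token_ids.shape[-1]
    segment_ids = (token_ids >= timestamp_begin).cumsum(dim=-1)
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool, device=token_ids.device))
    same_segment = segment_ids.unsqueeze(-1) == segment_ids.unsqueeze(-2)
    return causal & same_segment  # True = position may be attended to
```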
Fixes # (issue)
https://github.com/huggingface/transformers/issues/21467
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-07-2023 13:28:12 | 02-07-2023 13:28:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21491). All of your documentation changes will be reflected on that endpoint.<|||||>Great job working on this! Can you run `make style` and make sure you rebase on main first? We recently merged a huge PR related to the code styling! See #21480. This will make sure all tests are green! π will also review soon<|||||>Feel free to ping me once you are done adressing all the previous comments and want another review π
<|||||>@ArthurZucker @sanchit-gandhi Thanks for the review and comments sanchit!
Unfortunately I am pushing for the ICCV/Interspeech deadline for the next couple of weeks so I don't have time at the moment to polish this for merge. Feel free to edit directly / use as inspo.
Btw it seems from my results that it can still occasionally repeat words (I think this is due to greedy decoding), but overall this is worth the speedup compared to beam search of 5<|||||>Best of luck with the ICCV/Interspeech deadline @m-bain! If either of us get the chance we'll have a look at the PR, otherwise feel free to pick it up when you're freed up!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,490 | closed | Sanity check the type of id2label and label2id arguments of from_pretrained for TokenClassification models | Fixes #20773
Sanity check the type of id2label and label2id arguments of from_pretrained for TokenClassification models | 02-07-2023 13:13:27 | 02-07-2023 13:13:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger One of TF test is failing due to timeout. Not sure if it is related to changes .
<|||||>No, this isn't linked to the PR. Thanks for your contribution! |
transformers | 21,489 | closed | Add limit_all_gathers option to fsdp_config and fix forward_prefetch bug | fixes #21156
Add limit_all_gathers option to fsdp_config.
Fix forward_prefetch bug. | 02-07-2023 12:53:51 | 02-07-2023 12:53:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger All changes resolved. |
transformers | 21,596 | closed | Dead link in Documentation π | Dead link found on: https://huggingface.co/docs/transformers/tasks/summarization#inference ("_For more details about the different text generation strategies and parameters for controlling generation, check out the_ [Text Generation API](https://huggingface.co/docs/transformers/tasks/main_classes/text_generation)") . The URL should lead to the Text Generation documentation, which can actually be found at: [https://huggingface.co/docs/transformers/main_classes/text_generation](https://huggingface.co/docs/transformers/main_classes/text_generation). | 02-07-2023 11:05:06 | 02-07-2023 11:05:06 | Hey there! Thanks for the report! Would you like to open a PR to update the corresponding file in `transformers`? https://github.com/huggingface/transformers/tree/main/docs/source/en/tasks <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @danadascalescu00 - thanks for raising this issue! I'm closing it now as it's resolved in #22274 |
transformers | 21,488 | closed | changed "ot" to "to" | null | 02-07-2023 10:06:45 | 02-07-2023 10:06:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,487 | closed | [`Doc`] Fix int8 docs | # What does this PR do?
Since the `0.37.0` release of `bitsandbytes`, all GPU architectures should support int8 matrix multiplication. This PR clarifies this in the documentation.
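For context, int8 loading is typically enabled like this (illustrative; requires `bitsandbytes` and `accelerate` to be installed):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    device_map="auto",
    load_in_8bit=True,  # weights are quantized to int8 via bitsandbytes
)
```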
cc @sgugger
| 02-07-2023 08:49:57 | 02-07-2023 08:49:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,486 | closed | [Tests] Improve flax test_attention_outputs | # What does this PR do?
A copy of https://github.com/huggingface/transformers/pull/20701 by [NielsRogge](https://github.com/NielsRogge) for making corresponding changes in flax. These changes are also necessary for passing tests in [flax convnext implementation PR](https://github.com/huggingface/transformers/pull/21485).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi | 02-07-2023 08:25:09 | 02-07-2023 08:25:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,485 | closed | Convnext flax | # Flax Implementation of `facebook/convnext-tiny-224`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Flax: @sanchit-gandhi
## TODO
Last Updated : 10 Feb, 2023
- [x] Fixing tests failed in `ci/circleci: tests_flax` actions.
- [ ] Uploading [Shubhamai/convnext-tiny-224](https://huggingface.co/Shubhamai/convnext-tiny-224) flax weights to [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224).
- [x] Depends on merge of [_Improve flax test_attention_outputs_ PR](https://github.com/huggingface/transformers/pull/21486) to pass (or technically skip) `test_attention_outputs` test.
| 02-07-2023 08:24:31 | 02-07-2023 08:24:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21485). All of your documentation changes will be reflected on that endpoint.<|||||>@sanchit-gandhi this PR is also ready for your review in case it was missed. And again, thank you so much for taking the time to review the PR.<|||||>@sanchit-gandhi Reminder incase my previous message got missed! Also the https://github.com/huggingface/transformers/pull/21472 ( previous reviews implemented ) and https://github.com/huggingface/transformers/pull/21867 PR are awaiting review. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,484 | closed | Add/fix documentation around VideoMAEForPretraining's `bool_masked_pos` argument | ### System Info
N/A
### Who can help?
@NielsRogge @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `VideoMAEForPretraining` class is missing a docstring for an important argument to the `forward` function, `bool_masked_pos`. Can see this in the docs for the `main` branch [here](https://huggingface.co/docs/transformers/main/en/model_doc/videomae#transformers.VideoMAEForPreTraining.forward)...it's listed in the function signature but not in the documented arguments.
There is an [example code snippet](https://huggingface.co/docs/transformers/main/en/model_doc/videomae#transformers.VideoMAEForPreTraining.forward.example) that creates this input tensor that you can see in the docs, but I'm not sure it's correct when applied to video inputs with `batch_size > 1` (if not done carefully). When I calculated it example-by-example, I was getting errors, which is what led me to this issue.
I dug a little deeper and noticed that the test suite for VideoMAE has [different logic](https://github.com/huggingface/transformers/blob/5b49376202863d3798d2ff8a8ba61590542a1141/tests/models/videomae/test_modeling_videomae.py#L145-L149) for creating `bool_masked_pos`. When I updated my training code to use that logic, my problems went away. I assume this is related to the note in the tests that mentions each video needing the same number of masked patches.
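For reference, a sketch of that kind of logic (the helper name and the `seq_len` derivation here are just illustrative):
```python
import torch


def random_bool_masked_pos(batch_size, seq_len, mask_ratio=0.9):
    # every video in the batch gets the same *number* of masked patches,
    # but the positions are drawn independently per example
    num_masked = int(mask_ratio * seq_len)
    mask = torch.zeros(batch_size, seq_len, dtype=torch.bool)
    for i in range(batch_size):
        masked_idx = torch.randperm(seq_len)[:num_masked]
        mask[i, masked_idx] = True
    return mask


# seq_len is the number of patch tokens, e.g. (num_frames // tubelet_size) * (image_size // patch_size) ** 2
```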
### Expected behavior
To resolve this issue, I think we should:
1. Add a docstring for `bool_masked_pos` that explains what it is for users
2. Update the example snippet to use the correct logic from the test suite.
Even better I guess would be to handle this for users in the `ImageProcessor`, but I'll leave that for a separate issue. | 02-07-2023 01:36:33 | 02-07-2023 01:36:33 | Thanks for raising this issue!
VideoMAE indeed uses the same mask ratio (number of masked patches) per video to make batching possible. See [this class](https://github.com/MCG-NJU/VideoMAE/blob/main/masking_generator.py) which the authors use to generate boolean masks.
In the tests, I just define the same mask for all examples in the batch, but in practice one would use different masks (but with the same mask ratio) in a batch.<|||||>@NielsRogge can you add the docstring for the param, please? :) I think you'd be best person to write it.
As for fixing the example...maybe we write a quick function and include it in the snippet.<|||||>Thanks for raising @nateraw ! We should definitely update the docstring and example snippet.
I'm not sure `bool_masked_pos` should be generated in the image processor. The image processor takes the images and makes them ready to be passed into the model; however, it doesn't handle other transformations which might be part of the training procedure, e.g. augmentation. For similar tasks, where input tokens are randomly masked, this is handled in e.g. `DataCollatorForLanguageModeling`, rather than the tokenizer. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Reopening as it's not resolved yet |
transformers | 21,483 | closed | [tokenizer] sanitize saved config | this PR fixes tokenizer's `save_pretrained` to remove the `name_or_path` entry from `tokenizer_config.json` because:
1. it usually contains the local path that was used to save the model, which is not only invalid once published on the hub, it could potentially reveal some personal information.
2. it is not used anywhere, since one needs to know `name_or_path` before they can load this file.
it also adjusts one tokenizer test not to test for the `name_or_path` entry | 02-07-2023 00:20:47 | 02-07-2023 00:20:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,482 | closed | Running Trainer.train() with deepspeed throws OSError: handle is closed error when saving checkpoint | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, via deepspeed
- Using distributed or parallel set-up in script?: yes, via deepspeed
### Who can help?
@stas00, @pacman100
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I've been trying to use the Trainer with deepspeed using the following guide: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/deepspeed#trainer-deepspeed-integration
Below is my python code:
```
#!/usr/bin/env python
# coding=utf-8
# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for sequence to sequence.
"""
# You can also adapt this script on your own sequence to sequence task. Pointers for this are left as comments.
import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
import evaluate
import transformers
from transformers import (
AutoConfig,
AutoTokenizer,
HfArgumentParser,
M2M100Tokenizer,
MBart50Tokenizer,
MBart50TokenizerFast,
MBartTokenizer,
MBartTokenizerFast,
Trainer,
TrainingArguments,
AutoModelForCausalLM,
default_data_collator,
set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version
import bittensor
from itertools import chain
from tqdm.auto import tqdm
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.27.0.dev0")
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/translation/requirements.txt")
logger = logging.getLogger(__name__)
# A list of all multilingual tokenizer which require src_lang and tgt_lang attributes.
MULTILINGUAL_TOKENIZERS = [MBartTokenizer, MBartTokenizerFast, MBart50Tokenizer, MBart50TokenizerFast, M2M100Tokenizer]
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `huggingface-cli login` (necessary to use this script "
"with private models)."
)
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
source_lang: str = field(default=None, metadata={"help": "Source language id for translation."})
target_lang: str = field(default=None, metadata={"help": "Target language id for translation."})
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a jsonlines)."})
validation_file: Optional[str] = field(
default=None,
metadata={
"help": "An optional input evaluation data file to evaluate the metrics (sacrebleu) on a jsonlines file."
},
)
test_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input test data file to evaluate the metrics (sacrebleu) on a jsonlines file."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
max_source_length: Optional[int] = field(
default=1024,
metadata={
"help": (
"The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
max_target_length: Optional[int] = field(
default=128,
metadata={
"help": (
"The maximum total sequence length for target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
val_max_target_length: Optional[int] = field(
default=None,
metadata={
"help": (
"The maximum total sequence length for validation target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded. Will default to `max_target_length`."
"This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
"during ``evaluate`` and ``predict``."
)
},
)
pad_to_max_length: bool = field(
default=False,
metadata={
"help": (
"Whether to pad all samples to model maximum sentence length. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch. More "
"efficient on GPU but very bad for TPU."
)
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
)
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
)
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
)
},
)
num_beams: Optional[int] = field(
default=None,
metadata={
"help": (
"Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
"which is used during ``evaluate`` and ``predict``."
)
},
)
ignore_pad_token_for_loss: bool = field(
default=True,
metadata={
"help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
},
)
source_prefix: Optional[str] = field(
default=None, metadata={"help": "A prefix to add before every source text (useful for T5 models)."}
)
forced_bos_token: Optional[str] = field(
default=None,
metadata={
"help": (
"The token to force as the first generated token after the :obj:`decoder_start_token_id`.Useful for"
" multilingual models like :doc:`mBART <../model_doc/mbart>` where the first generated token needs to"
" be the target language token.(Usually it is the target language token)"
)
},
)
def __post_init__(self):
if self.dataset_name is None and self.train_file is None and self.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
# accepting both json and jsonl file extensions, as
# many jsonlines files actually have a .json extension
valid_extensions = ["json", "jsonl"]
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in valid_extensions, "`train_file` should be a jsonlines file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in valid_extensions, "`validation_file` should be a jsonlines file."
if self.val_max_target_length is None:
self.val_max_target_length = self.max_target_length
def load_raw_datasets(name: str, confName: str) -> DatasetDict:
if name == "bittensor":
dataset = bittensor.dataset(
no_tokenizer=True,
# batch_size=cfg.training.train_batch_size,
# block_size=cfg.dataset.block_size,
)
dataloader = dataset.dataloader(1000)
bittensor_dataset = {"text": []}
for batch in tqdm(dataloader, desc="Loading data from bittensor IPFS"):
bittensor_dataset["text"].extend(batch)
raw_datasets = Dataset.from_dict(bittensor_dataset)
dataset.close() # Avoid leaving threadqueue running.
return raw_datasets
if os.path.exists(name):
data_files = {"text": name}
dataset_args = {}
extension = os.path.splitext(name)[-1].lstrip(".")
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = True
raw_datasets = load_dataset(
extension, data_files=data_files, **dataset_args)
raw_datasets = raw_datasets["text"]
else:
raw_datasets = load_dataset(name, confName)
return raw_datasets
def load_model_and_tokenizer(model_args: ModelArguments):
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# tokenizer.pad_token = cfg.tokenizer.pad_token
if tokenizer.pad_token is None and tokenizer.eos_token is not None:
tokenizer.pad_token = tokenizer.eos_token
# model = AutoModelForCausalLM.from_pretrained(
# name,
# from_tf=bool(".ckpt" in name),
# config=config,
# )
# model.to('cuda')
# model.resize_token_embeddings(len(tokenizer))
# We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
# on a small vocab and want a smaller embedding size, remove this test.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
return tokenizer, model
def preprocess(blockSize, tokenizer, raw_datasets):
# First we tokenize all the texts.
column_names = raw_datasets.column_names
text_column_name = "text" if "text" in column_names else column_names["train"][0]
if True is True:
pad = False
else:
pad = "max_length"
def group_texts(examples):
# print(examples)
# Concatenate all texts.
concatenated_examples = {
k: list(chain(*examples[k])) for k in examples.keys()}
# print(concatenated_examples)
total_length = len(concatenated_examples[list(examples.keys())[0]])
if total_length >= blockSize:
total_length = (
total_length // blockSize
) * blockSize
# Split by chunks of max_len.
result = {
k: [
t[i: i + blockSize]
for i in range(0, total_length, blockSize)
]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
def tokenize_fn(examples):
# result = tokenizer(
# examples[text_column_name],
# padding=pad,
# truncation=True,
# max_length=cfg.dataset.block_size,
# )
# result["labels"] = result["input_ids"].copy()
# return result
return tokenizer(examples[text_column_name])
tokenized_datasets = raw_datasets.map(
tokenize_fn,
batched=True,
remove_columns=text_column_name,
load_from_cache_file=not False,
desc="Running tokenizer on dataset",
)
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=None,
load_from_cache_file=not False,
desc=f"Grouping texts in chunks of {blockSize}",
)
return lm_datasets
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
# information sent is the one passed as arguments along with your Python/PyTorch versions.
send_example_telemetry("run_translation", model_args, data_args)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
logger.info(f"Training/evaluation parameters {training_args}")
tokenizer, model = load_model_and_tokenizer(model_args)
# dataset = load_raw_datasets("bittensor", None)
dataset = load_raw_datasets("wikitext", "wikitext-2-raw-v1")
tokenized_datasets = preprocess(2, tokenizer, dataset)
if "train" not in tokenized_datasets.column_names:
tokenized_datasets = tokenized_datasets.train_test_split(
test_size=5 / 100
)
tokenized_datasets_test_valid = tokenized_datasets["test"].train_test_split(
test_size=0.5
)
tokenized_datasets["test"] = tokenized_datasets_test_valid["train"]
tokenized_datasets["validation"] = tokenized_datasets_test_valid["test"]
train_dataset = tokenized_datasets["train"]
eval_dataset = tokenized_datasets["validation"]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
# tokenizer=tokenizer,
# compute_metrics=compute_metrics,
)
trainer.train()
if __name__ == "__main__":
    main()
```
The JSON config I'm using for deepspeed is:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
And the command I'm using is:
`deepspeed examples/pytorch/translation/run-text-gen.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir=bennyD --evaluation_strategy epoch --num_train_epochs 2 --dataset_name wikitext --dataset_config "wikitext-2-raw-v1"`
The full stack trace:
```
Exception in thread MsgRouterThr:
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/horza/.local/lib/python3.10/site-packages/wandb/sdk/interface/router.py", line 69, in message_loop
msg = self._read_message()
File "/home/horza/.local/lib/python3.10/site-packages/wandb/sdk/interface/router_queue.py", line 32, in _read_message
msg = self._response_queue.get(timeout=1)
File "/usr/lib/python3.10/multiprocessing/queues.py", line 117, in get
res = self._recv_bytes()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 212, in recv_bytes
self._check_closed()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
```
It's worth noting that if I run the following code: https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py used in the guide, and modify it to make checkpoints, I do not get the same error.
Additionally if I add `--save_strategy no` to my command, it completes with no errors. But I need the checkpoints.
Please help, been trying to figure this one out for a while.
### Expected behavior
The command runs with checkpoints and completes without errors. | 02-06-2023 22:57:20 | 02-06-2023 22:57:20 | thank you for the detailed report, @benproton
As you may have derived from the traceback, this has nothing to do with deepspeed.
You have an issue inside `wandb`, which is a 3rd party package, you can either remove it:
```
pip uninstall wandb
```
or, as a better long-term solution, add `--report_to none` to your command line, which will disable wandb (or any other reporting package you happen to have installed in your environment)
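(If you build `TrainingArguments` in code rather than via the command line, the equivalent is sketched below — the `output_dir` value is just illustrative:)
```python
from transformers import TrainingArguments

# same effect as passing --report_to none on the command line
training_args = TrainingArguments(output_dir="bennyD", report_to="none")
```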
Please try again and let me know if it fixes the problem.<|||||>Hey! Thanks so much for the quick reply.
Hmm, still exits at the point of the checkpoint, just not with the error I mentioned:
```
{'loss': 6.8437, 'learning_rate': 5e-05, 'epoch': 0.01}
0%|β | 500/224238 [51:15<381:53:08, 6.14s/it][INFO|trainer.py:2753] 2023-02-06 16:37:12,461 >> Saving model checkpoint to bennyD/checkpoint-500
[INFO|configuration_utils.py:453] 2023-02-06 16:37:12,462 >> Configuration saved in bennyD/checkpoint-500/config.json
[INFO|configuration_utils.py:359] 2023-02-06 16:37:12,464 >> Configuration saved in bennyD/checkpoint-500/generation_config.json
[INFO|modeling_utils.py:1720] 2023-02-06 16:37:12,809 >> Model weights saved in bennyD/checkpoint-500/pytorch_model.bin
[2023-02-06 16:37:18,583] [INFO] [engine.py:3500:save_16bit_model] Saving model weights to bennyD/checkpoint-500/pytorch_model.bin
[2023-02-06 16:37:18,583] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/pytorch_model.bin...
[2023-02-06 16:37:31,072] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/pytorch_model.bin.
[2023-02-06 16:37:31,187] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!
/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-02-06 16:37:31,225] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-02-06 16:37:31,225] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-02-06 16:37:31,841] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-02-06 16:37:31,843] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-02-06 16:37:38,871] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 442827
[2023-02-06 16:37:38,875] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 442828
[2023-02-06 16:37:45,767] [ERROR] [launch.py:324:sigkill_handler] ['/usr/bin/python3', '-u', 'examples/pytorch/translation/run-text-gen.py', '--local_rank=1', '--deepspeed', 'tests/deepspeed/ds_config_zero3.json', '--model_name_or_path', 'EleutherAI/gpt-neo-1.3B', '--output_dir=bennyD', '--evaluation_strategy', 'epoch', '--num_train_epochs', '3', '--dataset_name', 'wikitext', '--dataset_config', 'wikitext-2-raw-v1', '--report_to', 'none'] exits with return code = -9
```
This is with the following command: `deepspeed examples/pytorch/translation/run-text-gen.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir=bennyD --evaluation_strategy epoch --num_train_epochs 3 --dataset_name wikitext --dataset_config "wikitext-2-raw-v1" --report_to none`<|||||>I don't see any traceback there.
This often happens when you run out of cpu memory.
As it happens during saving the checkpoint, does the problem go away if you set ` "stage3_gather_16bit_weights_on_model_save": true` to `false`?<|||||>Dude! That worked, thanks so much, would never have got that. Logs:
0%|β | 500/224238 [53:12<396:24:51, 6.38s/it][WARNING|trainer.py:2707] 2023-02-06 18:39:45,438 >> deepspeed.save_16bit_model didn't save the model, since stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use zero_to_fp32.py to recover weights
[INFO|trainer.py:2753] 2023-02-06 18:39:45,439 >> Saving model checkpoint to bennyD/checkpoint-500
[INFO|configuration_utils.py:453] 2023-02-06 18:39:45,440 >> Configuration saved in bennyD/checkpoint-500/config.json
[INFO|configuration_utils.py:359] 2023-02-06 18:39:45,442 >> Configuration saved in bennyD/checkpoint-500/generation_config.json
[INFO|modeling_utils.py:1720] 2023-02-06 18:39:45,795 >> Model weights saved in bennyD/checkpoint-500/pytorch_model.bin
[2023-02-06 18:39:45,825] [INFO] [engine.py:3491:save_16bit_model] Did not save the model bennyD/checkpoint-500/pytorch_model.bin because `stage3_gather_16bit_weights_on_model_save` is False
[WARNING|trainer.py:2707] 2023-02-06 18:39:45,825 >> deepspeed.save_16bit_model didn't save the model, since stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use zero_to_fp32.py to recover weights
[2023-02-06 18:39:45,865] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!
/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-02-06 18:39:45,873] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-02-06 18:39:45,873] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-02-06 18:39:46,413] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-02-06 18:39:46,414] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-02-06 18:40:37,554] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-02-06 18:40:37,560] [INFO] [engine.py:3397:_save_zero_checkpoint] zero checkpoint saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-02-06 18:40:37,615] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step500 is ready now!
[2023-02-06 18:40:37,656] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!
[2023-02-06 18:40:37,679] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-02-06 18:40:37,679] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-02-06 18:40:38,307] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-02-06 18:40:38,310] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-02-06 18:41:19,334] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-02-06 18:41:19,341] [INFO] [engine.py:3397:_save_zero_checkpoint] zero checkpoint saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-02-06 18:41:19,443] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step500 is ready now!
0%|β | 512/224238
So what does that do and what is the impact of setting it to false? Thanks again
<|||||>Excellent. It's because it tries to gather the model on cpu and you don't have enough cpu memory to do that. But you don't need to gather the model on cpu.
You can read here about the cost of using `stage3_gather_16bit_weights_on_model_save` and more importantly what you need to know if you're not using it.
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#getting-the-model-weights-out
In particular please make sure to read all the way through to and including `Offline FP32 Weights Recovery` - which you will have to use when you finished training.
You may close the Issue if you're satisfied, @benproton
If you run into new issues please always open a new Issue. Thank you.<|||||>Ok thanks. Is that because I'm offloading to cpu? If I choose not to do that, will that prevent the issue?<|||||>indeed. the offloading takes a lot of space on cpu.<|||||>Last question then I'll close. Can we therefore assume that the reason I was able to run https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py with checkpoints successfully - without any errors - is because that script isn't as intensive on the cpu? Thanks<|||||>It's hard to tell, as they are different programs. It's possible that with one program itself you were using more memory than the other
it's very easy to tell though, just add `--skip_memory_metrics 0`, run a few steps and it'll print you full stats on memory usage - so you can compare the 2 programs. do not use this in production since it adds an overhead.
In general, if you were able to start training you should be able to continue training without cpu memory oom events. This is one exception: due to `zero.Init`, when the model gets inited it loads the model directly onto the gpu, so your CPU memory can actually be quite small (smaller than gpu) and it'll still work. However, if a user chooses to save the full model they have to consolidate it first on cpu, and that's where there might not be enough memory. That setting is set to `True` by default to make it easy for users to start right out of the box. As they learn the ropes they will then discover more efficient ways of doing things.
Also unrelated to your questions: If you have plenty of free gpu memory you may want to consider turning offloading off for one or both config entries and even switch to zero stage 2. Each of these will use more gpu memory but will make your training faster. Measure the different options and see which one gives you the fastest training. Again all the stats are printed at the end of each training.<|||||>That's all incredibly helpful, thanks so much. I think the main culprit was wandb, disabling that stopped the errors. I just tried turning off cpu offloading altogether and training is now running much faster as you anticipated and the checkpoint saving is still working. I have a good amount of GPU memory across 2 x GPUs (48GB total) and I've been attempting to run larger models across multiple GPUs as the previous code I was using was hindered by relying on the capabilities of a single GPU, so from what I've learned from the docs, zero stage 3 for sure seems the way to go for this, correct? Goal was to prove I can achieve this before investing in more GPUs so mission accomplished! Again thanks so much for all of your help.<|||||>You're welcome, @benproton. I'm glad your goal has been reached without spending additional $$.
And zero stage 2 is even faster than stage 3 if you have enough gpu memory to not need to shard model weights.
Also enabling `--gradient_checkpointing 1` will use less gpu memory at the cost of a ~20% slowdown, which would enable a larger batch size or a switch to stage 2, so the overall training will be faster.
Spend some time experimenting with different knobs and you should be able to get an even faster training.<|||||>Typically the optimal approach would be along these steps:
1. enable `--gradient_checkpointing 1` if oom then
2. try zero stage 2 first - if oom then
3. switch to zero 3 - if oom then
4. enable `offload_param` to `cpu` - if oom then
5. enable `offload_optimizer` to `cpu` - if oom
6. repeat all of the above with bs=1 (if it wasn't 1 already) and if possible shorter seq-len - if using `generate` use smaller beam search, etc. or alternatively always start with `bs=1` and instead progress from there.
7. obviously use mixed half-precision over fp32 - so bf16 on ampere and fp16 on earlier gpus
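In `TrainingArguments` terms the knobs above map roughly to the following (values are illustrative only — tune them for your hardware, and pick the ZeRO stage/offload settings in the deepspeed config file):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,       # step 1: trade ~20% speed for gpu memory
    per_device_train_batch_size=1,     # step 6: start small, grow if there is headroom
    gradient_accumulation_steps=16,    # keeps the effective batch size where you need it
    bf16=True,                         # step 7: use fp16=True instead on pre-Ampere GPUs
    deepspeed="ds_config_zero2.json",  # steps 2-5 live in this config file
)
```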
remember you have `--gradient_accumulation_steps=XXX` to get whatever effective batch size you need regardless of your gpu size and `--per_device_train_batch_size`<|||||>All super helpful pointers thanks again <|||||>@stas00 I've been experimenting and everything is working great when using a hugging face dataset such as the example I gave. However, whenever I try using the bittensor dataset the program always just hangs early on, either while training or while evaluating with nothing obvious appearing in the logs. Any ideas? Is there anything I can do to determine what is causing the hanging? Thanks.
E.g.:
```
Time to load utils op: 0.00036215782165527344 seconds
[INFO|trainer.py:1516] 2023-02-09 22:55:56,474 >> ***** Running training *****
[INFO|trainer.py:1517] 2023-02-09 22:55:56,474 >> Num examples = 39291
[INFO|trainer.py:1518] 2023-02-09 22:55:56,474 >> Num Epochs = 4
[INFO|trainer.py:1519] 2023-02-09 22:55:56,474 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1520] 2023-02-09 22:55:56,474 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1521] 2023-02-09 22:55:56,474 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1522] 2023-02-09 22:55:56,474 >> Total optimization steps = 9824
[INFO|integrations.py:579] 2023-02-09 22:55:56,994 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
0%| | 0/9824 [00:00<?, ?it/s][2023-02-09 22:56:02,149] [WARNING] [stage3.py:1939:step] 1 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding torch.cuda.empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time
1%|β | 50/9824 [01:54<6:00:31, 2.21s/it][INFO|trainer.py:2753] 2023-02-09 22:57:52,401 >> ***** Running Evaluation *****
[INFO|trainer.py:2755] 2023-02-09 22:57:52,401 >> Num examples = 1034
[INFO|trainer.py:2758] 2023-02-09 22:57:52,401 >> Batch size = 8
49%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 32/65 [00:31<00:32, 1.01it/s]
```
<|||||>yes, and I will reply once you open a new Issue and fully document the Issue.
I will give you a quick pointer: https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-hanging-solutions.md but we won't continue this discussion in this Issue.
This issue has been resolved and closed for good. New problems require new Issues.
thank you.<|||||>> yes, and I will reply once you open a new Issue and fully document the Issue.
>
> I will give you a quick pointer: https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-hanging-solutions.md but we won't continue this discussion in this Issue.
>
> This issue has been resolved and closed for good. New problems require new Issues.
>
> thank you.
Done, thank you @stas00 |
transformers | 21,481 | closed | Bump oauthlib from 3.2.1 to 3.2.2 in /examples/research_projects/decision_transformer | Bumps [oauthlib](https://github.com/oauthlib/oauthlib) from 3.2.1 to 3.2.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/releases">oauthlib's releases</a>.</em></p>
<blockquote>
<h2>3.2.2</h2>
<h2>OAuth2.0 Provider:</h2>
<ul>
<li>CVE-2022-36087</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/blob/master/CHANGELOG.rst">oauthlib's changelog</a>.</em></p>
<blockquote>
<h2>3.2.2 (2022-10-17)</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li>CVE-2022-36087</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/oauthlib/oauthlib/commit/e6c33e41a8ce6dadff387cdc4613a55b63d1827e"><code>e6c33e4</code></a> Add 3.2.2 version</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/4a4d65f8eeecfe7d778269466871c5c15fe9c1bc"><code>4a4d65f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/832">#832</a> from oauthlib/3.2.1</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/2e40b412c844ecc4673c3fa3f72181f228bdbacd"><code>2e40b41</code></a> Merge pull request from GHSA-3pgj-pg6c-r5p7</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b4bdd09c56aa5dedb475529e75ce73c092ca0898"><code>b4bdd09</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a> from dasm/master</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/5d85c61998692643dd9d17e05d2646e06ce391e8"><code>5d85c61</code></a> Fix IPV6 regex used to check redirect_uri</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/e514826eea15f2b62bbc13da407b71552ef5ff4c"><code>e514826</code></a> Add check of performance of ipv6 check</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/9aa45aaff0cdeab258d18c025cf66e9bdba529c0"><code>9aa45aa</code></a> Restored test for port 0.</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/f52f641d763e4958d108e875e0cd6fca50d110f2"><code>f52f641</code></a> Merge branch 'oauthlib:master' into master</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/ed0cb63945c4a5940b185823809693b7f97989ad"><code>ed0cb63</code></a> Removed unused query and fragment</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/d05c388078b45285ac4a012c568a5e2d56556a34"><code>d05c388</code></a> Removed dependency on split</li>
<li>Additional commits viewable in <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.1...v3.2.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 02-06-2023 22:08:12 | 02-06-2023 22:08:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,480 | closed | Update quality tooling for formatting | # What does this PR do?
This updates the quality tools for 2023. Mainly:
- use black 23
- replace isort and flake8 by ruff, which is faster and merges imports contrarily to isort
- change a few rules in the isorting | 02-06-2023 19:45:22 | 02-06-2023 19:45:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,479 | closed | [`pipeline`] A simple fix for half-precision & 8bit models | # What does this PR do?
Currently on the `main` branch of `transformers` if a user wants to run a `pipeline` using large models (thus, ideally loaded with `device_map=...`) and in half precision (or in int8), they may encounter some issues when calling `pipeline` with `top_p` & `top_k` sampling:
```bash
RuntimeError: "topk_cpu" not implemented for 'Half'
```
## Snippet to reproduce & explanations:
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=20, temperature=1, do_sample=True, top_p=0.95, top_k=60, num_return_sequences=3)
text = "What can you tell me about the LHC?"
response = pipe(text)
print(response[0]["generated_text"])
```
This is because the `input_ids` are automatically set on `cpu` since the argument `device` is not passed when initializing the `pipeline`. A model that is loaded with `device_map=...` (i.e. with `accelerate`) always sets the output tensor of the model to the `device` of the input tensor thanks to the forward hooks. Therefore when calling the top_k method, the output tensor is in fp16 (because the model has been loaded in fp16) & on `cpu` hence the torch error above.
Currently a hack to fix this is to add `device=0` when initializing the `pipeline` but this leads to inconsistent and undesirable behaviours for some cases, for example when loading large models in several GPUs, since the call `model.to(self.device)` will break some internals (the hooks will be still there but the weights will be set on the wrong devices). A snippet to reproduce below:
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="balanced", torch_dtype=torch.float16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=20, temperature=1, do_sample=True, top_p=0.95, top_k=60, num_return_sequences=3, device=0)
text = "What can you tell me about the LHC?"
response = pipe(text)
print(response[0]["generated_text"])
```
adding this hack also breaks the usage of `pipeline` with int8 models, since the `to` method is blocked for these models:
```bash
ValueError: `.to` is not supported for `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
Thus, I propose to fix this by simply checking whether a model has been loaded with `accelerate` by looking at the attribute `hf_device_map` , and set the model on the correct device only if it has not been loaded with accelerate as backend. This fixes 3 bugs: using `pipeline` with a fp16 model that has been loaded with `accelerate` without having any error in case of multi-gpu usage, using `pipeline` with a fp16 model w `accelerate` & sampling strategies, and using `pipeline` with int8 models & sampling strategies.
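In pseudo-code, the proposed check inside `Pipeline.__init__` is along these lines (an illustrative fragment only — see the review discussion below for the shape the fix finally took):
```python
# Only move the model ourselves when accelerate has NOT already dispatched it.
if self.framework == "pt" and self.device.type != "cpu":
    if getattr(self.model, "hf_device_map", None) is None:
        self.model = self.model.to(self.device)
    # otherwise the hooks added by accelerate already place weights and outputs correctly
```
The point is simply that `.to()` is never called on a model that `accelerate` has already dispatched.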
cc @sgugger @Narsil | 02-06-2023 19:44:23 | 02-06-2023 19:44:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the well thought out issue and proposed fix.
I don't particularly like the fix because it depends on some weird internal and still forces users to use `device_map` and `device` iiuc.
Couldn't we just use `device_map` and use `accelerate` api to figure out where to put the inputs? (most likely cuda:0 but still cpu if no gpus are available I think.)
That or just do something special for `device_map` without asking where the model is (if the API doesn't exist or is tricky).
Imo using `device_map` and `device` should be an error (ambiguous intent) <|||||>Thanks for the feedback @Narsil !
I think
> Imo using device_map and device should be an error (ambiguous intent)
Makes sense !
Another fix would be to force-upcast the logits in fp32 when doing top_k & top_p sampling on the generation side only if the logits are on `cpu`, is this solution a reasonable fix @gante ? Happy to open a PR to fix it!<|||||>> force-upcast
I would highly advise against it too. There's a limit to magic. Doing half precision on cpu should crash in a lot of places. We shouldn't upcast on behalf of a user that explicitly asked for half precision imo. That's breaking user intent.
But the user also asked for GPU, that's where we're breaking his intent and that's what should be fixed IMO.
Does `accelerate` allow to know on which device is the start of the model ?<|||||>I see, makes sense!
> Does accelerate allow to know on which device is the start of the model ?
I am not sure here, maybe @sgugger & @muellerzr knows better<|||||>> I am not sure here, maybe @sgugger & @muellerzr knows better
if not, the pipeline could have the simplest heuristic `'cuda:0' if torch.cuda.is_available() else 'cpu'` which should work 99% of the time.
But it wouldn't if a user specified an odd map (which is why having direct access would be better).<|||||>What do you think:
```diff
diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
index 30402b36e..e698f1aa3 100644
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -749,7 +749,7 @@ class Pipeline(_ScikitCompat):
framework: Optional[str] = None,
task: str = "",
args_parser: ArgumentHandler = None,
- device: Union[int, str, "torch.device"] = -1,
+ device: Union[int, str, "torch.device"] = None,
torch_dtype: Optional[Union[str, "torch.dtype"]] = None,
binary_output: bool = False,
**kwargs,
@@ -764,6 +764,20 @@ class Pipeline(_ScikitCompat):
self.image_processor = image_processor
self.modelcard = modelcard
self.framework = framework
+
+ # Special handling
+ if self.framework == "pt" and device is not None:
+ self.model = self.model.to(device=device)
+
+ if device is None:
+ # `accelerate` device map
+ hf_device_map = getattr(self.model, "hf_device_map", None)
+ if hf_device_map is not None:
+ # Take the first device used by `accelerate`.
+ device = next(iter(hf_device_map.values()))
+ else:
+ device = -1
+
if is_torch_available() and self.framework == "pt":
if isinstance(device, torch.device):
self.device = device
@@ -775,13 +789,10 @@ class Pipeline(_ScikitCompat):
self.device = torch.device(f"cuda:{device}")
else:
self.device = device
+
self.torch_dtype = torch_dtype
self.binary_output = binary_output
- # Special handling
- if self.framework == "pt" and self.device.type != "cpu":
- self.model = self.model.to(self.device)
-
# Update config with task specific parameters
task_specific_params = self.model.config.task_specific_params
if task_specific_params is not None and task in task_specific_params:
```
Here we just modify the default device **when** the model uses `accelerate` 's `device_map`.
We still depend on something magic, but it only modifies the `default` device, and doesn't modify `model` unless `device` was specified by user (which is correct in terms of intent IMO)<|||||>I think that would totally work @Narsil ! Happy to change the PR with your proposed changes, let me know!<|||||>Sure. Let's update the doc too.<|||||>Side notes:
- you should probably do something if the user passes a device and the model has a `hf_device_map` (at least a warning) as the line `model.to(device)` will probably screw things up (it will at least error if there are some weights offloaded)
- the device on which the model is executed is determined by this rule in Accelerate, maybe you should use the same code? I can also store it on the Accelerate side in a special attribute but then you'd have to wait for a release.
```py
if set(device_map.values()) == {"cpu"} or set(device_map.values()) == {"cpu", "disk"}:
main_device = "cpu"
else:
main_device = [d for d in device_map.values() if d not in ["cpu", "disk"]][0]
```<|||||>Thanks a lot for the valuable feedback @Narsil @sgugger !
I updated the PR and added more clarification (and also a new section) in the docs<|||||>the failing test is independent of our PR! Merging!
Thanks for all your comments! |
transformers | 21,478 | closed | Fix epoch number when resuming training | # What does this PR do?
As @stas00 pointed out in #21390, the epoch number reported after skipping some batches in a training resumed was wrong. This PR fixes it and adds a test. | 02-06-2023 19:35:59 | 02-06-2023 19:35:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,477 | closed | OPT: BLIP2-ready `prepare_inputs_for_generation` | # What does this PR do?
This PR makes 2 changes to OPT's `prepare_inputs_for_generation`:
1. Adds the possibility of passing `inputs_embeds`
2. Removes the case when `attention_mask is None`: a) it is redundant, the base case of inferring an attention mask with all ones is also in the model itself [here](https://github.com/huggingface/transformers/blob/baf4bacb1f10ecb63f0efc98d07463ae8799c7e3/src/transformers/models/opt/modeling_opt.py#L636) b) the shape isn't right when `inputs_embeds` is passed
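For reference, the resulting pattern looks roughly like this (a sketch of the idea, not the exact merged code):
```python
def prepare_inputs_for_generation(
    self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
):
    # with a cache, only the last token is new
    if past_key_values:
        input_ids = input_ids[:, -1:]

    # inputs_embeds can only be used on the first forward pass, before any cache exists;
    # afterwards we fall back to input_ids as usual
    if inputs_embeds is not None and past_key_values is None:
        model_inputs = {"inputs_embeds": inputs_embeds}
    else:
        model_inputs = {"input_ids": input_ids}

    model_inputs.update(
        {
            "past_key_values": past_key_values,
            "use_cache": kwargs.get("use_cache"),
            "attention_mask": attention_mask,
        }
    )
    return model_inputs
```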
Slow OPT tests were run locally. | 02-06-2023 17:30:07 | 02-06-2023 17:30:07 | cc @NielsRogge <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,476 | closed | [`BLIP`] update blip path on slow tests | # What does this PR do?
This PR updates the path to the image we use for running BLIP slow tests - as pointed out by @NielsRogge better to upload these images on the Hub in case they get removed from the original place
Happy also to move the Hub repo on `hf-internal-testing` but I am not a member of the org
Also the image is quite large, so I prefer to upload it on the Hub rather than pushing it on the repo here
cc @NielsRogge @sgugger | 02-06-2023 16:56:23 | 02-06-2023 16:56:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you add it to hf-internal-testing instead? Might be better there. Happy to merge any PR or approve your demand to join ;-)<|||||>Thanks for adding! I've transferred it :) <|||||>@sgugger , I think that we can merge this! π |
transformers | 21,475 | closed | Generate: TF can now generate from embeddings in encoder-decoder models | # What does this PR do?
TF generation test addition PR 2 (out of ???).
In an effort to move generation integration tests to be framework-agnostic, I'll be adding low-hanging fruit to TF. This PR brings the ability to generate from input embeddings, with encoder-decoder models (or, more specifically, from non-input_ids inputs). The code added is almost copy-paste from PT.
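For context, this is the kind of usage it unlocks (a rough sketch — the checkpoint and exact call pattern are illustrative, not taken from this PR's tests):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: I love cats.", return_tensors="tf")
# build the encoder embeddings ourselves and generate from them instead of from input_ids
inputs_embeds = model.get_input_embeddings()(inputs.input_ids)
generated = model.generate(inputs_embeds=inputs_embeds, attention_mask=inputs.attention_mask)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```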
Since this PR made a few changes in the main code path for generate, the following slow tests were run to ensure XLA compatibility:
- [x] GPT2
- [x] T5 | 02-06-2023 16:35:51 | 02-06-2023 16:35:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(merging -- the failing test, `test_from_pretrained_dynamic_model_distant` is a known flaky) |
transformers | 21,474 | closed | Generate: TF `.generate()` can now be exported with dynamic length | # What does this PR do?
As requested by @mfuntowicz, makes TF `.generate()` exportable with dynamic input length π₯ | 02-06-2023 14:54:01 | 02-06-2023 14:54:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,473 | closed | Removing `more_itertools` dependency. | # What does this PR do?
Removes `more_itertools` optional dependency.
Fixes #20508
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 02-06-2023 13:25:17 | 02-06-2023 13:25:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't think the failure is linked in any way to this PR, is it ? |
transformers | 21,472 | closed | Resnet flax | # Flax Implementation of `microsoft/resnet-50`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi I guess :sweat_smile:
## Status
Last Updated - Sunday, 12 February 2023
### TODO
- [x] Blocked on merge of [this PR](https://github.com/huggingface/transformers/pull/21581) to add support for BatchNorm layers.
- [ ] Uploading [Shubhamai/resnet-50](https://huggingface.co/Shubhamai/resnet-50) flax weights to [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50). | 02-06-2023 13:11:46 | 02-06-2023 13:11:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Very cool! Sorry to only reply here now - looks like you've made a really solid start to this PR! Let's get the Batch Norm PR merged ASAP and then go full send on Flax ResNet! π
Feel free to ping me with any questions / queries! More than happy to help with the integration!<|||||>@sanchit-gandhi the PR is also now ready for your review, thanks a lot for your time. <|||||>Thanks for the review, I will make the changes soon. Many of the reviews here also apply to the [Flax Convnext PR](https://github.com/huggingface/transformers/pull/21485) so will make corresponding changes there too. <|||||>Hey @Shubhamai! Really nice work on this PR - just a few small changes to go now! Feel free to ping me once you're happy with the last bits and I'll get you a final review. Taking a look at Flax RegNet in the meantime!<|||||>Made the request changes and apologies for the late reply, had a busy schedule lately.<|||||>Thanks for the review @amyeroberts π Feel free to propagate the changes forward to [Flax RegNet](https://github.com/huggingface/transformers/pull/21867) @Shubhamai and we can get a final review there!<|||||>We can merge when the CI is green! π’<|||||>Thank you so much for the review & merge @sanchit-gandhi @amyeroberts , really appreciate your work & time on this β€οΈ |