repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 20,368 | closed | Add Chinese-CLIP implementation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the Chinese-CLIP model to the Transformers repo. The Chinese-CLIP model was introduced in [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335). Chinese CLIP is an implementation and adaptation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing Chinese-based cross-modal retrieval and can also serve as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. This model was contributed by [OFA-Sys](https://huggingface.co/OFA-Sys). The original GitHub repo of Chinese-CLIP can be found [at this link](https://github.com/OFA-Sys/Chinese-CLIP). Currently we have released our model weights on the [Huggingface ModelHub](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16). Compared with the original OpenAI CLIP, we changed the text encoder to a Chinese RoBERTa encoder, and therefore reimplemented the config, modeling and preprocessor modules of Chinese-CLIP. Necessary unit tests and documents have been added.
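For context, a minimal zero-shot image-text matching sketch with the classes added in this PR could look as follows (the checkpoint is the one released above; the example image URL and candidate captions are placeholders, and the snippet is illustrative rather than the official usage example):
```python
import requests
import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

checkpoint = "OFA-Sys/chinese-clip-vit-base-patch16"
model = ChineseCLIPModel.from_pretrained(checkpoint)
processor = ChineseCLIPProcessor.from_pretrained(checkpoint)

# placeholder test image and candidate Chinese captions
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["一只猫", "一只狗"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# image-text similarity scores -> probabilities over the candidate captions
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```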
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-22-2022 06:56:08 | 11-22-2022 06:56:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger
<|||||>> Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger
So in conclusion, all I need to do is to copy all the modules imported from other models and add `# Copied from` statements. The use of feature extractor is still acceptable at this point. Am I right? @ydshieh <|||||>> > Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger
>
> So in conclusion, all I need to do is to copy all the modules imported from other models and add `# Copied from` statements. The use of feature extractor is still acceptable at this point. Am I right? @ydshieh
Yes :-)<|||||>Hi @ydshieh @sgugger, I have updated my code to remove the import of the configuration and modeling files from other models. Can it be merged now? Meanwhile, happy Thanksgiving and best wishes ❤️ !<|||||>> Hi @yangapku Thank you for adding this model! Thank you so much!
>
> I left a few comments, but there might be more comments next week, as I haven't review all the current files.
Excuse me, apart from the proposed comments on the model implementation code and documents, are there any more comments on the other code changes? Let me fix them together 😄. @ydshieh <|||||>Hi @yangapku
I haven't continued the review process yet after getting back from the weekend. I will review what you have addressed according to our comments, as well as what I haven't reviewed last time.
Hope you can understand it's not super easy to have a full review in one go for large PRs like this 🙏. Even for a small PR, it's normal for the review to be an iterative process :-)
(and we sometimes get distracted by other tasks that pop up 😅).<|||||>> Hi @yangapku
>
> I haven't continued the review process yet after getting back from the weekend. I will review what you have addressed according to our comments, as well as what I haven't reviewed last time.
>
> Hope you can understand it's not super easy to have a full review in one go for large PRs like this 🙏. Even for a small PR, it's normal for the review to be an iterative process :-)
>
> (and we sometimes get distracted by other tasks that pop up 😅).
I fully understand 👍 . Okay, I will manage to address the existing comments and try to work them all out before tomorrow's review.<|||||>@ydshieh All the comments mentioned above have been addressed. <|||||>@ydshieh Hi, is the PR able to be merged now? Thank you very much! ❤️ <|||||>@sgugger I feel there must be a very good reason why we want to have `CLIPTextTransformer` and `CLIPVisionTransformer`, and use these components in `CLIPTextModel`, `CLIPVisionModel` and `CLIPModel`.
(Potentially to avoid `CLIPPreTrainedModel` in another `CLIPPreTrainedModel` which might cause some issues - at least if we ever want to have a TF port).
Do you think here we need to avoid this line
```
self.text_model = ChineseCLIPTextModel(text_config, add_pooling_layer=False)
```
and to create `ChineseCLIPTextTransformer` and use it?
<|||||>Hi @yangapku , other than the above comment, LGTM! But let's wait @sgugger 's response.
There are a few places where I believe we can still use `# copied from` (probably need some tweaks) - I can help on this before merge.<|||||>@ydshieh I think this was done this way just to be more aligned with the original checkpoints in the CLIP case. Here it works fine with the checkpoint, so I wouldn't over-complexify things<|||||>@yangapku Before we merge, could you run
```python
RUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/
```
I got 5 failures. You can focus on the first/last ones at this moment. Let me know if you need help on fixing them 🙏
```
FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'
FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:
FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:
FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:
FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it
```
<|||||>> @yangapku Before we merge, could you run
>
> ```python
> RUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/
> ```
>
> I got 5 failures. You can focus on the first/last ones at this moment. Let me know if you need help on fixing them 🙏
>
> ```
> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'
> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:
> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:
> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:
> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it
> ```
Okay I will try to fix them today.<|||||>> > @yangapku Before we merge, could you run
> > ```python
> > RUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/
> > ```
> >
> > I got 5 failures. You can focus on the first/last ones at this moment. Let me know if you need help on fixing them 🙏
> > ```
> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'
> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:
> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:
> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:
> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it
> > ```
>
> Okay I will try to fix them today.
@ydshieh The first and last failed cases have been fixed. Now only the failed test cases related to TorchScript remain. Meanwhile, to fix the first failed case, I had to remove the `# Copied from` comment for `ChineseCLIPTextModel`, since it has diverged from `BertModel` with our custom config_class `ChineseCLIPTextConfig`.<|||||>@ydshieh Hi, is the PR able to be merged now? Do I have to fix the test cases related to TorchScript? If so, more help is needed since I am not so familiar with it 😢 .<|||||>I will take a look at those 3 tests @yangapku . <|||||>@yangapku I pushed the remaining fix. Will merge once the final CI is green 🚀 🚀 🚀
Thank you very much for your work! 💯 <|||||>Thank you very much for your brilliant support! @ydshieh @sgugger <|||||>Hi @yangapku Just a follow-up. From your branch, I see the file
```
convert_chinese_clip_original_pytorch_to_hf.py
```
was last modified on Nov 22 (the change on Nov 29 doesn't count). However, the modeling file has changed quite a lot since then due to our review comments. I just want to make sure the conversion script still works correctly, and that the original checkpoints and the converted HF checkpoints still have the same outputs on some test examples.
It would be super nice if you can double check, but it's your call, it's just a suggestion.
(It's always good to make sure the users get the right checkpoints to use :-))
<|||||>@ydshieh Hi, I have ensured that this conversion script works correctly 😄 . In fact, today we have also updated the other 3 model scales (ViT-L/14, ViT-L/14@336px, ViT-H/14) on our HF model hub, during which I used this script to convert our original models to HF format. After the conversion, I tested all the converted HF checkpoints (the `pytorch_model.bin`), and all of them work as expected.<|||||>Thank you @yangapku ! |
transformers | 20,367 | closed | Can't assign custom vocab_files_names in Wav2Vec2Tokenizers | ### System Info
python: 3.8
transformers: 4.23.1
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer
def main() -> None:
# [NOTE]: please insert save_dir path
output_dir = r"./output_dir"
encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder1_tokenizer.vocab_files_names = {
"vocab_file": "encoder1_vocab.json",
"tokenizer_config_file": "encoder1_tokenizer_config.json",
}
encoder2_tokenizer.vocab_files_names = {
"vocab_file": "encoder2_vocab.json",
"tokenizer_config_file": "encoder2_tokenizer_config.json",
}
encoder1_tokenizer.save_vocabulary(save_directory=output_dir)
encoder2_tokenizer.save_vocabulary(save_directory=output_dir)
if "__main__" in __name__:
main()
```
### Expected behavior
This is a problem that occurred when a bi-encoder had to be built on top of Wav2Vec2.
When saving the model, to prevent files from being overwritten because of duplicated names when the tokenizers are saved, I wanted to give the saved files different names.
I overrode vocab_files_names as shown in the Reproduction section.
But later I found out that the files had not been renamed,
so I looked into it and found the following problem:
tokenization_wav2vec2.py > [L127-161](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L127-L161)
```python
class Wav2Vec2CTCTokenizer(PreTrainedTokenizer):
"""
Constructs a Wav2Vec2CTC tokenizer.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
Args:
vocab_file (`str`):
File containing the vocabulary.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sentence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sentence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
word_delimiter_token (`str`, *optional*, defaults to `"|"`):
The token used for defining the end of a word.
do_lower_case (`bool`, *optional*, defaults to `False`):
Whether or not to accept lowercase input and lowercase the output when decoding.
**kwargs
Additional keyword arguments passed along to [`PreTrainedTokenizer`]
"""
vocab_files_names = VOCAB_FILES_NAMES # [NOTE]: here
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
model_input_names = ["input_ids", "attention_mask"]
def __init__(
```
VOCAB_FILES_NAMES sets the default value for vocab_files_names
tokenization_wav2vec2.py > [L595-606](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L595-L606)
```python
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
if not os.path.isdir(save_directory):
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
return
vocab_file = os.path.join(
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] # [NOTE]: here
)
with open(vocab_file, "w", encoding="utf-8") as f:
f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
return (vocab_file,)
```
However, if you look at the save_vocabulary function, it does not use the instance attribute "vocab_files_names" but the module-level constant "VOCAB_FILES_NAMES", so the file name does not change even if you override "vocab_files_names".
So I want to change that "VOCAB_FILES_NAMES" part to "self.vocab_files_names".
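A minimal sketch of what that change could look like — this is just the snippet quoted above with the constant swapped for the instance attribute, in the same module context (same imports, logger and `self.encoder`), not the actual patched source:
```python
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    if not os.path.isdir(save_directory):
        logger.error(f"Vocabulary path ({save_directory}) should be a directory")
        return

    vocab_file = os.path.join(
        save_directory,
        # use the overridable instance attribute instead of the module-level constant
        (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["vocab_file"],
    )

    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

    return (vocab_file,)
```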
And this applies to both Wav2Vec2CTCTokenizer and Wav2Vec2Tokenizer. | 11-22-2022 06:24:30 | 11-22-2022 06:24:30 | Might be of interest to @sanchit-gandhi<|||||>Hey @jp1924
I think there's some confusion between `vocab_file`, `vocab_files_names` and `VOCAB_FILES_NAMES`:
* `vocab_file`: an **argument** that denotes the path to the file containing the tokeniser's vocabulary
* `vocab_files_names`: an **attribute** of the class `Wav2Vec2CTCTokenizer` (not an input argument). Note that this is a required attribute of the base class `PreTrainedTokenizerBase`:
https://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/src/transformers/tokenization_utils_base.py#L1388-L1390
We set this attribute purely to correctly initialise the `Wav2Vec2CTCTokenizer`.
* `VOCAB_FILES_NAMES`: a **dictionary** mapping. Maps the `vocab_file` to the correct name for saving (`vocab.json`)
Of these three, the only one we should ever have to change is `vocab_file` (specifying the right path to our tokeniser vocabulary). The other two are used internally to correctly initialise the PreTrained tokeniser.
If you want to save two distinct vocabulary files, you have two options:
1. Save them in different directories (`output_dir_1` and `output_dir_2`)
2. Pass the argument `filename_prefix` when you save the vocabulary
Option 1:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer
def main() -> None:
output_dir_1 = r"./output_dir_1"
output_dir_2 = r"./output_dir_2"
encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder1_tokenizer.save_vocabulary(save_directory=output_dir_1)
encoder2_tokenizer.save_vocabulary(save_directory=output_dir_2)
if "__main__" in __name__:
main()
```
-> this will save encoder1's vocab under `output_dir_1` and encoder2's vocab under `output_dir_2`.
Option 2:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer
def main() -> None:
output_dir = r"./output_dir"
encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoder1_tokenizer.save_vocabulary(save_directory=output_dir, filename_prefix="encoder1")
encoder2_tokenizer.save_vocabulary(save_directory=output_dir, filename_prefix="encoder2")
if "__main__" in __name__:
main()
```
-> this will save encoder1's vocab as `encoder1-vocab.json` and encoder2's vocab as `encoder2-vocab.json` (both under `output_dir`)<|||||>I didn't think I could save it using the prefix. Thank you for letting me know! |
transformers | 20,366 | closed | trainer.evaluate infinite loop problem | ### System Info
system info
OS: Ubuntu 18.04.6 LTS
GPUS: RTX 3090 * 2
CUDA: 11.1
python: 3.8
transformers: 4.23.1
pytorch: 1.10.1+cu111
NCCL: 2.10.3+cuda11.1
### Who can help?
@sgugger
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import TrainingArguments, Trainer, BertTokenizerFast, HfArgumentParser
from transformers.utils import ModelOutput
from transformers.trainer_utils import is_main_process
from datasets import load_dataset, Dataset
import torch
import torch.nn as nn
import torch.distributed as dist
class DummyModeloutput(ModelOutput):
loss: torch.FloatTensor = None
logits: torch.FloatTensor = None
class DummyModel(nn.Module):
def __init__(self) -> None:
super(DummyModel, self).__init__()
self.dummy_layer = nn.Linear(10, 10)
self.count = 0
def forward(self, input_ids, labels, *args, **kwargs):
rank = dist.get_rank()
device = torch.device(rank)
if is_main_process(rank):
logits = torch.zeros((2, 512, 42 + self.count, 111), device=device)
else:
logits = torch.ones((2, 231, 70 + self.count, 111), device=device)
loss = torch.tensor([0.5], device=device)
self.count += 1
return DummyModeloutput(loss=loss, logits=logits)
def main(parser: HfArgumentParser) -> None:
args, _ = parser.parse_args_into_dataclasses(return_remaining_strings=True)
def imdb_preprocesser(dataset: Dataset) -> dict:
text = dataset["text"]
label = dataset["label"]
encoded_data = tokenizer(text, return_attention_mask=False)
encoded_data["label"] = label
return encoded_data
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = DummyModel()
imdb_data = load_dataset("imdb")
train_data = imdb_data["train"].train_test_split(0.02)["test"]
valid_data = imdb_data["test"]
train_data = train_data.map(imdb_preprocesser, num_proc=3)
valid_data = valid_data.map(imdb_preprocesser, num_proc=3)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
train_dataset=train_data,
eval_dataset=valid_data,
args=args,
compute_metrics=lambda x: x,
)
trainer.evaluate(eval_dataset=valid_data)
if "__main__" in __name__:
parser = HfArgumentParser([TrainingArguments])
main(parser)
"""
for vscode user
launch.json
{
"name": "Python: infinite_loop",
"type": "python",
"request": "launch",
"module": "torch.distributed.launch",
"console": "integratedTerminal",
"justMyCode": false,
"env": {
"CUDA_VISIBLE_DEVICES": "0, 2",
"WANDB_DISABLED": "true",
"TORCH_CPP_LOG_LEVEL": "DEBUG",
"NCCL_DEBUG": "INFO",
"NCCL_DEBUG_SUBSYS": "COLL",
// "TORCH_DISTRIBUTED_DEBUG": "DETAIL",
},
"args": [
"--standalone",
"--nnodes=1",
"--nproc_per_node=2",
"",
"--output_dir=",
"--do_train=true",
"--do_eval=true",
"--do_eval=true",
"--per_device_train_batch_size=2",
"--learning_rate=1e-5",
"--evaluation_strategy=steps",
"--eval_steps=2",
"--save_strategy=no"
]
},
"""
```
### Expected behavior
---
This issue occurred during the implementation of the streaming model [Transformer-Transducer](https://arxiv.org/abs/2002.02562) as a Hugging Face model.
Before explaining the issue, it is first necessary to know the loss used by this model.
This model uses a loss function called [RNN-T loss](https://pytorch.org/audio/stable/generated/torchaudio.functional.rnnt_loss.html#torchaudio.functional.rnnt_loss) provided by torchaudio.
Unlike CTC loss, RNN-T loss takes a 4-dimensional logits tensor like this:
```
>>> logits.shape
(batch, max seq length, max target length + 1, class)
```
Depending on the input data, the mel sequence length and max target length vary, e.g.:
[cuda:0] output_logits shape: (4, 512, 42, 111)
[cuda:1] output_logits shape: (4, 286, 32, 111)
This model uses log-Mel spectrograms as training data.
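For concreteness, a minimal, self-contained call to torchaudio's RNN-T loss with a logits tensor of this shape could look like the sketch below (the dimensions are made up for illustration and are not from the actual model):
```python
import torch
import torchaudio

batch, max_src_len, max_tgt_len, num_classes = 2, 50, 10, 111

# logits: (batch, max seq length, max target length + 1, class)
logits = torch.randn(batch, max_src_len, max_tgt_len + 1, num_classes)
# targets must not contain the blank index (0 here) and must be int32
targets = torch.randint(1, num_classes, (batch, max_tgt_len), dtype=torch.int32)
logit_lengths = torch.tensor([max_src_len, 35], dtype=torch.int32)
target_lengths = torch.tensor([max_tgt_len, 7], dtype=torch.int32)

loss = torchaudio.functional.rnnt_loss(
    logits, targets, logit_lengths, target_lengths, blank=0, reduction="mean"
)
print(loss)
```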
---
This issue occurs in evaluation_loop when training using single-node DDP in the Trainer.
When evaluating this model, the following issue occurred:
```python
Detected mismatch between collectives on ranks. Rank 1 is running inconsistent collective:
CollectiveFingerPrint(
OpType=ALLGATHER,
TensorShape=[1, 279, 44, 72],
TensorDtypes=Float,
TensorDeviceTypes=TensorOptions(
dtype=float (default),
device=cuda,
layout=Strided (default),
requires_grad=false (default),
pinned_memory=false (default),
memory_format=(nullopt)
)
)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2003, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 212, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 3101, in _nested_gather
tensors = distributed_concat(tensors)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2987, in evaluation_loop
logits = self._nested_gather(logits)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2774, in evaluate
output = eval_loop(
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2052, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 1819, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
return inner_training_loop(
File "[My_folder_path]/transformer-transducer/main.py", line 115, in train
outputs = trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
File "[My_folder_path]/transformer-transducer/main.py", line 96, in main
train(trainer, train_args)
File "[My_folder_path]/transformer-transducer/main.py", line 160, in <module>
main(parser)
```
This is an issue that arises from the [all_gather](https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_gather) feature of DDP.
all_gather collects tensors from all devices belonging to the group.
However, this issue occurs in the process of gathering those tensors:
```python
from transformers.trainer_utils import is_main_process
import torch.distributed as dist
import torch
import os
def main() -> None:
dist.init_process_group("nccl")
rank = dist.get_rank()
device = torch.device(rank)
if is_main_process(rank):
tensor = torch.zeros((2, 100, 100), device=device)
else:
tensor = torch.ones((2, 100, 70), device=device)
output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
dist.all_gather(output_tensors, tensor)
if "__main__" in __name__:
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
os.environ['TORCH_CPP_LOG_LEVEL']="DEBUG"
main()
```
the size of the "output_tensors" is smaller than the size of the "tensors", the same "mismatch between collectives" problem occurs as above.
In above code, "TORCH_DISTRIBUTED_DEBUG" is set to "DETAIL", but if it isn't done, an error will not be printed.
all_gather just returns "output_tensors" to None.
But evaluation_loop
all_gather returns "output_tensor" and then does "torch.concat" with the existing tensor
In particular, in the process of "torch.concat " "output_tensors" in the None state with an existing tensor, i found a problem that does not output errors and takes on infinite loop.
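For reference, one possible workaround (a sketch under the assumption that padding the logits with a constant value is acceptable for the metric computation) is to pad every rank's tensor to a common shape before calling all_gather:
```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def pad_and_all_gather(tensor: torch.Tensor, pad_value: float = -100.0) -> torch.Tensor:
    """Pad `tensor` up to the maximum size on every dimension across ranks, then all_gather."""
    world_size = dist.get_world_size()

    # 1) agree on a common shape across ranks
    local_shape = torch.tensor(tensor.shape, device=tensor.device)
    shapes = [torch.zeros_like(local_shape) for _ in range(world_size)]
    dist.all_gather(shapes, local_shape)
    max_shape = torch.stack(shapes).max(dim=0).values

    # 2) pad the local tensor up to that shape (F.pad pads from the last dim backwards)
    pad = []
    for dim in range(tensor.dim() - 1, -1, -1):
        pad += [0, int(max_shape[dim]) - tensor.shape[dim]]
    padded = F.pad(tensor, pad, value=pad_value)

    # 3) every rank now contributes a tensor of identical shape
    gathered = [torch.empty_like(padded) for _ in range(world_size)]
    dist.all_gather(gathered, padded)
    return torch.cat(gathered, dim=0)
```
(Libraries such as Accelerate provide a similar pad-across-processes utility.)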
---
In fact, I know that Transformer-Transducer is not a model supported by Hugging Face, and this problem occurs because I am using a model that does not fit the Hugging Face Trainer.
But I think it would be cool to add a streaming ASR model such as Transformer-Transducer to Hugging Face, and this is an issue I found during that experiment. So if there is any way or idea to solve this problem, I'd like to know.
| 11-22-2022 02:01:13 | 11-22-2022 02:01:13 | The evaluation loop in the `Trainer` does not support un-padded outputs indeed, as it doesn't occur with any model of the library in our examples. Fixing it would be quite involved so I'd recommend using the `Accelerate` library which provides a method to pad across processes to evaluate such models. |
transformers | 20,365 | closed | Replace assertion with ValueError exceptions in run_image_captioning_flax.py | # What does this PR do?
Replaces 4 `assert` statements with `ValueError` exceptions in run_image_captioning_flax.py.
Co-author: @AdiaWu
Related to [#12789](https://github.com/huggingface/transformers/issues/12789).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 11-22-2022 01:32:25 | 11-22-2022 01:32:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, but the issue was intended for library code, not examples. Let's see if @sanchit-gandhi doesn't mind!<|||||>Thank you for your suggestions @sanchit-gandhi! @sgugger ahh I should have known that. I'll update the library code next time.
[Updated] @sanchit-gandhi After incorporating the suggestions I ran into the failed CI tests below, which I'm not sure how to fix. I assume it has to do with missing some code reformatting. I tried running `make style` on the target folder and received the following output:
<img width="470" alt="Screen Shot 2022-11-22 at 4 05 56 PM" src="https://user-images.githubusercontent.com/54815905/203421123-e0a20022-f62e-4e78-a6dc-903a826cd6cf.png">
I'm also new to open source, but I'll look into this issue in the meantime. Would appreciate any of your inputs! Thanks.<|||||>Hey @katiele47, sorry for the late reply!
Could you check that you've installed Transformers from source? https://huggingface.co/docs/transformers/pr_checks#checks-on-a-pull-request
You can make sure you've installed the dev version of transformers by uninstalling transformers then pip installing from within the transformers repo:
```
pip uninstall transformers
pip install -e .[dev]
```
You should then be able to run `make style` for code quality fixes!<|||||>Hello @katiele47, in my case,
I used
- `pip install black`
- `black [your filepath]`
and this enabled CircleCI to pass the code quality tests.
**Output**

<|||||>Thank you so much for reviewing my change @sanchit-gandhi! |
transformers | 20,364 | closed | Link to Google Colab notebook not working | Hi, it seems that the link to ```A notebook on how to [finetune BART for summarization with fastai using blurr](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb).``` is broken.
@patrickvonplaten | 11-22-2022 00:15:55 | 11-22-2022 00:15:55 | cc @ohmeow who wrote the notebook.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,363 | closed | Bump tensorflow from 2.8.1 to 2.9.3 in /examples/research_projects/decision_transformer | Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.1 to 2.9.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p>
<blockquote>
<h2>TensorFlow 2.9.3</h2>
<h1>Release 2.9.3</h1>
<p>This release introduces several vulnerability fixes:</p>
<ul>
<li>Fixes an overflow in <code>tf.keras.losses.poisson</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887">CVE-2022-41887</a>)</li>
<li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li>
<li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li>
<li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li>
<li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li>
<li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li>
<li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li>
<li>Fixes a segfault in <code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li>
<li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li>
<li>Fixes an overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li>
<li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li>
<li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li>
<li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li>
<li>Fixes a heap OOB in <code>FractionalAvgPool</code> and <code>FractionalMaxPool</code>(<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900">CVE-2022-41900</a>)</li>
<li>Fixes a <code>CHECK_EQ</code> in <code>SparseMatrixNNZ</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901">CVE-2022-41901</a>)</li>
<li>Fixes an OOB write in grappler (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902">CVE-2022-41902</a>)</li>
<li>Fixes a overflow in <code>ResizeNearestNeighborGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907">CVE-2022-41907</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>PyFunc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908">CVE-2022-41908</a>)</li>
<li>Fixes a segfault in <code>CompositeTensorVariantToComponents</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909">CVE-2022-41909</a>)</li>
<li>Fixes a invalid char to bool conversion in printing a tensor (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911">CVE-2022-41911</a>)</li>
<li>Fixes a heap overflow in <code>QuantizeAndDequantizeV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910">CVE-2022-41910</a>)</li>
<li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> via missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>TensorListScatter</code> and <code>TensorListScatterV2</code> in eager mode (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li>
</ul>
<h2>TensorFlow 2.9.2</h2>
<h1>Release 2.9.2</h1>
<p>This releases introduces several vulnerability fixes:</p>
<ul>
<li>Fixes a <code>CHECK</code> failure in tf.reshape caused by overflows (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35934">CVE-2022-35934</a>)</li>
<li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li>
<li>Fixes an OOB read in <code>Gather_nd</code> op in TF Lite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35937">CVE-2022-35937</a>)</li>
<li>Fixes a <code>CHECK</code> failure in <code>TensorListReserve</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35960">CVE-2022-35960</a>)</li>
<li>Fixes an OOB write in <code>Scatter_nd</code> op in TF Lite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35939">CVE-2022-35939</a>)</li>
<li>Fixes an integer overflow in <code>RaggedRangeOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35940">CVE-2022-35940</a>)</li>
<li>Fixes a <code>CHECK</code> failure in <code>AvgPoolOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35941">CVE-2022-35941</a>)</li>
<li>Fixes a <code>CHECK</code> failures in <code>UnbatchGradOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35952">CVE-2022-35952</a>)</li>
<li>Fixes a segfault TFLite converter on per-channel quantized transposed convolutions (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36027">CVE-2022-36027</a>)</li>
<li>Fixes a <code>CHECK</code> failures in <code>AvgPool3DGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35959">CVE-2022-35959</a>)</li>
<li>Fixes a <code>CHECK</code> failures in <code>FractionalAvgPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35963">CVE-2022-35963</a>)</li>
<li>Fixes a segfault in <code>BlockLSTMGradV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35964">CVE-2022-35964</a>)</li>
<li>Fixes a segfault in <code>LowerBound</code> and <code>UpperBound</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35965">CVE-2022-35965</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p>
<blockquote>
<h1>Release 2.9.3</h1>
<p>This release introduces several vulnerability fixes:</p>
<ul>
<li>Fixes an overflow in <code>tf.keras.losses.poisson</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887">CVE-2022-41887</a>)</li>
<li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li>
<li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li>
<li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li>
<li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li>
<li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li>
<li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li>
<li>Fixes a segfault in <code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li>
<li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li>
<li>Fixes an overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li>
<li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li>
<li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li>
<li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li>
<li>Fixes a heap OOB in <code>FractionalAvgPool</code> and <code>FractionalMaxPool</code>(<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900">CVE-2022-41900</a>)</li>
<li>Fixes a <code>CHECK_EQ</code> in <code>SparseMatrixNNZ</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901">CVE-2022-41901</a>)</li>
<li>Fixes an OOB write in grappler (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902">CVE-2022-41902</a>)</li>
<li>Fixes a overflow in <code>ResizeNearestNeighborGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907">CVE-2022-41907</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>PyFunc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908">CVE-2022-41908</a>)</li>
<li>Fixes a segfault in <code>CompositeTensorVariantToComponents</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909">CVE-2022-41909</a>)</li>
<li>Fixes a invalid char to bool conversion in printing a tensor (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911">CVE-2022-41911</a>)</li>
<li>Fixes a heap overflow in <code>QuantizeAndDequantizeV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910">CVE-2022-41910</a>)</li>
<li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> via missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>TensorListScatter</code> and <code>TensorListScatterV2</code> in eager mode (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li>
</ul>
<h1>Release 2.8.4</h1>
<p>This release introduces several vulnerability fixes:</p>
<ul>
<li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li>
<li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li>
<li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li>
<li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li>
<li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li>
<li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li>
<li>Fixes a segfault in <code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li>
<li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li>
<li>Fixes an overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li>
<li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li>
<li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li>
<li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li>
<li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/commit/a5ed5f39b675a1c6f315e0caf3ad4b38478fa571"><code>a5ed5f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58584">#58584</a> from tensorflow/vinila21-patch-2</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/258f9a1251346d93e129c53f82d21732df6067f5"><code>258f9a1</code></a> Update py_func.cc</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/cd27cfb438b78a019ff8a215a9d6c58d10c062c3"><code>cd27cfb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58580">#58580</a> from tensorflow-jenkins/version-numbers-2.9.3-24474</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/3e75385ee6c9ef8f06d6848244e1421c603dd4a1"><code>3e75385</code></a> Update version numbers to 2.9.3</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/bc72c39774b0a0cb38ed03e5ee09fa78103ed749"><code>bc72c39</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58482">#58482</a> from tensorflow-jenkins/relnotes-2.9.3-25695</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/3506c90f5ac0f471a6b1d60d4055b14ca3da170b"><code>3506c90</code></a> Update RELEASE.md</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/8dcb48e384cd3914458f3c494f1da878ae8dc6d5"><code>8dcb48e</code></a> Update RELEASE.md</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/4f34ec84994e63cf47c1d13748a404edd3d5a0d3"><code>4f34ec8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58576">#58576</a> from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/6fc67e408f239384d26acabc34d287911af92dc8"><code>6fc67e4</code></a> Replace CHECK with returning an InternalError on failing to create python tuple</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/5dbe90ad21068007cbc31a56e8ed514ec27e0b26"><code>5dbe90a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58570">#58570</a> from tensorflow/r2.9-7b174a0f2e4</li>
<li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.1...v2.9.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 11-21-2022 23:48:41 | 11-21-2022 23:48:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it. |
transformers | 20,362 | open | Add FlexiBERT | ### Model description
FlexiBERT is a suite of *flexible* and *heterogeneous* models. The design space was proposed in this [paper](https://arxiv.org/abs/2205.11656) accepted for publication in the Journal of Artificial Intelligence Research.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The weights are available [here](https://huggingface.co/shikhartuli/flexibert-mini). | 11-21-2022 21:08:10 | 11-21-2022 21:08:10 | The PR was closed by bot. Please reopen the PR and let me know if anything needs to be done to merge the PR. The PR would add the FlexiBERT suite of models to 🤗 Transformers. |
transformers | 20,361 | closed | error loading facebook/opt-30b with text generation pipeline using 8bit mixed precision | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0a0+d0d6b1f (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten, @Narsil, @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the following on a system with one (NVIDIA A5000) GPU:
```
from transformers import pipeline
model = "facebook/opt-30b"
model_kwargs = {"device_map": "auto", "load_in_8bit": True}
generator = pipeline(task="text-generation", model=model, device=0, model_kwargs=model_kwargs)
```
yields error:
`ValueError: Could not load model facebook/opt-30b with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.opt.modeling_opt.OPTForCausalLM'>).`
### Expected behavior
Should be able to create generator with no problem and generate text with `generator.__call__`.
The code works with no error when using smaller opt model checkpoints: "facebook/opt-2.7b", "facebook/opt-6.7b".
Can create model, tokenizer, and generate without pipeline using `AutoModelForCausalLM.from_pretrained(model, device_map="auto")` with `model="facebook/opt-30b"` despite the error message. | 11-21-2022 20:38:09 | 11-21-2022 20:38:09 | If you can create your tokenizer and model, just send them to the pipeline function as a quick workaround :-)<|||||>@sgugger This actually leads to another error:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = "facebook/opt-30b"
model_kwargs = {"device_map": "auto", "load_in_8bit": True}
m = AutoModelForCausalLM.from_pretrained(model, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)
generator = pipeline(task="text-generation", model=m, tokenizer=tokenizer, device=0, model_kwargs=model_kwargs)
```
Yields `NotImplementedError`:
```
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:980, in Module.to.<locals>.convert(t)
977 if convert_to_format is not None and t.dim() in (4, 5):
978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
979 non_blocking, memory_format=convert_to_format)
--> 980 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
```<|||||>Please provide the full traceback, as we can't see what's happening otherwise especially since I can't reproduce locally on my side. cc @younesbelkada who might have better luck reproducing the bug!<|||||>hi @morrisalp
Thanks for the heads up and for flagging the issue!
Can you please try:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = "facebook/opt-30b"
model_kwargs = {"device_map": "auto", "load_in_8bit": True}
m = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)
generator = pipeline(task="text-generation", model=m, tokenizer=tokenizer)
```
No need to add `model_kwargs` and `device=0` in addition to what you have added ;) this should work! let us know here<|||||>Full traceback for first error (using pipeline only):
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [7], line 1
----> 1 generator = pipeline(task="text-generation", model=model, device=0, model_kwargs=model_kwargs)
File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py:727, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
723 # Infer the framework from the model
724 # Forced if framework already defined, inferred if it's None
725 # Will load the correct model if possible
726 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 727 framework, model = infer_framework_load_model(
728 model,
729 model_classes=model_classes,
730 config=config,
731 framework=framework,
732 task=task,
733 **hub_kwargs,
734 **model_kwargs,
735 )
737 model_config = model.config
738 hub_kwargs["_commit_hash"] = model.config._commit_hash
File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:266, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
263 continue
265 if isinstance(model, str):
--> 266 raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
268 framework = "tf" if model.__class__.__name__.startswith("TF") else "pt"
269 return framework, model
ValueError: Could not load model facebook/opt-30b with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.opt.modeling_opt.OPTForCausalLM'>).
```
<|||||>Full traceback for second error (creating tokenizer and model & passing them to pipeline):
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
File <timed exec>:1
File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py:873, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
870 if device is not None:
871 kwargs["device"] = device
--> 873 return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/text_generation.py:49, in TextGenerationPipeline.__init__(self, *args, **kwargs)
48 def __init__(self, *args, **kwargs):
---> 49 super().__init__(*args, **kwargs)
50 self.check_model_type(
51 TF_MODEL_FOR_CAUSAL_LM_MAPPING if self.framework == "tf" else MODEL_FOR_CAUSAL_LM_MAPPING
52 )
53 if "prefix" not in self._preprocess_params:
54 # This is very specific. The logic is quite complex and needs to be done
55 # as a "default".
56 # It also defines both some preprocess_kwargs and generate_kwargs
57 # which is why we cannot put them in their respective methods.
File /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:778, in Pipeline.__init__(self, model, tokenizer, feature_extractor, modelcard, framework, task, args_parser, device, binary_output, **kwargs)
776 # Special handling
777 if self.framework == "pt" and self.device.type != "cpu":
--> 778 self.model = self.model.to(self.device)
780 # Update config with task specific parameters
781 task_specific_params = self.model.config.task_specific_params
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:982, in Module.to(self, *args, **kwargs)
978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
979 non_blocking, memory_format=convert_to_format)
980 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
--> 982 return self._apply(convert)
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)
633 def _apply(self, fn):
634 for module in self.children():
--> 635 module._apply(fn)
637 def compute_should_use_set_data(tensor, tensor_applied):
638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
639 # If the new tensor has compatible tensor type as the existing tensor,
640 # the current behavior is to change the tensor in-place using `.data =`,
(...)
645 # global flag to let the user control whether they want the future
646 # behavior of overwriting the existing tensor or not.
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)
633 def _apply(self, fn):
634 for module in self.children():
--> 635 module._apply(fn)
637 def compute_should_use_set_data(tensor, tensor_applied):
638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
639 # If the new tensor has compatible tensor type as the existing tensor,
640 # the current behavior is to change the tensor in-place using `.data =`,
(...)
645 # global flag to let the user control whether they want the future
646 # behavior of overwriting the existing tensor or not.
[... skipping similar frames: Module._apply at line 635 (3 times)]
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)
633 def _apply(self, fn):
634 for module in self.children():
--> 635 module._apply(fn)
637 def compute_should_use_set_data(tensor, tensor_applied):
638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
639 # If the new tensor has compatible tensor type as the existing tensor,
640 # the current behavior is to change the tensor in-place using `.data =`,
(...)
645 # global flag to let the user control whether they want the future
646 # behavior of overwriting the existing tensor or not.
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:658, in Module._apply(self, fn)
654 # Tensors stored in modules are graph leaves, and we don't want to
655 # track autograd history of `param_applied`, so we have to use
656 # `with torch.no_grad():`
657 with torch.no_grad():
--> 658 param_applied = fn(param)
659 should_use_set_data = compute_should_use_set_data(param, param_applied)
660 if should_use_set_data:
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:980, in Module.to.<locals>.convert(t)
977 if convert_to_format is not None and t.dim() in (4, 5):
978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
979 non_blocking, memory_format=convert_to_format)
--> 980 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
```<|||||>@younesbelkada That code gives me the following error, but if this is a GPU OOM error then that is progress :)
```
m = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [5], line 1
----> 1 m = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)
File /opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 elif type(config) in cls._model_mapping.keys():
462 model_class = _get_model_class(config, cls._model_mapping)
--> 463 return model_class.from_pretrained(
464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
465 )
466 raise ValueError(
467 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
468 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
469 )
File /opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py:2280, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2276 device_map_without_lm_head = {
2277 key: device_map[key] for key in device_map.keys() if key not in modules_to_not_convert
2278 }
2279 if "cpu" in device_map_without_lm_head.values() or "disk" in device_map_without_lm_head.values():
-> 2280 raise ValueError(
2281 """
2282 Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
2283 the quantized model. If you have set a value for `max_memory` you should increase that. To have
2284 an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.
2285 """
2286 )
2287 del device_map_without_lm_head
2289 if from_tf:
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you have set a value for `max_memory` you should increase that. To have
an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.
```<|||||>Nice! Yes this error is indeed due to the fact that your int8 model does not fit your available GPU memory. Could you share here what is the hardware you are using (with the avail GPU RAM)? thanks!
You can do something like `nvidia-smi` and post the output here<|||||>Sure, here are the specs (with nothing running currently):
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.103.01 Driver Version: 470.103.01 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A5000 On | 00000000:C3:00.0 Off | Off |
| 30% 21C P5 29W / 230W | 2MiB / 24256MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```<|||||>I see, the 30B parameters model needs at least around `30GB` to be loaded in 8-bit. So you'll need more GPU RAM to fit the model here sadly.
However, if you absolutely want to run it, there is a hacky solution. You can do:
`pip install --upgrade git+https://github.com/younesbelkada/transformers@bnb_add_custom_map`
and run:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = "facebook/opt-30b"
device_map = {
"model.decoder.embed_tokens": 0,
"model.decoder.embed_positions": 0,
"model.decoder.final_layer_norm": 0,
"lm_head": 0,
"model.decoder.layers.0": 0,
"model.decoder.layers.1": 0,
"model.decoder.layers.2": 0,
"model.decoder.layers.3": 0,
"model.decoder.layers.4": 0,
"model.decoder.layers.5": 0,
"model.decoder.layers.6": 0,
"model.decoder.layers.7": 0,
"model.decoder.layers.8": 0,
"model.decoder.layers.9": 0,
"model.decoder.layers.10": 0,
"model.decoder.layers.11": 0,
"model.decoder.layers.12": 0,
"model.decoder.layers.13": 0,
"model.decoder.layers.14": 0,
"model.decoder.layers.15": 0,
"model.decoder.layers.16": 0,
"model.decoder.layers.17": 0,
"model.decoder.layers.18": 0,
"model.decoder.layers.19": 0,
"model.decoder.layers.20": 0,
"model.decoder.layers.21": 0,
"model.decoder.layers.22": 0,
"model.decoder.layers.23": 0,
"model.decoder.layers.24": 0,
"model.decoder.layers.25": 0,
"model.decoder.layers.26": 0,
"model.decoder.layers.27": 0,
"model.decoder.layers.28": 0,
"model.decoder.layers.29": 0,
"model.decoder.layers.30": 0,
"model.decoder.layers.31": 0,
"model.decoder.layers.32": 0,
"model.decoder.layers.33": 0,
"model.decoder.layers.34": 0,
"model.decoder.layers.35": 0,
"model.decoder.layers.36": 0,
"model.decoder.layers.37": 0,
"model.decoder.layers.38": 0,
"model.decoder.layers.39": 0,
"model.decoder.layers.40": 0,
"model.decoder.layers.41": 0,
"model.decoder.layers.42": "cpu",
"model.decoder.layers.43": "cpu",
"model.decoder.layers.44": "cpu",
"model.decoder.layers.45": "cpu",
"model.decoder.layers.46": "cpu",
"model.decoder.layers.47": "cpu",
}
model_kwargs = {"device_map": device_map, "load_in_8bit": True}
m = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)
generator = pipeline(task="text-generation", model=m, tokenizer=tokenizer)
```
But keep in mind that the layers that will be set on `cpu` will be kept in their native `dtype` and not converted to `int8`. Also, this feature is not supported yet as the integration should be done on the `bitsandbytes` side, so you may encounter unexpected behaviours, but you can always give it a try!
Related: #19090<|||||>Thanks! I mainly wanted to see what the largest LLM I could fit on one of my GPUs would be using mixed precision, and I couldn't tell previously if the 30B model would be OOM due to the other errors...<|||||>I see now thanks a lot!
Closing this issue as I consider it to be completed. Don't hesitate to reopen it if you have more questions!
transformers | 20,360 | closed | Fix toctree for Section 3 in Spanish Documentation | # What does this PR do?
Fixes #20359
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-21-2022 20:32:54 | 11-21-2022 20:32:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,359 | closed | Missing sections in Spanish documentation | **Expected output**: Nested sections with proper documents in Spanish translation.
**Current output**: Some sections are missing (but docs are present), so documents are not correctly organized
-----
After contributing to #15947, I noticed `_toctree.yml` for the Spanish translation is not following the proper order as the original documentation.
For example, there is no `General Usage` section in the Spanish version, but the `Create a custom architecture` ([create_a_model](https://huggingface.co/docs/transformers/v4.24.0/es/create_a_model)) document is present.
Contributions to #15947 missed adding the right sections when updating `_toctree.yml`.
transformers | 20,358 | closed | Integrate Timm models as vision encoders in Vision encoder decoder models | ### Feature request
Hello,
I would like to use timm models as vision encoders in a vision encoder-decoder model. How can I do so?
### Motivation
Timm models are very powerful and support a wide range of backbones and flexible image preprocessing, which would allow us to build better document AI models.
### Your contribution
I would be down to help out with a PR.
I don't think that makes sense as for the Vision encoder-decoder framework, one needs a Transformer-based encoder or at least an encoder that outputs a sequence of hidden states, which can be used for cross-attention with the Transformer-based language decoder. Given that most backbones in timm are convolution-based, which output 3D feature maps, one would first need to flatten/project them into a sequence of hidden states.
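To make that concrete, here is a minimal sketch of the kind of flattening/projection that would be needed (illustrative only, with made-up shapes):
```python
import torch
import torch.nn as nn

# hypothetical CNN backbone output: (batch, channels, height, width)
feature_map = torch.randn(1, 2048, 7, 7)

# flatten the spatial dims and move channels last -> (batch, seq_len, channels)
hidden_states = feature_map.flatten(2).transpose(1, 2)  # (1, 49, 2048)

# project to the decoder's hidden size so the cross-attention shapes line up
projection = nn.Linear(2048, 768)
hidden_states = projection(hidden_states)  # (1, 49, 768)
```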
The VisionEncoderDecoderModel [framework](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) currently supports ViT, DeiT, BEiT and Swin Transformer, which is already a fair amount of models.<|||||>I was more looking to use Swin with non-square image sizes and the Timm implementation allows us to do that and I would like to implement SwinV2 with bert<|||||>> I was more looking to use Swin with non-square image sizes
Our implementation also allows that, did you try it?<|||||>I havent tried it can you share an example?
<|||||>```
from transformers import AutoModelForImageClassification
import torch
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
pixel_values = torch.randn(1, 3, 244, 522)
outputs = model(pixel_values)
``` |
transformers | 20,357 | closed | BART-large + JAX produce nan loss during training/eval | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.1 (gpu)
- Jax version: 0.3.24
- JaxLib version: 0.3.24
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- CUDA: NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I first experimented with a customized dataset with facebook/bart-large. The model trained well under torch+fp16. For speed purposes, I was trying to use jax using the provided example, but experienced nan loss in the evaluation loop, and sometimes in the training loop as well. There is no issue with the facebook/bart-base model.
After that, I switched to the original example provided in the main repo and trained on a public dataset. Yet no luck. Here is the process of reproduction:
1. Install transformer under development mode with `pip install -e`.
2. going into example folder `cd transformers/examples/flax/summarization`.
3. run training script using following command:
```
python run_summarization_flax.py \
--output_dir ~/data/test-cnn_dailymail \
--model_name_or_path facebook/bart-large \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--dataset_name cnn_dailymail \
--dataset_config_name 3.0.0 \
--do_train \
--do_eval \
--max_train_samples 10000 \
--max_eval_samples 100 \
--max_source_length 512 \
--max_target_length 200
```
4. adding fp16 with `--dtype float16` resulting in the same issue
see log below:
```
INFO:__main__:***** Running training *****
INFO:__main__: Num examples = 10000
INFO:__main__: Num Epochs = 3
INFO:__main__: Instantaneous batch size per device = 2
INFO:__main__: Total train batch size (w. parallel & distributed) = 2
INFO:__main__: Total optimization steps = 15000
Epoch... (1/3 | Loss: nan, Learning Rate: 3.3336666092509404e-05)
Epoch... (1/3 | Eval Loss: nan | )
```
I also noticed that the torch model is saved in float16. I've tried to convert the model to float32 and load with `from_pt=True`, but I receive the same nan loss problem. I am not sure if it is related to this issue or not.
```
You should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this.
```
I will keep digging on this and share more information. Please kindly let me know if there are any recommended steps to debug this. :)
### Expected behavior
bart-large can be trained and evaluated with normal loss with JAX. | 11-21-2022 19:37:25 | 11-21-2022 19:37:25 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @sanchit-gandhi <|||||>Hey @fen-deepscribe! Sorry for the late reply here!
Looks like there could be two issues at play:
1. The BART-large weights are indeed stored in fp16 on the HF Hub. The Flax `.from_pretrained` method respects the dtype of the stored params (no upcast/downcast operations), so when we load from the checkpoint at [facebook/bart-large](https://huggingface.co/facebook/bart-large), we load the weights in fp16. You can read more about this here: https://github.com/huggingface/transformers/issues/16736. Loading the weights in fp16 precision might be causing undesirable behaviour during training: Flax doesn't expect a dtype of fp16 (only fp32 or bf16). This could be messing up the dtypes of the activations, giving exploding grads and updates. If you want to load the weights in fp32, you can use the checkpoint at [patrickvonplaten/bart-large-fp32](https://huggingface.co/patrickvonplaten/bart-large-fp32). Alternatively, you can upcast the params after loading (see the short sketch after this list).
2. BART-large is known to have numerical instabilities during fine-tuning (see https://github.com/huggingface/transformers/issues/15559 and https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564 and https://github.com/huggingface/transformers/issues/15559#issuecomment-1294457798). If you're fine-tuning the model for summarisation, you can try loading the checkpoint [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) -> this checkpoint is stable and should be less prone to exploding gradients! I would give this checkpoint a go with your fine-tuning experiments. It tends to be a much easier fix than those linked in the aforementioned thread!
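Relating to point 1, a minimal upcasting sketch (untested on your setup, just to illustrate the idea):
```python
from transformers import FlaxBartForConditionalGeneration

# the hub weights are stored in fp16, so upcast the params to fp32 before fine-tuning
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.params = model.to_fp32(model.params)
```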
Let me know how you get on! More than happy to dig into this further if the exploding loss persists!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Well, I have the same problem. After a period of training, BART's generated output starts to get really weird. No matter what the model input is, BART produces the same text, whether I use FP16 or not. <|||||>Have you tried using the checkpoint [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn)?
transformers | 20,356 | closed | Add FlexiBERT | # What does this PR do?
Implements the FlexiBERT suite of 3.32 billion transformer architectures (and also FlexiBERT 2.0 design space with 1.7 $\times$ 10<sup>88</sup> architectures). The design space supports *flexible* and *heterogeneous* transformer models with diverse attention types. The [paper](https://arxiv.org/abs/2205.11656) has been accepted for publication at the Journal of Artificial Intelligence Research.
Fixes #20362
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @sgugger
## Tasks Completed
From those provided in the [add new model](https://huggingface.co/docs/transformers/add_new_model) contribute section
- [x] (Optional) Understood the model’s theoretical aspects
- [x] Prepared 🤗 Transformers dev environment
- [x] Set up debugging environment of the original repository
- [x] Created script that successfully runs the forward() pass using the original repository and checkpoint (Available in Demo Colab)
- [x] Successfully added the model skeleton to 🤗 Transformers
- [x] Successfully converted original checkpoint to 🤗 Transformers checkpoint
- [x] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint
- [x] Finished model tests in 🤗 Transformers
- [x] Successfully added tokenizer in 🤗 Transformers
- [x] Run end-to-end integration tests
- [x] Finished docs
- [x] Uploaded model weights to the Hub (at this [link](https://huggingface.co/shikhartuli/flexibert-mini), shall add more models soon)
- [x] Submitted the pull request
- [ ] (Optional) Added a demo notebook
Thanks so much! | 11-21-2022 18:26:13 | 11-21-2022 18:26:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The PR was closed by bot. Please reopen the PR and let me know if anything needs to be done to merge the PR. |
transformers | 20,355 | closed | Add missing tokenizer tests - RemBert | # What does this PR do?
Fixes: #16627
Added tokenizer tests for RemBERT, using `test_tokenization_camembert.py` as a reference.
@SaulLu @LysandreJik | 11-21-2022 18:01:18 | 11-21-2022 18:01:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@IMvision12 It looks like the added tests do not pass. You can try locally with
```
pytest tests/models/rembert/test_tokenization_rembert.py
```<|||||>While using sentencepiece.model :
`SAMPLE_VOCAB = get_tests_dir("fixtures/test_sentencepiece.model")`
these tests are failing :
- FAILED tests/models/rembert/test_tokenization_rembert.py::RemBertTokenizationTest::test_convert_token_and_id - AssertionError: '<unk>' != '[PAD]'
- FAILED tests/models/rembert/test_tokenization_rembert.py::RemBertTokenizationTest::test_get_vocab - AssertionError: '<unk>' != '[PAD]' |
transformers | 20,354 | closed | Generate: shorter XLA contrastive search tests | # What does this PR do?
Makes XLA contrastive search tests shorter (in terms of tokens), to avoid flaky tests.
This is due to our recently failing CI for some models. The XLA code path passes the test with XLA compilation off -- i.e. the XLA code path returns the same as the non-XLA code path. However, with XLA compilation on, there is a chance of obtaining different results. I couldn't pinpoint the issue, but there is a possible explanation.
This may be due to numerical stability issues: contrastive search takes the maximum of a cosine distance between two hidden states [built from randomly initialized weights] as a penalty to the logits, which combined with the logits' low values [because the test model was untrained] could explain the mismatch.
👉 In any case, I was already planning on reinforcing contrastive search XLA testing with real examples on key models, like T5 and OPT.
| 11-21-2022 17:16:05 | 11-21-2022 17:16:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>And as always, if a one-line short comment in the code could keep your explanation more visible for other (HF) developers, don't hesitate. |
transformers | 20,353 | closed | Generate: `model_kwargs` can also be an input to `prepare_inputs_for_generation` | # What does this PR do?
Fixes #20347
`model_kwargs` can also be a model input in `prepare_inputs_for_generation` -- in some models it is `kwargs`, in others `model_kwargs`.
This PR updates the input validation function to reflect that. | 11-21-2022 15:21:46 | 11-21-2022 15:21:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger added a minor test.
I'd rather spend the energy removing the `**kwargs` and `**model_kwargs`, which are only used as a lazy pattern (as opposed to being forwarded to the model or similar). It would allow for much stricter checking :) |
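For reference, a rough sketch of the relaxed condition described above, written as a hypothetical standalone helper (the actual change lives inside `_validate_model_kwargs` and may differ in detail):
```python
import inspect

def accepts_generic_kwargs(prepare_inputs_for_generation):
    # True if the method exposes a `kwargs` or `model_kwargs` catch-all
    params = set(inspect.signature(prepare_inputs_for_generation).parameters)
    return "kwargs" in params or "model_kwargs" in params

def example(input_ids, past=None, **model_kwargs):
    ...

print(accepts_generic_kwargs(example))  # True
```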
transformers | 20,352 | closed | Fix nightly runs | # What does this PR do?
The pipeline that runs the nightly tests exits with `Unexpected argument(s): nightly` instead of running all tests. I think we just need to add it to the custom config as an (unused) argument so CircleCI doesn't complain anymore.
Manually triggered the tests on this branch with `nightly` at `true` and it ran all tests (see workflow nightly in the tests). | 11-21-2022 15:10:36 | 11-21-2022 15:10:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,351 | open | [VideoMAE] TensorFlow implementation | Closes #18641
## TODO
- [x] modeling_tf_videomae.py
- [x] integration tests
- [x] rest of the tests
- [x] documentation | 11-21-2022 12:42:05 | 11-21-2022 12:42:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20351). All of your documentation changes will be reflected on that endpoint.<|||||>I am hitting an issue and thought of asking for help.
The issue stems from the port of `VideoMAESelfAttention` i.e., `TFVideoMAESelfAttention`. Upon analyzing deeper, I think it's because of the mismatch between `nn.functional.linear` and my custom `linear_transformation()`. This is my investigative [notebook](https://colab.research.google.com/gist/sayakpaul/d50d013f59674b943ef7e2b6ed9d2f91/scratchpad.ipynb) where I have tried debugging the issue but still no luck so far. The assertion errors seem to be within low tolerance but can easily sum up.
Cc: @amyeroberts @gante @Rocketknight1 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Added WIP label to prevent PR from being closed |
transformers | 20,350 | closed | Add missing tokenizer tests - RemBert | # What does this PR do?
Added tokenizer tests for Rembert. I took reference from `test_tokenization_camembert.py`
@SaulLu @LysandreJik | 11-21-2022 11:57:10 | 11-21-2022 11:57:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for adding this! Can you first rebase your branch on main? TensorFlow new release broke a lot of things so tests won't pass unless you do this :-)<|||||>Looks like something went wrong and GitHub adds a lot of diff. If force-pushing doesn't solve the issue, you might need to close this PR and open a fresh one. |
transformers | 20,349 | closed | [Don't merge] debug nightly CI | [Don't merge] debug nightly CI | 11-21-2022 11:38:21 | 11-21-2022 11:38:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,348 | closed | Ability to fine-tune whisper large on a GPU with 24 gb of ram | ### Feature request
I've been trying to fine-tune Whisper large on a GPU with 24 GB of RAM (both single GPU and multi GPU) and I run out of memory while training (with batch size set to 1 and the max length of audio set to 2.5 seconds).
I made this a feature request not a bug report since I don't believe there is a problem with the code.
## Training script
<details>
<summary> Training code </summary>
```python
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-large", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice = common_voice.filter(lambda example: len(example["audio"]["array"]) < 2.5 * 16000, load_from_cache_file=False)
print(common_voice["train"][0])
def prepare_dataset(batch):
    # load and resample audio data from 48 to 16kHz
    audio = batch["audio"]

    # compute log-Mel input features from input audio array
    batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]

    # encode target text to label ids
    batch["labels"] = tokenizer(batch["sentence"]).input_ids
    return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need different padding methods
        # first treat the audio inputs by simply returning torch tensors
        input_features = [{"input_features": feature["input_features"]} for feature in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        # get the tokenized label sequences
        label_features = [{"input_ids": feature["labels"]} for feature in features]
        # pad the labels to max length
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        # if bos token is appended in previous tokenization step,
        # cut bos token here as it's appended later anyways
        if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels
        return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids

    # replace -100 with the pad_token_id
    label_ids[label_ids == -100] = tokenizer.pad_token_id

    # we do not want to group tokens when computing the metrics
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)

    wer = 100 * metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-large-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=1,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=1,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=1,
predict_with_generate=True,
generation_max_length=225,
save_steps=5, # set to < max_steps
eval_steps=5, # set to < max_steps
logging_steps=1, # set to < max_steps
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
processor.save_pretrained(training_args.output_dir)
trainer.train()
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
"model_name": "whisper-large-sv-test2", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-large",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
```
</details>
Example of error
<img width="1131" alt="Screenshot 2022-11-21 at 12 32 36" src="https://user-images.githubusercontent.com/1704131/203040642-9b7dabb8-e76c-4786-bd4b-48e706a70563.png">
### Motivation
It would be great if it would be able to fine-tune the large model on a 24gb GPU since that would make it much more easy to train the larger mode..
### Your contribution
I would love to help out with this issue. | 11-21-2022 11:37:06 | 11-21-2022 11:37:06 | Hey @BirgerMoell - thanks for opening this feature request and for your interest in the Whisper model 🗣🇸🇪 I've made the code in your original post a drop-down for ease of reading.
The examples script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) has recently been updated to handle Whisper (https://github.com/huggingface/transformers/pull/19519), so you can use this as an end-to-end script for training your system! All you have to do is modify the example training config given in the README for your language of choice: [examples/pytorch/speech-recognition#whisper-model](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#whisper-model)
And then execute the command! The rest will be taken care for you 🤗
A couple of things:
* They're not joking when they say 'large' for the [large checkpoint](https://huggingface.co/openai/whisper-large)! The model is 1.6 billion parameters, which is extremely big! Have you tried using the [medium checkpoint](https://huggingface.co/openai/whisper-medium)? It's about half the size, but gets comparable results to the large checkpoint under zero-shot conditions. It'll most likely surpass the large zero-shot results with fine-tuning. I've managed to train the medium checkpoint on a V100 16GB with a batch size of 32 (`per_device_batch_size=2` and `gradient_accumulation_steps=16`). There are some things we can try to make the model / training more memory efficient if you want to use the medium or large checkpoints! (see below)
* The audio samples are **padded / truncated to 30s** before getting the log-Mel features. So setting the max length of audio samples to 2.5s will mean the **audio samples are padded to 30s**, and then the log-Mel features calculated. So the memory usage will be the same as using a max length of 30s! I explain this briefly in the blog: https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor
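A quick shape check illustrating the second point (shapes shown as expected, not re-run here):
```python
import numpy as np
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

# 2.5 s of (dummy) audio at 16 kHz still yields the full 30 s worth of log-Mel frames
audio = np.zeros(int(2.5 * 16000))
features = feature_extractor(audio, sampling_rate=16000).input_features[0]
print(features.shape)  # (80, 3000), i.e. padded to 30 s before feature extraction
```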
Now, assuming that you do want to train a bigger model than the 'small' checkpoint, you can either try the training script with the medium checkpoint and a `per_device_batch_size` of 2 or 4, **or** you can try using the large checkpoint with some memory hacks:
1. The Adam optimiser keeps two states (the first and second moment estimates) for every model parameter. So the memory requirement of the optimiser is two times that of the model (a rough memory breakdown follows this list)! You can switch to using an **8bit version** of the Adam optimiser from [bitsandbytes](https://github.com/TimDettmers/bitsandbytes). This will save you a lot of memory. You need to pip install bitsandbytes:
```
pip install bitsandbytes
```
and then set `optim="adamw_bnb_8bit"` when you instantiate the `Seq2SeqTrainingArguments`:
```python
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-large-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=1,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=1,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=1,
predict_with_generate=True,
generation_max_length=225,
save_steps=5, # set to < max_steps
eval_steps=5, # set to < max_steps
logging_steps=1, # set to < max_steps
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
optim="adamw_bnb_8bit", # set the optimiser!
)
```
Check out the docs for more details: (https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.optim)
2. You can use a different optimiser all together. Adam requires two optimiser params per one model param, but Adafactor uses only one. This time, set `optim="adafactor"`. This is untested for fine-tuning Whisper, so I'm not sure how Adafactor performance compares to Adam.
Neither 1 or 2 are tested, so I can't guarantee that they'll work, but they're easy approaches to try! One line code changes for each. I'd try 1 first then 2, as there shouldn't be a performance degradation trying 1, but there might be with 2.
I'll reiterate again that the medium checkpoint is a good option for a device < 80GB memory!<|||||>Thank you so much for taking the time to write explain this. I will definitely try it out. I will also try out training on the medium model size. <|||||>1. Using adamw_bnb_8bit I ran out of memory.
2. I managed to get it to work with adafactor. I just did a test so I'm not sure how it affected the performance but I can try running it longer to see what happens. The eval_wer was 30.935251798561154 after just 5 epochs. Thanks for the help!
<|||||>Here is the trained model. I haven't evaluated it but the WER is a Wer: 30.9353 which is not so good considering the model size.
https://huggingface.co/birgermoell/whisper-large-sv<|||||>Hey @BirgerMoell - glad to see it worked! I would deffo give the medium model a run as well, has been quite performant in my experiments to date!
For the large model, it looks like you trained for only 0.08 epochs / 5 training steps:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.5521 | 0.04 | 5 | 3.5048 | 48.2014 |
| 1.8009 | 0.08 | 10 | 1.5259 | 30.9353 |
I would definitely train for at least 2k training steps to get a reasonable WER. You can update the `Seq2SeqTrainingArguments` accordingly:
```python
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-large-sv-test2",
per_device_train_batch_size=1,
gradient_accumulation_steps=1,
learning_rate=1e-5,
warmup_steps=1,
max_steps=2000, # set max steps to > 2k
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=1,
predict_with_generate=True,
generation_max_length=225,
save_steps=500,
eval_steps=500,
logging_steps=50,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
optim="adafactor",
)
```
I would also **strongly** recommend using `gradient_accumulation_steps` to increase your effective batch size - a batch-size of 1 will likely give you noisy gradient updates. If `per_device_train_batch_size=1` is the biggest you can fit, you can try `gradient_accumulation_steps=16` or even `gradient_accumulation_steps=32`.
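For reference, the effective batch size works out as:
```python
# effective batch size = per-device batch size x gradient accumulation steps x number of devices
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
num_devices = 1
print(per_device_train_batch_size * gradient_accumulation_steps * num_devices)  # 16
```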
I'm confident you'll get good results training for longer and with a bigger batch size!<|||||>Hi @sanchit-gandhi,
instead of putting the Adam optimizer in the 8-bit version (your proposal 1), why not load Whisper itself in the 8-bit version?
I did try with the following code but it did not work. Do you know why?
```
#!pip install accelerate
#!pip install bitsandbytes
#!pip install git+https://github.com/huggingface/transformers.git
from transformers import WhisperForConditionalGeneration
model_name = "openai/whisper-medium"
model = WhisperForConditionalGeneration.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
Error message:
```
Downloading: 100%
1.97k/1.97k [00:00<00:00, 56.5kB/s]
Downloading: 100%
3.06G/3.06G [00:49<00:00, 76.6MB/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-6-58c82c91d282>](https://localhost:8080/#) in <module>
1 from transformers import WhisperForConditionalGeneration
2
----> 3 model = WhisperForConditionalGeneration.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
[/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2404 # Dispatch model with hooks on all devices if necessary
2405 if device_map is not None:
-> 2406 dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)
2407
2408 if output_loading_info:
TypeError: dispatch_model() got an unexpected keyword argument 'offload_index'
```<|||||>cc @younesbelkada the 8bit master
In general though, the 8bit model will be slower. Hence the suggestion for changing the optimiser first.<|||||>Can you try to install `accelerate` from the master branch? `pip install git+https://github.com/huggingface/accelerate.git@main` this should fix your issue and you'll be able to run whisper in 8bit<|||||>Hi @younesbelkada,
Thanks for your answer, but I still get an error. See the code below and the error message:
```
#!pip install git+https://github.com/huggingface/accelerate.git@main
#!pip install bitsandbytes
#!pip install git+https://github.com/huggingface/transformers.git
from transformers import WhisperForConditionalGeneration
model_name = "openai/whisper-medium"
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained(model_name, device_map="auto", load_in_8bit=True, use_cache = False)
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-medium-hi", # change to a repo name of your choice
per_device_train_batch_size=4,
gradient_accumulation_steps=8, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=4000,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
optim="adamw_bnb_8bit", # set the optimiser
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
```
Error message:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-69786f5d74d5> in <module>
1 from transformers import Seq2SeqTrainer
2
----> 3 trainer = Seq2SeqTrainer(
4 args=training_args,
5 model=model,
2 frames
/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in to(self, *args, **kwargs)
1675 # Checks if the model has been loaded in 8-bit
1676 if getattr(self, "is_loaded_in_8bit", False):
-> 1677 raise ValueError(
1678 "`.to` is not supported for `8-bit` models. Please use the model as it is, since the"
1679 " model has already been set to the correct devices and casted to the correct `dtype`."
ValueError: `.to` is not supported for `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```<|||||>Hi @piegu
Thanks for your message - the error message is a bit misleading.
Actually it is not possible to pass an 8-bit model to a Trainer, please see the PR above this message :/ <|||||>cc @Vaibhavs10 <|||||>@younesbelkada DOes 8-bit model means both activation's and weights are in int8 ?
My goal to to generate whisper-tiny tflite model in int8 for both activation and weights
```
from transformers import WhisperForConditionalGeneration
model_name = "openai/whisper-tiny"
model = WhisperForConditionalGeneration.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
<|||||>Hi @nyadla-sys
Thanks for the message
Currently it's the LLM.int8: https://arxiv.org/abs/2208.07339 algorithm that is implemented, specifically the weights are in int8 whereas the activations are in float16.
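As a quick illustrative check on a small checkpoint (assuming a CUDA device is available for the int8 kernels), the linear weights end up stored in int8 while the surrounding computation stays in half precision:
```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-tiny", device_map="auto", load_in_8bit=True
)
print(model.model.decoder.layers[0].fc1.weight.dtype)  # expected: torch.int8
```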
The script that you shared should work out of the box with the latest version of `transformers` & `accelerate`<|||||>@younesbelkada, if activations are in float16/float32, the TFLite Whisper model works well. I am more interested in implementing an int8 version of the TFLite Whisper model. If you have any input, please share it with me
colab [notebook](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/generate_tflite_from_whisper.ipynb) for this <|||||>here is my full int8 [notebook](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_to_onnx_tflite_int8.ipynb) and [model](https://github.com/usefulsensors/openai-whisper/blob/main/models/whisper-int8.tflite) ,but am not really sure how to run inference and transcript the generated output by the model.
With this tiny.en int8 model size comes around ~36MB<|||||>Hey @nyadla-sys - looks like you're using TFWhisperModel. To get logits over the vocabulary (and thus transcriptions), you'll need to use TFWhisperForConditionalGeneration (as explained here: https://github.com/huggingface/transformers/issues/19691#issuecomment-1412440369) |
transformers | 20,347 | closed | past_key_values not accepted in generate with GPTNeoX | ### System Info
Python 3.7.13
transformers 4.22.2
### Who can help?
@LysandreJik @patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The `past_key_values` kwarg is not accepted when calling `model.generate(..., past_key_values=pkv)` on a `GPTNeoxForCausalLM`, even though the `model.forward` does accept this kwarg. It does seem to work fine with other model classes like GPT2.
Minimal example to reproduce error:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import transformers
model_id = "NinedayWang/PolyCoder-160M" # small model with GPTNeoXForCausalLM class
model = AutoModelForCausalLM.from_pretrained(model_id)
tok = AutoTokenizer.from_pretrained(model_id)
assert isinstance(model, transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXForCausalLM)
pkv = torch.rand(
(
1, # batch size
10, # number of tokens
2 * model.config.num_hidden_layers,
model.config.num_attention_heads,
model.config.hidden_size // model.config.num_attention_heads
)
)
out = model.generate(**tok("Hello world"), past_key_values=pkv)
```
Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/transformers/generation_utils.py", line 1146, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/transformers/generation_utils.py", line 862, in _validate_model_kwargs
f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the"
ValueError: The following `model_kwargs` are not used by the model: ['past_key_values'] (note: typos in the generate arguments will also show up in this list)
```
I checked the error location and located the bug ("transformers/generation_utils.py", line 862, in _validate_model_kwargs):
```
unused_model_args = []
model_args = set(inspect.signature(self.prepare_inputs_for_generation).parameters)
# `kwargs` if often used to handle optional forward pass inputs like `attention_mask`. If
# `prepare_inputs_for_generation` doesn't accept `kwargs`, then a stricter check can be made ;)
if "kwargs" in model_args:
model_args |= set(inspect.signature(self.forward).parameters)
for key, value in model_kwargs.items():
if value is not None and key not in model_args:
unused_model_args.append(key)
if unused_model_args:
raise ValueError(
f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the"
" generate arguments will also show up in this list)"
)
```
It first checks the args of `prepare_inputs_for_generation` and only adds the args of `forward` to the accepted list if `"kwargs"` is in the args of `prepare_inputs_for_generation`. However, unlike GPT2, GPTNeoX's `prepare_inputs_for_generation` only accepts `model_kwargs` instead of `kwargs`.
So either the GPTNeoX class should be adapted, or the _validate_model_kwargs method in generation_utils.py.
### Expected behavior
`generate` should be able to pass along all valid `model_kwargs` | 11-21-2022 10:58:50 | 11-21-2022 10:58:50 | cc @gante <|||||>Hey @ValeKnappich 👋
Yeah, `model_kwargs` needs to be added to `_validate_model_kwargs`. I'm on it :)<|||||>Great, thanks :)<|||||>@gante @sgugger
The kwarg validation was only a superficial issue. In fact, now it does not throw an error anymore, however, the `past_key_values` are still not passed on to the forward method. Looks like the `prepare_inputs_for_generation` method is at the core of the problem:
```
def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs):
    input_shape = input_ids.shape

    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_shape)

    # cut decoder_input_ids if past is used
    if past and past[0] is not None:
        input_ids = input_ids[:, -1:]

    return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past}
```
Note that model_kwargs is simply swallowed here. I will create a PR shortly<|||||>@gante @ArthurZucker I think we should rename all occurrences of `"past"` to `"past_key_values"` in `prepare_inputs_for_generation` and deprecate "past" if necessary.
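For illustration, a rough sketch of what the method could look like with both changes (renaming `past` to `past_key_values` and forwarding the extra kwargs); this is only a sketch, not the actual PR:
```python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs):
    input_shape = input_ids.shape
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_shape)

    # only keep the last token once a cache is passed
    if past_key_values is not None and past_key_values[0] is not None:
        input_ids = input_ids[:, -1:]

    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past_key_values,
        **model_kwargs,  # forward any extra kwargs instead of dropping them
    }
```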
`"past"` was simply the name for the past key values states before we renamed everything to `past_key_values`, so this is just a left-over.<|||||>Agreed <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,346 | closed | [Switch Transformers] Fix failing slow test | # What does this PR do?
This PR should fix the currently failing slow test for `SwitchTransformers`.
Putting the model on GPU and running the inference on GPU should fix the test. I have run the test on PyTorch 1.12 (the same version as the daily CI runner), but I need to run it on the daily CI runner itself before merging.
cc @ydshieh @ArthurZucker @sgugger
EDIT: it passes on the CI daily runner, marking it as ready for review | 11-21-2022 10:50:00 | 11-21-2022 10:50:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks very much for the explanation! This should have been fixed now 💪 |
transformers | 20,345 | closed | Loading transformer on AWS Lambda throws OMP errno 38 | ### System Info
Apologies if this is the wrong place to post but we're looking for pointers on tracking down what appears to be a transformers-related error.
We have trained a Spacy 3.3.1 transformer textcat which we're deploying as an AWS Python 3.9 Docker image to AWS Lambda. The model loads and infers correctly on the Linux development host (both using a test Python script and also using AWS SAM local), but fails in the Lambda runtime with OpenMP runtime error no 38 (see Lambda error output below).
A web search suggests this error occurs because Lambda doesn't support Python multiprocessing, specifically it doesn't mount /dev/shm, leading to the error (see links below). The Spacy team have confirmed they do not directly invoke multiprocessing but that transformers does (see https://github.com/explosion/spaCy/discussions/11836#discussioncomment-4193368).
Further testing revealed that loading a blank Spacy model inside the Lambda runtime works perfectly, but loading the transformer on Python 3.7 gives the error, as does the base transformer model spacy.load("en_core_web_trf"). We conclude that transformers is using multiprocessing incompatible with AWS Lambda.
A solution could be to disable transformer multiprocessing when loading the Spacy model. Any suggestions how we can disable OpenMP multiprocessing through a runtime setting? Or as a last resort we may need to override multiprocessing.Pool/Queue with multiprocessing.Process/Pipe which apparently do work on Lamda (suggested in links below).
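One set of runtime-level knobs worth trying is sketched below. These environment variables and calls exist, but whether they actually avoid the SHM2 allocation on Lambda is an assumption to verify:
```python
# Untested sketch: set these in the handler module before spacy/torch are imported.
import os

os.environ.setdefault("OMP_NUM_THREADS", "1")             # limit OpenMP threading
os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")  # disable tokenizers parallelism

import torch
import spacy

torch.set_num_threads(1)

def lambda_handler(event, context):
    nlp = spacy.load("en_core_web_trf")
    return {"statusCode": 200}
```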
**Lambda error output**
```
OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #38: Function not implemented
OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #38: Function not implemented
START RequestId: XYZ Version: $LATEST
RequestId: XYZ Error: Runtime exited with error: signal: aborted
Runtime.ExitError
END RequestId: XYZ
REPORT RequestId: XYZ Duration: 547.37 ms Billed Duration: 548 ms Memory Size: 3008 MB Max Memory Used: 142 MB
```
**Relevant links**
https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/
https://spacy.io/usage/processing-pipelines#multiprocessing
https://forum.opennmt.net/t/unable-to-create-ctranslate2-translator-in-aws-lambda/4922
https://stackoverflow.com/questions/34005930/multiprocessing-semlock-is-not-implemented-when-running-on-aws-lambda
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Create conda environment.yml file with Spacy 3.3.1 (which installs transformers=4.18.0 as a dependency)
```
channels:
- defaults
dependencies:
- python=3.9.15
- spacy-transformers=1.1.5
- spacy-model-en_core_web_sm=3.3.0
- spacy-model-en_core_web_trf=3.3.0
```
2. Create Dockerfile (relevant extract shown below)
```
FROM public.ecr.aws/lambda/python:3.9
RUN wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
RUN conda env update -n base -f environment.yml
```
3. Create Lambda Python handler
```python
import spacy

def lambda_handler(event, context):
    # Works in AWS Lambda Python 3.9 runtime
    nlp = spacy.load("en_core_web_sm")
    # Throws OMP errno 38 in AWS Lambda Python 3.9 runtime
    nlp = spacy.load("en_core_web_trf")
    return {
        "statusCode": 200
    }
```
### Expected behavior
Lambda execution completes successfully and returns code 200. | 11-21-2022 10:14:23 | 11-21-2022 10:14:23 | There is little we can do without knowing which specific code in Transformers you are running. Loading a model with Transformers in general does not use Python multiprocessing for instance, so it's a bit hard for us to know what you want us to fix without a clear reproducer (using Transformers only and not a third-party library).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,344 | closed | [Maskformer] Add MaskFormerSwin backbone | # What does this PR do?
This is part 2 of 3 of the big #20204 PR.
This PR adds MaskFormerSwin to the AutoBackbone API. This ensures that the model can be used as a backbone with the MaskFormer framework.
As it makes more sense to move MaskFormerSwin to its own modeling files, this PR implements it in a separate `modeling_maskformer_swin.py` file, along with a configuration implemented in `configuration_maskformer_swin.py`.
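For context, a rough usage sketch of the AutoBackbone API this plugs into (the checkpoint name is a placeholder and the exact stage names are an assumption):
```python
import torch
from transformers import AutoBackbone

backbone = AutoBackbone.from_pretrained(
    "<maskformer-swin-checkpoint>", out_features=["stage1", "stage2", "stage3", "stage4"]
)
pixel_values = torch.randn(1, 3, 224, 224)
feature_maps = backbone(pixel_values).feature_maps  # one tensor per requested stage
```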
To do:
- [x] wait for #20407 to be merged to make the backbone get tested by the common tests, add support for hidden states and attentions | 11-21-2022 09:46:52 | 11-21-2022 09:46:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,343 | closed | DataCollator that allows strings in datasets and untouch the strings | ### Feature request
A default data_collator that allows strings to pass through and untouch the strings.
### Motivation
When I use `remove_unused_columns=True`, I can't get the raw input sequences in string format in my batch.
When I use `remove_unused_columns=False`, I can't use the default data collator, because it raises the following error.
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`translation` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
I constantly run into this issue and have to implement my own data collator each time. I wonder if there is (or will be) an official implementation for this feature.
### Your contribution
I can submit a PR, but I am not sure if the feature already exists somewhere and I just missed it. | 11-21-2022 06:44:50 | 11-21-2022 06:44:50 | Data collators inside Transformers exist for the Trainer, which in turn sends all inputs to the model directly. The model won't be able to accept the raw strings, so the use case is not obvious.
Transformers is a library of models at its core, so while we provide some functionality to train/fine-tune them easily, our goal isn't to be comprehensive :-)<|||||>But it's very easy to pop those raw strings out of the batch before passing into the model. If the raw strings are not passed into the batch, it will be hard to align the raw string to tokenized data because the data is randomly shuffled.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is there a reason for "label_ids" to be constrained to numeric values (torch.long and torch.float)? I'm working on a use case where the labels are alphanumeric, and it would be nice to preserve the original ID as it traces back to the original source of data. You can always convert alphanumeric values to numeric, but then you have to keep a dictionary offline to map them back. The other workaround would be to build our own custom data collator as suggested in the original post, but why write another function for this small change? I agree that it doesn't make sense to allow strings for a model that won't accept them. Can we make an exception for "label_ids"? |
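A minimal sketch of the kind of collator wrapper both commenters describe (names are illustrative; string columns are held out of tensor collation and re-attached so they stay aligned with the shuffled batch):
```python
from transformers import default_data_collator

def collate_keep_strings(features):
    # Keep string-valued columns (e.g. raw text or alphanumeric ids) out of tensor collation.
    string_keys = [k for k, v in features[0].items() if isinstance(v, str)]
    kept = {k: [f.pop(k) for f in features] for k in string_keys}
    batch = default_data_collator(features)
    batch.update(kept)  # pop these again before calling the model
    return batch
```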
transformers | 20,342 | closed | BrokenPipe Error while training GPT-2 from scratch with run_clm.py | ### System Info
Transformers version: v4.24.0
Python version: 3.9.13
system: EC2 with g5.12xlarge, Ubuntu, Pytorch (1.12.1)
Data size : ~250GB
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`python3 -m torch.distributed.launch --nproc_per_node 4 transformers/examples/pytorch/language-modeling/2_run_clm.py --tokenizer_name hf_tokenizer_model --model_type gpt2 --train_file train_file.txt --validation_split_percentage 10 --per_device_train_batch_size 32 --per_device_eval_batch_size 8 --gradient_accumulation_steps 8 --eval_accumulation_steps 8 --block_size 512 --do_train --do_eval --fp16 --output_dir MODELS --logging_dir logs --cache_dir cache --overwrite_output_dir yes --overwrite_cache yes --num_train_epochs 10 --no_cuda False --learning_rate 1e-5 --save_on_each_node False --seed 42 --disable_tqdm False --config_overrides="n_layer=12,vocab_size=96000,eos_token_id=0,bos_token_id=0" --auto_find_batch_size True --logging_strategy steps --save_strategy steps --evaluation_strategy steps --logging_steps 25000 --save_steps 25000 --eval_steps 25000 --preprocessing_num_workers 24 --dataloader_num_workers 24 --save_total_limit 15`
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/pool.py", line 136, in worker
put((job, i, (False, wrapped)))
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/queues.py", line 380, in put
self._writer.send_bytes(obj)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 208, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 419, in _send_bytes
self._send(header + buf)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 376, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Running tokenizer on dataset #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 38169/38170 [3:20:47<00:00, 3.17ba/s]
Process ForkPoolWorker-3:
Traceback (most recent call last):00%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 38169/38170 [3:20:47<00:00, 5.32ba/s]
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/pool.py", line 131, in worker
put((job, i, result))
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/queues.py", line 380, in put
self._writer.send_bytes(obj)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 208, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 419, in _send_bytes
self._send(header + buf)
File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 376, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
```
This happened when tokenization was just about to complete
### Expected behavior
After tokenization completes, grouping of texts (tokens) should start | 11-21-2022 05:40:21 | 11-21-2022 05:40:21 | Hey @sanprit,
What exactly is `2_run_clm.py`? Was this a typo and supposed to be `run_clm.py`? Also we would need to be able to re-run this command to reproduce the error => how can we access `train_file.txt`?<|||||>2_run_clm.py is run_clm.py with only some logging statements added. Regarding the train file, it is too large for me to share; can you please run it on ~230 GB of random data?<|||||>Now I have diagnosed the problem ("why I was getting the broken pipe error"). While tokenising and grouping the texts, the process requires a huge amount of disk space midway through to store the cache, although it deletes most of it at the end. When working/testing with a small dataset, I was checking only the final cache size and extrapolating that size to the big data. But sizing the disk by extrapolating the final cache size is wrong, because midway through it creates a much larger cache.
One observation: even for 20 MB of text data, the final cache size is only 138 GB, but midway through tokenising and grouping it builds up a ~500 GB cache, so we need at least a 500 GB disk; otherwise the broken pipe error will occur.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,341 | closed | Add `accelerate` support for LongT5 models | Signed-off-by: peter szemraj <[email protected]>
# What does this PR do?
This PR adds `accelerate` support for the longT5 models (i.e., makes it possible to use `device_map="auto"`), so these models can be loaded in 8-bit using `load_in_8bit=True`.
This helps enable inference with trained/fine-tuned SoTA long summarization models using limited memory :relaxed:
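As a rough illustration of what this enables (requires `accelerate` and `bitsandbytes` on a CUDA machine; the checkpoint is the fine-tuned model linked below):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-base-16384-book-summary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```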
Took inspiration from reviewing similar PRs for other models: #19912 and #19927
cc @sgugger
## test results
I made [a Colab notebook](https://colab.research.google.com/gist/pszemraj/6ea0a3046452fc51061f4bde2df0aa77/testing-accelerate-long-t5-tglobal-base-16384-book-summary.ipynb) that clones the branch from my fork to demo the `load_in_8bit=True` working. Everything else is the same for comparison purposes (_except the function that says the model size_) as [the fp32/standard notebook](https://colab.research.google.com/gist/pszemraj/d9a0495861776168fd5cdcd7731bc4ee/example-long-t5-tglobal-base-16384-book-summary.ipynb) listed on [my fine-tuned model card](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary).
I also ran the tests for `longT5` locally:
```bash
$ python -m pytest -n auto --dist=loadfile -s -v tests/models/longt5/test_modeling_longt5.py
( ... many things here ...)
=================================================== 196 passed, 58 skipped, 118 warnings in 30.49s ===================================================
```
| 11-21-2022 01:11:03 | 11-21-2022 01:11:03 | cc @KMFODA for inputs on tests & more :crossed_fingers: <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the feedback & good catch on the Colab! I've updated the notebook - will run and resolve the slow tests/accelerate items later today/tomorrow and revert back 👌<|||||>Hey @pszemraj !
How is the integration going 💪 ? Let me know if I can help at some point to debug / make the tests pass ;) !<|||||>Hi @pszemraj !
Is it ok if I try to take over the PR? this addition could be very nice to the lib! Let me know what do you think :) <|||||>Hey! let me give it a stab today (I was sick for a week) if you don't see anything by tomorrow, feel free to take it home!
<|||||>@younesbelkada hey - was trying to get the tests to pass and evaluate further, but unfortunately the machine I _do_ have GPU access to ran into some install issues with the `dev` dependencies for `pytest`, etc.
If you're willing to finish this, that would probably be easiest 😅 I'll add the line for accelerate as you suggested and rebase as per the contrib guidelines, feel free to take whatever you find useful :) <|||||>Thanks a lot @pszemraj for your great efforts, will have a look ASAP ;) this is definitely in my TODO list<|||||>thanks so much! I see you pushed so I will leave you to it (but feel free to let me know if questions or you need me to change anything on my end)
then we can get [this bad boi](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) usable on free Colab runtimes :) <|||||>Thanks for taking it home @younesbelkada! and thanks for the review @sgugger. Happy to help :) |
transformers | 20,340 | closed | [FLAX] Add dtype to embedding for bert/bart/opt/t5 | ## What does this PR do?
This PR is the follow-up of #18462. It adds dtype to `nn.Embed` for more common Flax models, including bert, bart, opt, t5, and their copies.
This dtype is necessary for mixed precision training.
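A small usage sketch of what this enables (the embedding layer now also respects the computation `dtype`, which is what mixed-precision runs need; model name is just an example):
```python
import jax.numpy as jnp
from transformers import FlaxBertModel

# Load with bfloat16 as the computation dtype so the embedding output matches the rest of the model.
model = FlaxBertModel.from_pretrained("bert-base-uncased", dtype=jnp.bfloat16)
```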
## Who can review?
@patrickvonplaten, @LysandreJik, @sanchit-gandhi | 11-21-2022 00:53:45 | 11-21-2022 00:53:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi @younesbelkada @ArthurZucker here<|||||>Can we merge this @ArthurZucker ? How to check whether the slow tests pass?<|||||>Thanks @merrymercy !
You can run slow tests for `t5` for example by running: `RUN_SLOW=1 pytest tests/models/t5/test_modeling_flax_t5.py` - regarding what has been suggested by @sanchit-gandhi , you can just add a new test that initializes a model let's say in `bf16` and tests if the generated sequence is the same than the one that is expected:
```
@slow
def test_small_generation_bf16(self):
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small", dtype=jnp.bfloat16)
EXPECTED_OUTPUT = "XXXX"
self.assertTrue(model.params["shared"]["embedding"].dtype == jnp.bfloat16)
model.config.max_length = 8
model.config.num_beams = 1
model.config.do_sample = False
tokenizer = T5Tokenizer.from_pretrained("t5-small")
input_ids = tokenizer("summarize: Hello there", return_tensors="np").input_ids
sequences = model.generate(input_ids).sequences
output_str = tokenizer.batch_decode(sequences, skip_special_tokens=True)[0]
self.assertTrue(output_str == EXPECTED_OUTPUT)
```<|||||>A bf16 test case is added as suggested by @younesbelkada . I checked that the slow tests still pass, because most slow tests run in fp32 and this PR does not change the behavior of any fp32 tests.
However, I do notice some slow tests fail on my machine (V100, jax=0.3.25, flax=0.6.2). I think this is not related to my PR, because they fail even with the transformers main branch and transformers v4.24.0. Fixing them is outside the scope of this PR. I can confirm that all tests that pass on the main branch still pass after my PR.
Since we get two approvals and the test case is added, can we merge it now?<|||||>Thanks again for your contribution! |
transformers | 20,339 | closed | Add Spanish translation of pr_checks.mdx | # What does this PR do?
Add the Spanish translation for `pr_checks.mdx` as part of the #15947 issue.
Changes include the Spanish version of the original document and the updated `_toctree.yml` file.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1321245149)_.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 11-20-2022 21:32:03 | 11-20-2022 21:32:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @osanseviero. Can you help review this PR, please? <|||||>Only minor typos were spotted during the first review. I already addressed them with my latest commit.
@sgugger, if you agree, we can skip a second review from [osanseviero](https://github.com/osanseviero) and merge this PR.
Thanks! |
transformers | 20,338 | closed | TrOCR Encoder &Decoder Replacement | Hi @NielsRogge thanks for the great tutorial For TrOCR
1- Can I get the whole IAM dataset in the (processed) format used during fine-tuning? I can only see the test set.
2- I want to replace the encoder with **ViT, Swin, or DeiT** and the decoder with **BERT**, **GPT-2**, or another better decoder in the original TrOCR, or at least modify the decoder part,
but I got a huge CER of 76%. Can you please suggest what might be possible to reach a better result? Later on I also want to fine-tune on a specific language.
```python
from datetime import datetime

from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    VisionEncoderDecoderModel,
    default_data_collator,
)

import fun  # the author's own helper module; `processor`, `load_dataset`, `create_datasets` and `compute_metrics` are defined in the author's project

# modifying the tokenizer
processor.tokenizer = fun.AutoTokenizer.from_pretrained('bert-base-uncased')

def trocr_model_config(model):
    # set decoder config to causal lm
    model.config.decoder.is_decoder = True
    model.config.decoder.add_cross_attention = True
    # set special tokens used for creating the decoder_input_ids from the labels
    model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
    assert model.config.decoder_start_token_id == processor.tokenizer.cls_token_id
    model.config.pad_token_id = processor.tokenizer.pad_token_id
    # make sure vocab size is set correctly
    model.config.vocab_size = model.config.decoder.vocab_size
    # set beam search parameters
    model.config.eos_token_id = processor.tokenizer.sep_token_id
    model.config.max_length = 128
    model.config.early_stopping = True
    model.config.no_repeat_ngram_size = 3
    model.config.length_penalty = 2.0
    model.config.num_beams = 4
    return model

def main():
    df = load_dataset()
    print(df.head(4))
    train_dataset, eval_dataset = create_datasets(df)
    print("Number of training examples:", len(train_dataset))
    print("Number of validation examples:", len(eval_dataset))
    encoding = train_dataset[0]
    for k, v in encoding.items():
        print(k, v.shape)
    labels = encoding['labels']
    labels[labels == -100] = processor.tokenizer.pad_token_id
    label_str = processor.decode(labels, skip_special_tokens=True)
    print(label_str)
    model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-384', 'bert-base-uncased')
    # setting model configuration
    configured_model = trocr_model_config(model)
    training_args = Seq2SeqTrainingArguments(
        predict_with_generate=True,
        evaluation_strategy="steps",
        learning_rate=2e-5,
        num_train_epochs=12,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        fp16=True,
        output_dir=f'./models/vit_bert_IAM{datetime.now().strftime("%Y%m%d%H%M%S")}',
        logging_steps=100,
        save_steps=1000,
        eval_steps=500,
    )
    # instantiate trainer
    trainer = Seq2SeqTrainer(
        model=configured_model,
        tokenizer=processor.feature_extractor,
        args=training_args,
        compute_metrics=compute_metrics,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        data_collator=default_data_collator,
    )
    trainer.train()

if __name__ == '__main__':
    main()
```
| 11-20-2022 20:33:50 | 11-20-2022 20:33:50 | @sgugger #Bert #Good_new_ssue<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge ping on this one.<|||||>> can I get the whole datasets for IAM as the format (processed) used during fine-tuning because I only can see the test set
The TrOCR authors have released the dataset [here](https://github.com/microsoft/unilm/tree/master/trocr), so I'd recommend taking a look at the unilm issues and perhaps ask the TrOCR authors to release it
> I want to replace the Encoder with Vit or Swin or Diet and the Decoder with Bert or GPT-2 or another beast decoder In the original TrOCR or at least modify the decoder part
Note that if you replace the encoder/decoder, you'll need to fine-tune the model on additional (image, text) pairs. For that I'd recommend checking out [this thread](https://github.com/huggingface/transformers/issues/15823). Also check my [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR) for tutorials on fine-tuning TrOCR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,337 | closed | perpexity | null | 11-20-2022 15:29:25 | 11-20-2022 15:29:25 | |
transformers | 20,336 | closed | Fix issue 19904 | Fixes issue #19904
| 11-20-2022 08:34:43 | 11-20-2022 08:34:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Please tag the right persons for reviewing this PR . |
transformers | 20,335 | closed | Add MobileNetV1 | ### Model description
MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Paper](https://arxiv.org/abs/1704.04861)
Hi there, I wonder whether MobileNetV1 is going to be added or not, I see there's already a dedicated [card](https://huggingface.co/google/mobilenet_v1_1.0_224) on model's hub.
@hollance | 11-20-2022 01:45:46 | 11-20-2022 01:45:46 | There's an open PR for it that is (almost) ready to merge. I just need to rebase it because it conflicts with MobileNetV2 that was recently added. I'll probably get around to this later this week.
See the PR: https://github.com/huggingface/transformers/pull/17799<|||||>Oh, thanks. Closing this issue.<|||||>Just a FYI: it has been merged now with the main branch of Transformers. :-) |
transformers | 20,334 | open | longformer_content | ### Model description
This model will generate a content score for a summary given the context and the summary itself.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 11-19-2022 15:57:10 | 11-19-2022 15:57:10 | Which new model are you talking about? Please fill the template properly otherwise no one will be able to help. |
transformers | 20,333 | closed | Use tiny ONNX models for text modality | # What does this PR do?
Partially addresses https://github.com/huggingface/transformers/issues/18819
This PR uses the new tiny random models from @ydshieh to speed up the ONNX tests for the **text** modality. This brings the slow ONNX tests down to about 1.5h. The other modalities will be added once we have tiny models that can run a forward pass in general.
Note that the following text models didn't work when running
```
RUN_SLOW=1 pytest -v tests/onnx/test_onnx_v2.py -x -k "model_arch"
```
so have been excluded for now:
**hf-internal-testing/tiny-random-IBertModel**
* Cannot export model to ONNX
**hf-internal-testing/tiny-random-LayoutLMv3Model**
* Not strictly a text model, but caught this in my testing:
```
tests/onnx/test_onnx_v2.py:423: in test_pytorch_export
self._onnx_export(test_name, name, model_name, feature, onnx_config_class_constructor)
feature = 'default'
model_name = 'hf-internal-testing/tiny-random-LayoutLMv3Model'
name = 'layoutlmv3'
onnx_config_class_constructor = functools.partial(<bound method OnnxConfig.from_model_config of <class 'transformers.models.layoutlmv3.configuration_layoutlmv3.LayoutLMv3OnnxConfig'>>, task='default')
self = <tests.onnx.test_onnx_v2.OnnxExportTestCaseV2 testMethod=test_pytorch_export_091_layoutlmv3_default>
test_name = 'layoutlmv3_default'
tests/onnx/test_onnx_v2.py:351: in _onnx_export
self.fail(f"{name}, {feature} -> {e}")
E AssertionError: layoutlmv3, default -> The size of tensor a (12545) must match the size of tensor b (5) at non-singleton dimension 1
```
**hf-internal-testing/tiny-random-LongformerModel**
* Might be related to #20292
```
self = <onnxruntime.capi.onnxruntime_inference_collection.InferenceSession object at 0x7fc37b5d0a90>, output_names = ['last_hidden_state', 'pooler_output']
input_feed = {'attention_mask': array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1...put_ids': array([[0, 3, 3, 3, 3, 3, 3, 3, 2],
[0, 3, 3, 3, 3, 3, 3, 3, 2],
[0, 3, 3, 3, 3, 3, 3, 3, 2]])}
run_options = None
def run(self, output_names, input_feed, run_options=None):
"""
Compute the predictions.
:param output_names: name of the outputs
:param input_feed: dictionary ``{ input_name: input_value }``
:param run_options: See :class:`onnxruntime.RunOptions`.
:return: list of results, every result is either a numpy array,
a sparse tensor, a list or a dictionary.
::
sess.run([output_name], {input_name: x})
"""
num_required_inputs = len(self._inputs_meta)
num_inputs = len(input_feed)
# the graph may have optional inputs used to override initializers. allow for that.
if num_inputs < num_required_inputs:
raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs))
if not output_names:
output_names = [output.name for output in self._outputs_meta]
try:
> return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_2943' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{3,4,2,5}, requested shape:{3,1,9,5}
```
**hf-internal-testing/tiny-random-BlenderbotSmallModel**
* Problem with tokenizer vocab
```
self = <[AttributeError("'ByteLevelBPETokenizer' object has no attribute '_tokenizer'") raised in repr()] ByteLevelBPETokenizer object at 0x7fc2a041e610>
vocab = '/Users/lewtun/.cache/huggingface/hub/models--hf-internal-testing--tiny-random-BlenderbotSmallModel/snapshots/1eff1bdb5f97b473480b1ec8af85f58439e88906/vocab.json'
merges = '/Users/lewtun/.cache/huggingface/hub/models--hf-internal-testing--tiny-random-BlenderbotSmallModel/snapshots/1eff1bdb5f97b473480b1ec8af85f58439e88906/merges.txt'
add_prefix_space = False, lowercase = False, dropout = None, unicode_normalizer = None, continuing_subword_prefix = None, end_of_word_suffix = None, trim_offsets = True
def __init__(
self,
vocab: Optional[Union[str, Dict[str, int]]] = None,
merges: Optional[Union[str, Dict[Tuple[int, int], Tuple[int, int]]]] = None,
add_prefix_space: bool = False,
lowercase: bool = False,
dropout: Optional[float] = None,
unicode_normalizer: Optional[str] = None,
continuing_subword_prefix: Optional[str] = None,
end_of_word_suffix: Optional[str] = None,
trim_offsets: bool = False,
):
if vocab is not None and merges is not None:
tokenizer = Tokenizer(
> BPE(
vocab,
merges,
dropout=dropout,
continuing_subword_prefix=continuing_subword_prefix or "",
end_of_word_suffix=end_of_word_suffix or "",
)
)
E Exception: Error while initializing BPE: Token `_</w>` out of vocabulary
```
**hf-internal-testing/tiny-random-MarianModel**
* Problem with config
```
self = <[AttributeError("'Embedding' object has no attribute 'padding_idx'") raised in repr()] Embedding object at 0x7fd700e63d00>, num_embeddings = 99, embedding_dim = 16
padding_idx = 58100, max_norm = None, norm_type = 2.0, scale_grad_by_freq = False, sparse = False, _weight = None, device = None, dtype = None
def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None,
max_norm: Optional[float] = None, norm_type: float = 2., scale_grad_by_freq: bool = False,
sparse: bool = False, _weight: Optional[Tensor] = None,
device=None, dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
super(Embedding, self).__init__()
self.num_embeddings = num_embeddings
self.embedding_dim = embedding_dim
if padding_idx is not None:
if padding_idx > 0:
> assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'
E AssertionError: Padding_idx must be within num_embeddings
```
**hf-internal-testing/tiny-random-MBartModel**
* Problem with embedding config
```
def embedding(
input: Tensor,
weight: Tensor,
padding_idx: Optional[int] = None,
max_norm: Optional[float] = None,
norm_type: float = 2.0,
scale_grad_by_freq: bool = False,
sparse: bool = False,
) -> Tensor:
if has_torch_function_variadic(input, weight):
return handle_torch_function(
embedding,
(input, weight),
input,
weight,
padding_idx=padding_idx,
max_norm=max_norm,
norm_type=norm_type,
scale_grad_by_freq=scale_grad_by_freq,
sparse=sparse,
)
if padding_idx is not None:
if padding_idx > 0:
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
elif padding_idx < 0:
assert padding_idx >= -weight.size(0), "Padding_idx must be within num_embeddings"
padding_idx = weight.size(0) + padding_idx
else:
padding_idx = -1
if max_norm is not None:
# Note [embedding_renorm contiguous]
# `embedding_renorm_` will call .contiguous() on input anyways, so we
# call it here and take advantage of the improved locality in the
# `embedding` call below too.
input = input.contiguous()
# Note [embedding_renorm set_grad_enabled]
# XXX: equivalent to
# with torch.no_grad():
# torch.embedding_renorm_
# remove once script supports set_grad_enabled
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
E IndexError: index out of range in self
``` | 11-19-2022 14:17:13 | 11-19-2022 14:17:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@lewtun Thank you for working on this 💯
I force pushed a commit to resolve a conflict on `main`.
Thank you for sharing the models that are not working. Keep in mind that there are some difficulties in the tiny model creation:
- no model tester
- can't convert tokenizer/processor correctly
- not super easy to set some config attributes correctly for a few particular models
- etc.
We might need a few more iterations to make things more stable (i.e. need to create new tiny models), but I will take care of the failing ONNX tests if the new tiny models break the tests.<|||||>Let me merge later today once the tests pass with the new set of tiny models (which is currently being built)<|||||>> Let me merge later today once the tests pass with the new set of tiny models (which is currently being built)
Sounds great! Let me know if you need any help 🙏 <|||||>Running on CPU with only the new tiny random models
> 293 passed, 2 skipped, 23628 warnings in 760.95s (0:12:40)
🚀 🚀 🚀 🚀 <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20333). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,332 | closed | OPTForCausalLM - ValueError: The following `model_kwargs` are not used by the model: ['new_doc'] | ### System Info
Hello. I am using this model : https://huggingface.co/facebook/galactica-6.7b
The example is pretty straightforward:
```
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
**new GALACTICA model supports the below inputs but I am getting error. This is the code I run**
```
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b")
input_text = "The benefits of deadlifting\n\n"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids,new_doc=True, top_p=0.7, max_length=2000)
print(tokenizer.decode(outputs[0]))
```

### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
test
### Expected behavior
test | 11-19-2022 13:15:57 | 11-19-2022 13:15:57 | @FurkanGozukara I do not see the new_doc parameter accepted neither in code nor in example scripts ? Can you point me to where you got example scripts with new_doc parameter ? <|||||>> @FurkanGozukara I do not see the new_doc parameter accepted neither in code nor in example scripts ? Can you point me to where you got example scripts with new_doc parameter ?
https://github.com/paperswithcode/galai

<|||||>@FurkanGozukara I think there is a mix-up: the Hugging Face API does not support the new_doc parameter so far; the new_doc parameter is part of the galai library released by Papers with Code.
https://github.com/paperswithcode/galai/blob/f6d9b0a5b35a0eda53597a5ea7d51963bfc05de1/galai/model.py#L88<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
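For completeness, a sketch of an equivalent call using only kwargs that `generate` understands; note that `top_p` only takes effect together with `do_sample=True`, and the galai-specific `new_doc` behaviour has no direct `generate` argument:
```python
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b")
input_ids = tokenizer("The benefits of deadlifting\n\n", return_tensors="pt").input_ids
outputs = model.generate(input_ids, do_sample=True, top_p=0.7, max_length=2000)
print(tokenizer.decode(outputs[0]))
```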
transformers | 20,331 | closed | Fix a typo in BLOOM model docs | # What does this PR do?
`BigSicence` -> `BigScience`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-19-2022 12:18:27 | 11-19-2022 12:18:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,330 | closed | Models cannot be loaded if they have dot "." in name | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.31
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@NielsRogge @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Try to use `google/mobilenet_v2_1.0_224`:
```
model = AutoModelForImageClassification.from_pretrained(
"google/mobilenet_v2_1.0_224",
num_labels=2,
ignore_mismatched_sizes=True,
)
```
I get:
```
Traceback (most recent call last):
File "train.py", line 217, in <module>
model = AutoModelForImageClassification.from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 434, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 796, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 503, in __getitem__
raise KeyError(key)
KeyError
: 'mobilenet_v2'
```
I suspect that this happens due to line 790 in `configuration_auto.py`:
```
module_file, class_name = class_ref.split(".")
```
Since this model has a dot "." in its name, it is split here and the library looks for `mobilenet_v2` instead, which does not exist.
### Expected behavior
Load a model from HuggingFace Hub. | 11-19-2022 10:36:51 | 11-19-2022 10:36:51 | Hi, @j-adamczyk. The problem here is that `transformers` version that includes `MobileNetV2` model has not yet been released to [PyPi](https://pypi.org/project/transformers/#history) etc., hence there's no `'mobilenet_v2'` key.
Though you can install the `transformers` from source, and start using `MobileNetV2` right after.<|||||>To install from [source](https://huggingface.co/transformers/v3.5.1/installation.html#installing-from-source), clone the repository and install with the following commands:
```
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing as this issue seems resolved. |
transformers | 20,329 | closed | cannot import electratokenizer or model. It works on colabnotebook but not in python | ### System Info
I have TensorFlow version 2.11.0 and transformers version 4.24.0,
and I cannot import TFElectraModel. It works for me in Google Colab but not on my PC.
from transformers import TFElectraModel
gives me the following error
RuntimeError: Failed to import transformers.models.electra.modeling_tf_electra because of the following error (look up to see its traceback):
No module named 'keras.saving.hdf5_format'
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
in windows install python 3.10
install tensorflow
install numpy
install scikit-learn
install transformers
import tensorflow as tf
import numpy as np
import transformers
from transformers import ElectraTokenizer
from transformers import TFElectraModel
### Expected behavior
It should load the TFElectraModel, i have loaded this model a million times ins google colab and it still works there even now. | 11-19-2022 08:35:55 | 11-19-2022 08:35:55 | Transformers is not currently compatible with TensorFlow 2.11 (which broke a lot of things compared to the previous versions). It should be fixed on main so you can either do:
- an install of Transformers from source
- or downgrade your TensorFlow to 2.10<|||||>Hi There! I figured as much, so I downgraded to tensorflow 2.10 and
installed transformers from source.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,328 | closed | MAE Noise Was NOT Removed In Prediction For TFViTMAEModel / ViTMAEModel | ### System Info
transformers version: v4.24.0 (latest)
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
import requests
from PIL import Image  # needed for Image.open below
from transformers import AutoFeatureExtractor, TFViTMAEModel
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-mae-base")
model = TFViTMAEModel.from_pretrained("facebook/vit-mae-base")
inputs = feature_extractor(images=image, return_tensors="tf")
outputs1 = model(**inputs)
outputs2 = model(**inputs)
res1 = outputs1.last_hidden_state.numpy()
res2 = outputs2.last_hidden_state.numpy()
np.allclose(res1, res2, rtol=1e-04, atol=1e-05) # This returns False as noises are introduced.
```
### Expected behavior
It should return True.
It returns False as noises are added in `random_masking()` which hasn't been removed for prediction. There are no options to do that. | 11-19-2022 03:21:12 | 11-19-2022 03:21:12 | Hi,
ViTMAE generates a random mask internally to mask out patches. Hence forwarding the same inputs twice through the model will result in different hidden states. If you want to ensure reproducability, you can pass the `noise` argument. See [here](https://github.com/huggingface/transformers/blob/8503cc755050c6ed5bc771e3244c29b71be1841e/tests/models/vit_mae/test_modeling_vit_mae.py#L317) for an example.<|||||>Thanks @NielsRogge for the reply. I'm using this model to extract image features and I'm using some hacks to remove noise and masks for inference.
After reading the [ViTMAE code](https://github.com/facebookresearch/mae/blob/main/FINETUNE.md), I found that the trained ViTMAE model can be loaded as a regular ViT model directly. Just tested with `TFViTMAEModel.from_pretrained("facebook/vit-mae-base")` and it works. Could you help confirm is this the correct approach to use pre-trained MAE model? <|||||>Yes, that's also shown in the docs. Note that `TFViTMAEModel.from_pretrained("facebook/vit-mae-base")` will just load the base Encoder, which does not include the decoder used during pre-training. For that you would need to load `TFViTMAEForPreTraining`.
However, since you want to use the model for image features, it's indeed advised to just load the Transformer encoder. |
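For readers who do need the full pre-training model to behave deterministically, a sketch of the `noise` workaround mentioned above (`inputs` as prepared in the reproduction snippet):
```python
import numpy as np
from transformers import TFViTMAEModel

model = TFViTMAEModel.from_pretrained("facebook/vit-mae-base")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
noise = np.random.uniform(size=(1, num_patches))  # fixed noise -> identical random masking
out1 = model(**inputs, noise=noise)
out2 = model(**inputs, noise=noise)
np.allclose(out1.last_hidden_state.numpy(), out2.last_hidden_state.numpy(), atol=1e-5)  # True
```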
transformers | 20,327 | closed | Mismatch in torch size | I am trying to fine-tune an NER model using the "malduwais/distilbert-base-uncased-finetuned-ner" checkpoint. The dataset I am fine-tuning on has only 5 label classes, as opposed to the 9 the model was trained on. I get this error when calling trainer.train():
```
RuntimeError: Error(s) in loading state_dict for DistilBertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([9, 768]) from checkpoint, the shape in current model is torch.Size([5, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([5]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
is there a way to adjust the size when finetuning? Thank you | 11-18-2022 23:12:57 | 11-18-2022 23:12:57 | As the error message told you, load your model with `ignore_mismatched_sizes=True`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
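A minimal illustration of the suggested fix (resize the classification head to 5 labels and skip loading the old 9-way head weights):
```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "malduwais/distilbert-base-uncased-finetuned-ner",
    num_labels=5,
    ignore_mismatched_sizes=True,
)
```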
transformers | 20,326 | closed | [WIP] Update: ignore padding support for TransfoXL training when n_clusters==0 | # What does this PR do?
This PR solves [an issue](https://github.com/huggingface/transformers/issues/17446) I raised about TransformerXL. As @sgugger mentioned in [another issue](https://github.com/huggingface/transformers/issues/19914) I raised, he [says](https://github.com/huggingface/transformers/issues/19914#issuecomment-1293656206)
> I don't think TransformerXL supports FP16 as this is an old model with very specific code for the softmax layer. This won't be an issue we will fix ourselves given that Transformer-XL is not very used anymore, but if someone wants to make a PR, we'll review!
I'm using TransformerXL in a [research project](https://github.com/StefanHeng/Symbolic-Music-Generation) and disabling the adaptive softmax is an option I would like to explore. So here I am.
In the `n_clusters==0` branch, the current TransformerXL implementation does not work with padding (-100); it breaks at `.gather(1, labels)`.
This PR solves that bug.
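To illustrate the general idea (this is not the actual diff, just a sketch of masking out the -100 positions so padded tokens contribute nothing to the loss):
```python
import torch

def nll_ignoring_padding(log_probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # log_probs: (n_tokens, vocab_size); labels: (n_tokens,) with -100 marking padding
    keep = labels != -100
    safe_labels = labels.clamp(min=0)  # avoid invalid indices for the padded slots
    nll = -log_probs.gather(1, safe_labels.unsqueeze(1)).squeeze(1)
    return nll[keep]
```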
I tested with my research data and confirmed my implementation is working. It's able to overfit the training data to up to 99% next-token prediction on multiple hyper-parameter setups, for #samples from 8 to 48, batch size from 48 to 64, epochs from 128 to 512.
Fixes [Issue 19914](https://github.com/huggingface/transformers/issues/19914)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
@patrickvonplaten
@thomwolf
| 11-18-2022 22:24:56 | 11-18-2022 22:24:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20326). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your comments!
> Thanks a lot for working on this! Can you make sure to run `make style` on your branch? You will also need to rebase on main to fix all the TensorFlow-related tests (they broke with last TensorFlow release).
Sure will do. <|||||>> Though it doesn't work the examples where we encourage labels=inputs["input_ids"]
@sgugger Can you clarify? Not sure what do you mean <|||||>Please use -100 for padding in the labels as @patrickvonplaten indicated.<|||||>Sure, do we keep the 2 branches (line 113), i.e. support a case where we don't filter out -100?
I'm fine either way. I added padding as an option because creating that extra mask token would consume extra memory. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>comment to reactivate thread
|
transformers | 20,325 | closed | Add LayerScale to NAT/DiNAT |
# What does this PR do?
This follows PR #20219 .
I completely dropped the ball on LayerScale in the original PR.
This is just an optional argument in both models, and is only activated for larger variants in order to provide training stability.
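For context, here is a minimal sketch of the LayerScale idea (a learnable per-channel scale on the residual branch, initialised to a small value). It is not the exact module added in this PR:
```python
import torch
from torch import nn


class LayerScale(nn.Module):
    """Scales each channel of a residual branch by a learnable factor."""

    def __init__(self, dim, init_value=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, hidden_state):
        return self.gamma * hidden_state
```
Smaller variants can keep it disabled (identity), which is why it is exposed as an optional config argument.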
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @NielsRogge . | 11-18-2022 22:01:53 | 11-18-2022 22:01:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,324 | closed | Added Luke Doctests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds improved Doctests for LUKE
Issue: https://github.com/huggingface/transformers/issues/16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh @patrickvonplaten @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-18-2022 21:26:27 | 11-18-2022 21:26:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh maybe :-) <|||||>The checkpoints in hf-internal-testing are only there for our tests, they shouldn't be used. Please use real checkpoints in the doc examples.<|||||>@Tegzes You can search on the Hub to find if there are luke based models for the corresponding tasks. If most tasks lack such a checkpoint, we will have to skip Luke for doctest unfortunately.<|||||>> The checkpoints in hf-internal-testing are only there for our tests, they shouldn't be used. Please use real checkpoints in the doc examples.
I only used the hf-internal-testing checkpoint because these were mentioned in the comment section of the issue. But I will change them accordingly.<|||||>@ydshieh I've tried most of the luke model checkpoints and unfortunately the only ones that exhibit a fully reproducible behavior are the ones from hf-internal-testing.
If it would be acceptable to you, I would propose the merge as it is, backed by the fact that these internal types of checkpoints were used in the past for multiple such doctests improvements, as suggested in the comments from the issue (https://github.com/huggingface/transformers/issues/16292).<|||||>@Tegzes Thank you for the effort to check the checkpoints.
We had an internal discussion, and @sgugger strongly suggests we should not have used those tiny checkpoints in the first place; we will try to figure out what changes to proceed with.
So unfortunately, we won't merge this PR. Sorry about this. It's possible to rework the Luke docstrings in the future after some decision is made regarding the doctest + missing checkpoints.
However, thank you again for the contribution!<|||||>@ydshieh Thank you for the response. The situation is much clearer now
<|||||>Closing, as we have to decide what to do with doctests when a real checkpoint is missing
transformers | 20,323 | closed | Enhance HfArgumentParser functionality and ease of use | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
I have enhanced the `HfArgumentParser` for my own work and decided to contribute my changes back to `transformers`.
The changes are:
- Allow aliases for command line flags to be created (by providing `aliases` inside the metadata dict)
- Enable specification of one or multiple config files via a customizable command line flag (e.g. `--cfg ./path/to/basic-config.txt --cfg ./path/to/special.txt`). For my own use, I set the customizable command line flag to `"--cfg"` by default. I omitted this here to keep 100% backwards compatibility but a sensible default might make sense as well.
- Enable use of the `Literal` type for easy ad-hoc specification of choices without the need to define an `Enum` (inspired by [typed-argument-parser](https://github.com/swansonk14/typed-argument-parser)):
```python
@dataclass
class Args:
literal_arg: Literal["the meaning of life", 42, "hitchhiking"] = 42
```
- Allow use of mixed types for `Enum` similar to `Literal`
- Created `HfArg` helper for a more concise syntax when creating data class fields:
```python
@dataclass
class Args:
field_arg: int = field(default=42, metadata={"aliases": ["--arg", "-a"], "help": "This is a bit verbose."})
hf_arg: int = HfArg(default=42, aliases=["--arg", "-a"], help="This is more concise.")
```
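To make the aliases and config-file flag concrete, here is a hedged usage sketch. The `HfArg` import path and the `args_file_flag` keyword are my assumptions about this PR's API, not guaranteed names:
```python
from dataclasses import dataclass

from transformers import HfArgumentParser
from transformers.hf_argparser import HfArg  # assumed import path for the new helper


@dataclass
class TrainArgs:
    lr: float = HfArg(default=1e-4, aliases=["-l"], help="Learning rate.")
    epochs: int = HfArg(default=3, aliases=["-e"], help="Number of training epochs.")


# assumed keyword for the customizable config-file flag described above
parser = HfArgumentParser(TrainArgs, args_file_flag="--cfg")
(args,) = parser.parse_args_into_dataclasses()
# e.g. `python train.py --cfg base.args --cfg experiment.args -l 3e-5`
```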
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 11-18-2022 21:15:02 | 11-18-2022 21:15:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,322 | closed | Cannot load a model that saved locally | ### System Info
```
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
model_name = "twmkn9/albert-base-v2-squad2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_dir="./models/"+model_name
_ = model.save_pretrained(model_dir)
_ = tokenizer.save_pretrained(model_dir)
model = AutoModelForQuestionAnswering.from_pretrained("model_dir")
tokenizer = AutoTokenizer.from_pretrained("model_dir")
```
### Expected behavior
I would expect that after successfully saving the model locally I'm able to load it. Unfortunately, I'm getting the following error:
```
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:213, in hf_raise_for_status(response, endpoint_name)
212 try:
--> 213 response.raise_for_status()
214 except HTTPError as e:
File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/model_dir/resolve/main/config.json
The above exception was the direct cause of the following exception:
RepositoryNotFoundError Traceback (most recent call last)
File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
407 try:
408 # Load from URL or cache if already cached
--> 409 resolved_file = hf_hub_download(
410 path_or_repo_id,
411 filename,
412 subfolder=None if len(subfolder) == 0 else subfolder,
413 revision=revision,
414 cache_dir=cache_dir,
...
434 f"'https://huggingface.co/{path_or_repo_id}' for available revisions."
435 )
OSError: model_dir is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
``` | 11-18-2022 20:58:52 | 11-18-2022 20:58:52 | You need to remove the quotes around model_dir:
```
model = AutoModelForQuestionAnswering.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
```<|||||>Thanks. Really embarrassing! |
transformers | 20,321 | closed | Safetensors offload | # What does this PR do?
This PR makes offload to disk more efficient for models that have a checkpoint in safetensors format: instead of re-saving everything as Numpy memory-mapped arrays, it directly uses the fact that we can access a tensor in the checkpoint without loading the rest.
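For context, the safetensors feature this relies on is lazy, per-tensor access, roughly like the sketch below (file and tensor names are placeholders):
```python
from safetensors import safe_open

# Only the requested tensor is read from disk; the rest of the checkpoint stays untouched.
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    weight = f.get_tensor("decoder.layers.0.self_attn.q_proj.weight")  # placeholder tensor name
```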
Goes with https://github.com/huggingface/accelerate/pull/873 | 11-18-2022 20:44:30 | 11-18-2022 20:44:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,320 | closed | Loading model OOMs with more GPUS | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.10.135-122.509.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.5
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
Hi all,
I am modifying an arbitrary HF text model for reinforcement learning reward modeling by appending a scalar output head and overriding the forward method. As part of this procedure I'd prefer to retain the flexibility of using any model without committing to a particular model class (e.g. GPT2). I have not found a way to inherit the PreTrainedModel class while also retaining this flexibility so the result is just a nn.Module class.
I find that when I try to `torch.load` a reward model fine-tuned with GPT-Neo-2.7B as the base in order to continue training it, I hit OOM with >6 GPUs (A100). This is counter-intuitive to me, as I would expect OOM issues in the opposite direction.
To train the reward model I am using HF's deepspeed integration. Tagging @stas00 as deepspeed integration point of contact.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
import pandas as pd
import torch
from torch.utils.data import Dataset, random_split
from transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel
import json
import deepspeed
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Model, PreTrainedModel, AutoModelForCausalLM, GPT2PreTrainedModel, GPT2Model
from transformers.modeling_outputs import ModelOutput
from torch import nn
from torch.nn import Identity
import torch.nn.functional as F
import torch
from dataclasses import dataclass
from typing import Optional, Tuple
class GPTRewardModel(nn.Module):
def __init__(self, config):
super().__init__()
model = AutoModelForCausalLM.from_pretrained(config)
self.config = model.config
# gpt-neo models have hidden_size instead of n_embd
self.config.n_embd = self.config.hidden_size if hasattr(self.config, "hidden_size") else self.config.n_embd
self.transformer = model.transformer
self.v_head = nn.Linear(self.config.n_embd, 1, bias=False)
def forward(
self,
input_ids=None,
past_key_values=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
mc_token_ids=None,
lm_labels=None,
mc_labels=None,
return_dict=False,
output_attentions=False,
output_hidden_states=False,
):
loss=None
transformer_outputs = self.transformer(
input_ids,
past_key_values=past_key_values,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
)
hidden_states = transformer_outputs[0]
rewards = self.v_head(hidden_states).squeeze(-1)
return rewards
model = GPTRewardModel("EleutherAI/gpt-neo-2.7B")
if torch.distributed.get_rank() == 0:
torch.save(model.state_dict(), "model_fp16.pt")
model.load_state_dict(torch.load('model_fp16.pt'))
```
```yaml
{
"train_batch_size": 8,
"fp16": {
"enabled": "auto",
"min_loss_scale": 1,
"loss_scale_window": 1000,
"hysteresis": 2,
"initial_scale_power": 32
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"offload_param": {
"device": "none"
},
"offload_optimizer": {
"device": "none"
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"contiguous_gradients": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": [
0.9,
0.999
],
"eps": 1e-08
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": "auto",
"warmup_num_steps": 100
}
}
}
```
To launch run `deepspeed --num_gpus=7 test_pretrained.py --deepspeed ds_config_gpt_2.json `
### Expected behavior
No OOM with more gpus | 11-18-2022 19:55:26 | 11-18-2022 19:55:26 | It's a bit hard to follow your Issue
Is loading working when you use <=6 gpus?
I can't quite see from your example of the model itself how you run it - I suppose some modified version of the HF Trainer example program? unless what you run is what you shared here.
What you have shown doesn't use Deepspeed, you're just using the `deepspeed` launcher and the args are ignored since you're not parsing them. So this program simply runs this script you have shown on each gpu separately - no deepspeed.
Also have a look at the size of the saved model - to ensure that it was saved in half-precision or full precision, which could be a 2x multiplier if you aren't doing it correctly. <|||||>To use the HF Deepspeed integration you need to adapt one of the examples or write a new program following the examples as the guide. https://github.com/huggingface/transformers/tree/main/examples/pytorch
The integration is inside the HF Trainer, so once you switch to using the HF Trainer you will get the DS integration.<|||||>Ah my apologies this is confusing. My training script is below. I'm only using the HF Trainer
```python
import os
import pandas as pd
import torch
from torch.utils.data import Dataset, random_split
from transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel
import json
from reward_model import GPTRewardModel
import deepspeed
class PairwiseTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
# forward pass
rewards = model(**inputs)
rewards_chunked = rewards.view((2, -1))
chosen_rewards = rewards_chunked[0]
rejected_rewards = rewards_chunked[1]
# compute pairwise loss
loss = -torch.log(torch.sigmoid(chosen_rewards - rejected_rewards)).mean()
return (loss, outputs) if return_outputs else loss
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer.pad_token = tokenizer.eos_token
training_args = TrainingArguments(output_dir='./results', num_train_epochs=4, logging_steps=100, save_strategy=IntervalStrategy.NO,
per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=100,
weight_decay=0.01, logging_dir='./logs', fp16=True, bf16=False, learning_rate=5e-6, deepspeed='./ds_config_gpt_2.json')
# gptneo trained in jaxh
model = GPTRewardModel("EleutherAI/gpt-neo-2.7B")
load_checkpoint = True
if load_checkpoint:
model.load_state_dict(torch.load('ckpts/single_context_pairwise/model_fp16.pt'))
#model.cuda()
data = []
dataset_name = "single_context_pairwise"
with open(dataset_name + ".jsonl", "r") as f:
lines = f.readlines()
for line in lines:
loaded_line = json.loads(line)
data.append(loaded_line)
#data.append(loaded_line["prompt"] + loaded_line["response"])
print("Len data: ", len(data))
max_length = 1024
#max_length = max([max(len(tokenizer.encode(text["chosen"])), len(tokenizer.encode(text["rejected"]))) for text in data])
print("Max length: {}".format(max_length))
class PairwiseDataset(Dataset):
def __init__(self, pairs, tokenizer, max_length):
self.chosen_input_ids = []
self.chosen_attn_masks = []
self.rejected_input_ids = []
self.rejected_attn_masks = []
for pair in pairs:
chosen, rejected = pair["chosen"], pair["rejected"]
chosen_encodings_dict = tokenizer('<|startoftext|>' + chosen + '<|endoftext|>', truncation=True,
max_length=max_length, padding="max_length", return_tensors="pt")
rejected_encodings_dict = tokenizer('<|startoftext|>' + rejected + '<|endoftext|>', truncation=True,
max_length=max_length, padding="max_length", return_tensors="pt")
self.chosen_input_ids.append(chosen_encodings_dict['input_ids'])
self.chosen_attn_masks.append(chosen_encodings_dict['attention_mask'])
self.rejected_input_ids.append(rejected_encodings_dict['input_ids'])
self.rejected_attn_masks.append(rejected_encodings_dict['attention_mask'])
def __len__(self):
return len(self.chosen_input_ids)
def __getitem__(self, idx):
return self.chosen_input_ids[idx], self.chosen_attn_masks[idx], self.rejected_input_ids[idx], self.rejected_attn_masks[idx]
def data_collator(data):
return {'input_ids': torch.stack([f[0] for f in data] + [f[2] for f in data]),
'attention_mask': torch.stack([f[1] for f in data] + [f[3] for f in data])}
dataset = PairwiseDataset(data, tokenizer, max_length=max_length)
train_size = int(0.9 * len(dataset))
train_dataset, val_dataset = random_split(dataset, [train_size, len(dataset) - train_size])
PairwiseTrainer(model=model, args=training_args, train_dataset=train_dataset,
eval_dataset=val_dataset, data_collator=data_collator).train()
if torch.distributed.get_rank() == 0:
print("SAVING MODEL")
dir_path = os.path.join("ckpts", dataset_name)
if not os.path.isdir(dir_path):
os.mkdir(dir_path)
torch.save(model.state_dict(), os.path.join(dir_path, "model_fp16_8.pt"))
```
Yes loading works <= 6 gpus.
Good point about saving in the wrong precision. I will check<|||||>much better.
Also try first with a normal model of the same size? If it works just fine then it'd point to something being added with your code.
If there is a problem with the normal model then it's a different story.
One other thing to consider is that if you resume from a saved deepspeed checkpoint, you can't change topology on the fly, as it'll try to resume using the same sharded layout the checkpoint was saved from. So if you were to change the topology on the existing DS checkpoint, it'd normally fail to resume.
So typically, when changing topology you need to extract the non-sharded weights and then start anew from those instead of resuming. Here, since it appears you use zero-stage2, it's trivial: it's just the saved weights file, as weights were never sharded in the first place (they are under stage3). So to test a topology change I'd move your `output_dir` elsewhere and simply pass the weights file as the `model_name_or_path`.
I am concerned that what I wrote above is confusing; I'm just trying to guess what might be going wrong for you.<|||||>Update: Indeed I was saving and loading fp16 weights when I meant to be saving/loading fp32. (Although I still do not understand why loading fp16 in the manner I do throws an OOM error).
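For anyone hitting the same thing, a sketch of what an explicit fp32 save can look like (not my exact code, and the path is just the placeholder used above):
```python
# cast the state dict to full precision on CPU before saving, so the checkpoint is fp32
state_dict = {name: param.float().cpu() for name, param in model.state_dict().items()}
torch.save(state_dict, "ckpts/single_context_pairwise/model_fp32.pt")
```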
In any case thanks for your help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,319 | closed | Pin TF 2.10 in docker file | # What does this PR do?
Pin TF 2.10 in docker file
We have to rebuild the push-ci image manually once this PR is approved. | 11-18-2022 16:47:23 | 11-18-2022 16:47:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thansk!<|||||>Merge now quickly to make push CI green |
transformers | 20,318 | closed | Fix flakey no_trainer test with seed | # What does this PR do?
This PR adds a seed param to the squad no trainer test to ensure reproducibility. Ran 5x times on single and multi GPU to ensure that the test will pass.
Fixes # (issue)
Closes https://github.com/huggingface/transformers/issues/19733
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 11-18-2022 16:08:54 | 11-18-2022 16:08:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,317 | closed | TF: future proof our keras imports | # What does this PR do?
On the release notes for [TF 2.11](https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0), we can read:
```
tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and
will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and
use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`.
```
On top of that, `hdf5_format` was moved from `keras.saving` to `keras.saving.legacy` in v2.11 (this one is not in the patch notes).
This PR prepares us for v2.11 and beyond, while keeping retrocompatibility through gated imports.
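As a rough illustration of what "gated" means here (a sketch, not the exact diff):
```python
import tensorflow as tf
from packaging.version import parse

if parse(tf.__version__) >= parse("2.11"):
    # new location after the move described above
    from keras.saving.legacy import hdf5_format
else:
    # legacy copy, scheduled for deletion in TF 2.12 per the release notes
    from tensorflow.python.keras.saving import hdf5_format
```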
Note: I've tested the v2.11 imports locally :) (the CI is running against v2.10) | 11-18-2022 15:13:21 | 11-18-2022 15:13:21 | Other than changing `save_attributes_to_hdf5_group` to `hdf5_format.save_attributes_to_hdf5_group`, no changes are needed now -- but they will be in v2.12, so might as well 🤷 <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,316 | closed | Pin TF 2.10.1 | # What does this PR do?
Pin TF 2.10.1 | 11-18-2022 13:40:27 | 11-18-2022 13:40:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20316). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,315 | closed | Pin TF 2.10.0 | # What does this PR do?
Pin TF 2.10.0 | 11-18-2022 13:34:37 | 11-18-2022 13:34:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20315). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,314 | closed | nested_detach of trainer fails in evaluation_loop for labels formatted for YOLOs model through feature extractor. | ### System Info
@NielsRogge, @sgugger
nested_detach of trainer fails in evaluation_loop for labels formatted for YOLOs model through feature extractor.
OD label has dict format like
```python
'labels': {
    'boxes': tensor([[0.4575, 0.5120, 0.6450, 0.2726]]),
    'class_labels': tensor([0]),
    'image_id': tensor([8081]),
    'area': tensor([172346.9688]),
    'iscrowd': tensor([0]),
    'orig_size': tensor([1086, 600]),
    'size': tensor([1332, 736])
}
```
nested_detach fails with an error that dict has no detach method while taking a backup of the labels.
### Who can help?
@NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Defined above.
### Expected behavior
Evaluation loop should run properly. | 11-18-2022 12:50:07 | 11-18-2022 12:50:07 | Thanks for reporting. There is nothing we can do without a code reproducer however. The `nested_detach` function is built to handle dictionaries of tensors, so if I pass it the example you give me above, I don't get an error.<|||||>Thanks for the reply. I can see the latest PR handles this scenario and it works with the new version.<|||||>@sgugger @vanpelt
Facing another issue while using COCO-formatted data (please check the actual sample above) with the Transformers Trainer in `eval_loop`.
It occurs while running the training job on a multi-GPU (DDP) setup.
However, the job succeeded on a single-GPU (non-DDP) setup. Attaching the call stack:
```
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 1501, in train
return inner_training_loop(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 1841, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2089, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2796, in evaluate
output = eval_loop(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2986, in evaluation_loop
labels = self._nested_gather(labels)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 3112, in _nested_gather
tensors = distributed_concat(tensors)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 193, in distributed_concat
output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 193, in <listcomp>
output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
AttributeError: 'numpy.ndarray' object has no attribute 'clone'
```
<|||||>There is nothing we can do without a small reproducible example.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,313 | closed | Pin TensorFlow as new release breaks `pip install transformers["all"]` | # What does this PR do?
This PR pins TensorFlow for now, as it seems there is no version of `tensorflow-text` compatible with it. | 11-18-2022 11:51:52 | 11-18-2022 11:51:52 | Merging as it unlocks the styling checks, will further fix the rest of the failing tests in a separate PR.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20313). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,312 | closed | Add eval and test for the example run_ner_no_trainer.py | ### Feature request
Hi, thanks for the great work. I just go through the example "transformers/examples/pytorch/token-classification/", I found that run_ner.py has eval and test. However, for run_ner_no_trainer.py, there is no eval or test. I just wonder if it is possible to add these components?
### Motivation
Enhance the example code
### Your contribution
I can help to test the code. | 11-18-2022 11:32:56 | 11-18-2022 11:32:56 | Evaluation is done at every epoch in this script, not sure what more you need. The `no_trainer` scripts are more barebone on purpose, to make it easy for users to customize them.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Evaluating NER performance with subword tokenization is complicated. It would be very instructive if you could include code that outputs one prediction for each token in the original corpus. This would have to address the behavior of tokenizers, such as splitting words with hyphens and not prepending them with ##, etc.
As I understand it, the script currently performs evaluation on the subword tokens and not the restored versions. |
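For what it's worth, a minimal sketch of one common way to do that with a fast tokenizer: keep the prediction of the first sub-token of each word via `word_ids()`. The variable names and placeholder predictions are mine, not from the example script:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
words = ["Hugging", "Face", "is", "based", "in", "New", "York", "-", "ish"]
encoding = tokenizer(words, is_split_into_words=True)

# pretend this is one predicted label id per sub-token (e.g. logits.argmax(-1).tolist())
token_predictions = list(range(len(encoding["input_ids"])))

word_predictions, previous_word_id = [], None
for token_index, word_id in enumerate(encoding.word_ids()):
    if word_id is None or word_id == previous_word_id:
        continue  # skip special tokens and continuation sub-tokens
    word_predictions.append(token_predictions[token_index])
    previous_word_id = word_id

assert len(word_predictions) == len(words)  # exactly one prediction per original word
```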
transformers | 20,311 | closed | Seq2Seq question answering example script not compatible with the latest Seq2SeqTrainer class. | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-4.18.0-372.16.1.el8.cyf.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.6
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
```
python question-answering/run_seq2seq_qa.py --model_name_or_path google/mt5-base --dataset_name squad --context_column context --question_column question --answer_column answers --do_train --do_eval --evaluation_strategy steps --eval_steps 100 --learning_rate 1e-4 --num_train_epochs 4 --max_seq_length 384 --doc_stride 128 --eval_accumulation_steps 1 --predict_with_generate --output_dir output --per_device_train_batch_size 14
```
During evaluation, the following error occurs:
```
Traceback (most recent call last):
File "/net/people/plgrid/plgapohl/mt5-classification/question-answering/run_seq2seq_qa.py", line 720, in <module>
main()
File "/net/people/plgrid/plgapohl/mt5-classification/question-answering/run_seq2seq_qa.py", line 656, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 1517, in train
return inner_training_loop(
File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 1842, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 2105, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/net/people/plgrid/plgapohl/mt5-classification/transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 59, in evaluate
output = eval_loop(
File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 2990, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer_seq2seq.py", line 174, in prediction_step
gen_kwargs = self._gen_kwargs.copy()
AttributeError: 'QuestionAnsweringSeq2SeqTrainer' object has no attribute '_gen_kwargs'
```
### Expected behavior
There should be no error in evaluation.
The reason for this error is that the `evaluate` method in the `QuestionAnsweringSeq2SeqTrainer`:
1. Does not accept `gen_kwargs`.
2. Therefore never assigns that (missing) argument to the instance variable `self._gen_kwargs`.
Yet `prediction_step` (line 174) in `Seq2SeqTrainer` expects that instance variable to be set,
because the `evaluate` method in `Seq2SeqTrainer` copies `gen_kwargs` to the instance variable.
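For clarity, a sketch of the kind of change that would make the subclass compatible again (not the exact upstream patch; the up-to-date example script already handles this):
```python
from transformers import Seq2SeqTrainer


class QuestionAnsweringSeq2SeqTrainer(Seq2SeqTrainer):
    def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None,
                 metric_key_prefix="eval", **gen_kwargs):
        # the two lines Seq2SeqTrainer.prediction_step now relies on
        gen_kwargs = gen_kwargs.copy()
        self._gen_kwargs = gen_kwargs
        ...  # rest of the original evaluate() body, unchanged
```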
BTW both `predict` and `evaluate` methods in `Seq2SeqTrainer` mention `max_length` and `num_beams` in the documentation of these methods, yet these arguments are no longer available. | 11-18-2022 11:18:08 | 11-18-2022 11:18:08 | To clarify the last comment (about the documentation) - these arguments in `predict` and `evaluate` are extracted from `gen_kwargs`, but they are not formal arguments of these methods.<|||||>OK. It seems, I have not updated the code in the examples directory. The latest implementation of the trainer is valid.
Closing the issue. |
transformers | 20,310 | closed | Sentence-transformer: No such file or directory error | ### System Info
Using the sentence-transformer widget leads to the following error at https://huggingface.co/NbAiLab/nb-sbert:
`[Errno 2] No such file or directory: '/data/NbAiLab_nb-sbert/1_Pooling/config.json'`
I have checked all the config files, and can not find any references to this.
I am able to load the model locally (from the HF repo), and do valid calculations. The only thing that does not work is the widget.
I found this on the discussion forum: https://discuss.huggingface.co/t/sentence-similarity-demo-not-working/8711
Any idea about what is happening here?
### Who can help?
@Narsil @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Open https://huggingface.co/NbAiLab/nb-sbert
2. In the widget, use Example 1 or fill in sentences
3. Press "compute"
### Expected behavior
Widget providing sentence similarities | 11-18-2022 10:23:43 | 11-18-2022 10:23:43 | Fixed now.
The culprit is either the way the api works or sentence-transformers, but `sentence-transformers` is not great at dealing with missing files (it looks only for 1 file and assumes the rest is there, leading to the failure you are seeing).
Since the API uses massive storage to handle all the models on the hub, there is periodical cleaning of unused files, and sometimes it happens that sentence-transformers doesn't access all the files at the same time, leading to only partial cleanup of sentence-transformers model files and to this error.
Hope you understand a bit better what happened.
For the fix we'll see about maybe adding something upstream, but can't make any promises.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,309 | closed | Enable BAAI/AltCLIP model to better handle Chinese prompts | ### Feature request
We already have our bilingual CLIP model here: https://huggingface.co/BAAI/AltCLIP (currently private)
Would you like to collaborate with us to merge it into transformers?
Thanks a lot!
### Motivation
We want to enable our bilingual AltCLIP model to handle Chinese prompts as well as English.
### Your contribution
We can collaborate with you to work on it. We are familiar with our model, but it is not clear to us how to merge it into transformers. | 11-18-2022 07:15:24 | 11-18-2022 07:15:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,308 | closed | RAG performance on WebQuestion dataset lower than expected | Hi, I recently fine-tuned the Rag model (based on Rag-token-base) in WebQuestion, but the EM score is only 28. The performance in the paper is 45. Do you have any saved models that can match the performance, or does there any tricks during the fine-tuning? | 11-17-2022 23:38:27 | 11-17-2022 23:38:27 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. |
transformers | 20,307 | closed | Remove double brackets | Fixes a small typo in the pipeline docs where there were two brackets. | 11-17-2022 19:40:39 | 11-17-2022 19:40:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ooops !
Thank you for the fix ! |
transformers | 20,306 | closed | Organize pipelines | As suggested by @NielsRogge, this PR organizes the individual task-specific pipelines according to their modality since the list is getting quite long now. | 11-17-2022 19:28:42 | 11-17-2022 19:28:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,305 | closed | Implement Roberta PreLayerNorm | # What does this PR do?
This PR implements Roberta PreLayerNorm as used in https://arxiv.org/abs/2202.08005, code provided at https://github.com/princeton-nlp/DinkyTrain. The model is equivariant to using the `--encoder-normalize-before` flag in fairseq. The addition of this model was discussed in https://github.com/huggingface/transformers/issues/19877.
Note that checkpoints provided by https://arxiv.org/abs/2202.08005, such as https://huggingface.co/princeton-nlp/efficient_mlm_m0.40 are not valid as they assume an hacked RoBERTa model provided in https://github.com/princeton-nlp/DinkyTrain. Additionally, the checkpoints contain extra weights which are never used due to [a bug](https://github.com/princeton-nlp/DinkyTrain/blob/main/huggingface/modeling_roberta_prelayernorm.py#L185) in their convention code. The conversion script provided in this PR fixes those issues. I'm not sure what the appropriate migration is for potentially fixing those checkpoints. For now, I have uploaded the corrected checkpoints to https://huggingface.co/andreasmadsen/efficient_mlm_m0.40.
Fixes #19877
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
/cc @sgugger who commented on the original issue https://github.com/huggingface/transformers/issues/19877
| 11-17-2022 18:46:38 | 11-17-2022 18:46:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ArthurZucker, thanks for looking at this. I fixed minor issues reported by the GitHub testing runners. However, the CircleCI job appears to have some configuration issues that I don't think I'm responsible for. Any insight would be appreciated :)<|||||>While we're waiting for @ArthurZucker to come back from vacation next week, could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>@sgugger Thanks for the tip! I refreshed the OAuth and rebased the branch.<|||||>@sgugger thanks for the help. I fixed the remaining issues, all tests appear to be passing now.<|||||>I fixed everything that was mentioned. However, I couldn't remember/find what command to run to update the documentation files. I assume a test will fail and tell me.<|||||>Lol, there appears to be some issue with a Linux distribution repository. I will make a force push later to refresh the CI. <|||||>Thanks for the fixes! Could you also resolve the merge conflicts! Will then ask for a final review from @sgugger <|||||>@ArthurZucker I rebased it. However, some of the README diffs contain some space ` ` changes. If you want me to fix that please let me know the `make` command to update the README files. <|||||>Normally `make fixup` or just `make style` should do the trick 😉 <|||||>Let's just remove the code to have the correct slice in the different tests and we can merge!<|||||>@ArthurZucker Thanks, I appreciate it.<|||||>@ArthurZucker As I mentioned before, similar comments were included in the RoBERTa testfile which is why I kept them. However, I have removed them now as you suggested.<|||||>Appears to be an unrelated error:
```
FAILED examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer - AssertionError: 0.7 not greater than or equal to 0.8
```
I will try to rebase.<|||||>Cool thanks for the addition! 🚀 |
transformers | 20,304 | closed | fix device issue | # What does this PR do?
When this block is run
```
if isinstance(target_sizes, List):
img_h = torch.Tensor([i[0] for i in target_sizes])
img_w = torch.Tensor([i[1] for i in target_sizes])
```
`scale_fct` (defined a few lines below) is always on `cpu`. We need to put it on the proper device. | 11-17-2022 17:48:16 | 11-17-2022 17:48:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@alaradirik Only for those post-processing with this block
```python
if isinstance(target_sizes, List):
img_h = torch.Tensor([i[0] for i in target_sizes])
img_w = torch.Tensor([i[1] for i in target_sizes])
```
have to be fixed.
If there is only `img_h, img_w = target_sizes.unbind(1)`, I think it is fine (will be on the correct device already).
Do you have a link to a place that you think need to be fixed but not done in this PR yet?<|||||>> @alaradirik Only for those post-processing with this block
>
> ```python
> if isinstance(target_sizes, List):
> img_h = torch.Tensor([i[0] for i in target_sizes])
> img_w = torch.Tensor([i[1] for i in target_sizes])
> ```
>
> have to be fixed.
@ydshieh that makes sense, and we are assuming that the input `target_sizes` is on the correct device, right?<|||||>
> @ydshieh that makes sense, and we are assuming that the input `target_sizes` is on the correct device, right?
Yes, as so far we don't have CI failure from that one, so we should be good.
transformers | 20,303 | closed | BLOOM past_key_values bug | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yers
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModel, MT5ForConditionalGeneration
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'bigscience/bloomz-7b1'
model = AutoModelForCausalLM.from_pretrained( model_name, device_map='auto')
print(model.config)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = 'this is a'
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(input_ids=encoded_input['input_ids'].cuda())
print([x.size() for x in output.past_key_values[0]])
```
### Expected behavior
As per the documentation and generation interface (https://huggingface.co/docs/transformers/main_classes/output) past_key_values should have tuples of 4-dimensional tensors with shape (batch_size, num_heads, sequence_length, embed_size_per_head).
The BLOOM model returns 3-dimensional tensors; for example, the first tuple in the example code has tensors of shape (torch.Size([32, 128, 3]), torch.Size([32, 3, 128])).
This means that the code crashes when you try to use some decoding algorithms, for example the new contrastive decoding algorithm. | 11-17-2022 16:45:37 | 11-17-2022 16:45:37 | Closing (fixed by https://github.com/huggingface/transformers/pull/20213) |
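For anyone pinned to a release from before that fix, a hedged sketch of reshaping the old layout back to the documented 4-D one, continuing the reproduction snippet above (head count read from the config):
```python
# old layout: key is (batch*heads, head_dim, seq), value is (batch*heads, seq, head_dim)
key, value = output.past_key_values[0]
batch_size = encoded_input["input_ids"].shape[0]
num_heads = model.config.n_head

key_4d = key.view(batch_size, num_heads, *key.shape[1:]).permute(0, 1, 3, 2)
value_4d = value.view(batch_size, num_heads, *value.shape[1:])
# both are now (batch_size, num_heads, sequence_length, embed_size_per_head)
```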
transformers | 20,302 | closed | remove two tokens that should not be suppressed | # What does this PR do?
As mentionned in #20123, two tokens `'` and `-` were supressed. This was probably from the [late commit](https://github.com/openai/whisper/commit/8cf36f3508c9acd341a45eb2364239a3d81458b9) that appeared after I started working on it. | 11-17-2022 16:13:04 | 11-17-2022 16:13:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,301 | closed | Fix blender bot missleading doc | # What does this PR do?
Fixes #19938, indeed the example is not correct. | 11-17-2022 15:54:51 | 11-17-2022 15:54:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,300 | closed | [bnb] Simplifies slow test | # What does this PR do?
This PR simplifies a slow test for `bnb`. In fact, you can easily retrieve the devices of the model using `set(model.hf_device_map.values())`
cc @sgugger
Thanks!
| 11-17-2022 14:42:29 | 11-17-2022 14:42:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,299 | closed | Bug in contrastive search with GPT2-decoders crossattention using batches | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Load the models and prepare example inputs:
```python
import torch
from transformers import EncoderDecoderModel, AutoTokenizer
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
"bert-base-uncased", "gpt2"
)
in_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
out_tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.config.decoder_start_token_id = out_tokenizer.bos_token_id
inputs = in_tokenizer(["This is a simple test","This is a simple test"], return_tensors="pt")
input_ids = inputs['input_ids']
```
Calling the generate method:
```python
model.generate(input_ids, top_k=4, penalty_alpha=0.6)
```
Leads to a RuntimeError in GPT's `crossattention` layer:
```
[...]
File "[...]/transformers/generation_utils.py", line 1511, in generate
return self.contrastive_search(
[...]
File "[..]/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward
outputs = block(
File "[..]/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "[..]/transformers/models/gpt2/modeling_gpt2.py", line 417, in forward
cross_attn_outputs = self.crossattention(
File "[...]/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "[...]/transformers/models/gpt2/modeling_gpt2.py", line 182, in _attn
attn_weights = torch.matmul(query, key.transpose(-1, -2))
RuntimeError: The size of tensor a (8) must match the size of tensor b (2) at non-singleton dimension 0
```
# Notes:
I have tried some combinations. The error appears only with plain GPT as the decoder, and only when using batches greater than 1.
Beam search works although it also sends batches of size batch_size*num_beams to the gpt model. Therefore the error is probably in the contrastive search.
To produce the same Error without calling generate:
```python
encoder_output = (torch.rand(2, 7, 768), None) # shape: (batch_size, seq_len, hidden_dim)
encoder_attention_mask = torch.ones(8, 7) # shape: (batch_size*top_k, seq_len)
decoder_input_ids = torch.tensor([[0],[0]]) # shape: (batch_size, 1)
model(decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_output, attention_mask=encoder_attention_mask)
```
### Expected behavior
Calling contrastive search method with batches should return batches of generated ids like beam search. | 11-17-2022 14:39:20 | 11-17-2022 14:39:20 | Hi @josh-oo
I believe that issue has been fixed in the latest version, as you can see in [this Colab notebook](https://colab.research.google.com/drive/1q25nXzjuvaYHMTtosH2VVzDuSk877Ta6).
Also, I believe that in the reproduction of the same error without generate you shouldn't have `batch_size*top_k`; instead the following should work:
```
encoder_output = (torch.rand(2, 7, 768), None) # shape: (batch_size, seq_len, hidden_dim)
encoder_attention_mask = torch.ones(2, 7) # shape: (batch_size, seq_len)
decoder_input_ids = torch.tensor([[0],[0]]) # shape: (batch_size, 1)
model(decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_output, attention_mask=encoder_attention_mask)
```<|||||>Hey @josh-oo 👋
Since the release of v4.24 we have standardized a few internals of contrastive search, which may have fixed existing bugs (like @kiansierra pointed out, thank you for pitching in!)<|||||>Hi @kiansierra and @gante , thanks for the quick reply. The new version seems to solve the problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,298 | closed | Deal with `ImageProcessor` | # What does this PR do?
As @amyeroberts add `ImageProcessor`, we need an update in tiny model creation script, otherwise, it won't be returned by `convert_processors`. | 11-17-2022 14:32:40 | 11-17-2022 14:32:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,297 | closed | Add Transformers textbox changes | null | 11-17-2022 14:16:47 | 11-17-2022 14:16:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20297). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,296 | closed | [Table Transformer] Add resources | # What does this PR do?
This PR adds resources for Table Transformer. | 11-17-2022 13:49:08 | 11-17-2022 13:49:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,295 | closed | Add GIT (GenerativeImage2Text) | # What does this PR do?
This PR implements GIT, short for GenerativeImage2Text. The model is a decoder-only Transformer conditioned on CLIP patch tokens + text tokens for tasks like image captioning and VQA.
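For context (this is not part of the original PR text), usage is expected to look roughly like the following sketch once the checkpoints are transferred; the checkpoint name below is an assumption:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests

# Assumed checkpoint name; a GIT processor bundles a CLIP-style image processor and a tokenizer.
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Image captioning: condition only on the image patch tokens and let the decoder generate text.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```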
To do:
- [x] add support for user-provided attention_mask
- [x] fix model_input_names of processor, see #20549
- [x] fix issue where GitProcessor seems to instantiate a `VideoMAEImageProcessor` by default
- [x] transfer checkpoints to `microsoft` | 11-17-2022 13:40:41 | 11-17-2022 13:40:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger PR is ready for a final review :) only remaining issue is that we need to update `from_pretrained` in `ProcessorMixin` to handle multiple image processors<|||||>Hi, is it possible to finetune git with transformers? |
transformers | 20,294 | closed | Fixing the doctests failures. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-17-2022 10:58:21 | 11-17-2022 10:58:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,293 | closed | Add slack report button for Example test | # What does this PR do?
Add slack report button for Example test.
- We just need to specify device in the artifact file names
| 11-17-2022 10:52:47 | 11-17-2022 10:52:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,292 | closed | Fix longformer onnx broken export | This PR fixes the ONNX export of longformer, that was **silently** broken for several cases:
* the export registers `padding_len > 0` as a constant equal to `True`, hence during inference in the dynamic case `padding_len == 0`, we would still go through the path `padding_len > 0` that would then contain negative indexing making some ONNX nodes fail (gather). This PR fixes the negative indexes.
* the export registers `hidden_states.size(1) == window_overlap * 2:` as a constant equal `True` during the export, hence using the converted ONNX model was failing when the `input_ids` length was strictly greater than `attention_window` (case where the `else` path should be taken). This PR removes the path `hidden_states.size(1) == window_overlap * 2:`, since the other path can handle this case as well.
Had to run `make fix-copies`, which modified the LED model as well.
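For reviewers unfamiliar with why this breaks silently, here is a generic illustration (not code from this PR) of how tracing freezes Python conditionals such as `padding_len > 0` at export time:
```python
import torch

class PadIfNeeded(torch.nn.Module):
    def forward(self, x):
        # During tracing, x.shape[1] is a plain int, so this condition is
        # evaluated once for the example input and the chosen branch is baked in.
        padding_len = (8 - x.shape[1] % 8) % 8
        if padding_len > 0:
            x = torch.nn.functional.pad(x, (0, padding_len))
        return x

model = PadIfNeeded()
example = torch.randn(1, 6)  # padding_len == 2, so the padding branch is recorded
torch.onnx.export(
    model, example, "pad.onnx",
    input_names=["x"], dynamic_axes={"x": {1: "seq_len"}},
)
# Running pad.onnx on an input whose length is already a multiple of 8 still goes
# through the padding branch recorded at export time, without any error being raised.
```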
@michaelbenayoun @lewisbails Where should I add tests for this? Optimum?
## Before submitting
- [ ] Did you write any new necessary tests? | 11-17-2022 10:46:01 | 11-17-2022 10:46:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, I would add the support for Longformer in `optimum`, and the tests there as well.<|||||>Thanks for iterating @fxmarty !
Since @ydshieh has also approved, gently pinging @sgugger for final approval :)<|||||>Hello @sgugger @lewtun @ ydshieh @fxmarty : I am stuck with the same issue. Thanks for the fix. This may be a noob question. But wanted to check when would this change be reflected in the PyPi package ? Is it during the next release ? If so do we know when would that be happening ?<|||||>@adithya1111 Somewhere in the next week if I remember correctly.<|||||>Hello @adithya1111 yes this will be in the next release, feel free to try the main branch meanwhile. In any case, I would advise you to be very careful with the exported ONNX model, and to check that the outputs are on par with PyTorch for your target sequence length. You can possibly edit the ONNX export code if you want to use the exported ONNX with a different sequence length, as explained in my messages above.
For reference: https://github.com/huggingface/optimum/issues/503<|||||>Thanks a lot for the comments @fxmarty @ydshieh . Another question. I used the default parameters when training. So would that mean the Global Attention Mask is None ? I see that we are now setting the global_attention_mask[:, ::2] = 1 . I assume here we are making every second token global. Could this lead to a discrepancy ?
My original models results are `logits=tensor([[[ 0.6209, 0.0719, 0.1107, -0.5316],
[ 3.0321, -0.2787, -0.6460, -2.5359],
[ 2.6904, 0.1169, -0.7495, -2.8346],
[ 0.6474, 0.0761, 0.1041, -0.5438]]]`
And my ONNX converted predictions are `[array([[[ 0.49600145, 0.08062335, 0.12902021, -0.4010917 ],
[ 3.0400352 , -0.34643874, -0.6276542 , -2.444679 ],
[ 2.158992 , 0.02124629, -0.5462518 , -2.094074 ],
[ 0.6290194 , 0.06919068, 0.10753635, -0.5197539 ]]],
dtype=float32)]`
Its close but there are some discrepancies . PFB my config file for my model
`{
"_name_or_path": "/opt/ml/input/data/model-base",
"architectures": [
"LongformerForTokenClassification"
],
"attention_mode": "longformer",
"attention_probs_dropout_prob": 0.1,
"attention_window": [
512,
512,
512,
512,
512,
512,
512,
512,
512,
512,
512,
512
],
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3"
},
"ignore_attention_mask": false,
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 4098,
"model_type": "longformer",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"sep_token_id": 2,
"torch_dtype": "float32",
"transformers_version": "4.9.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
`<|||||>@adithya1111 Could you open an issue in https://github.com/huggingface/optimum/issues with a reproducible code? ONNX export through `transformers.onnx` will eventually depend on `optimum.exporters` so we can track the issue there.
Thanks!<|||||>Created a new issue. Thanks<|||||>@sgugger @fxmarty @ydshieh @lewtun : Does the latest release fix this issue ? <|||||>With the release you will have no error at the export & running the ONNX model. However, following the discussion above (see the closed comments), and as well in https://github.com/huggingface/transformers/issues/20275 , https://github.com/huggingface/optimum/issues/503 , https://github.com/huggingface/optimum/issues/505 , you can expect to have non-meaningful output running the ONNX model with sensibly different sequence length than the example provided to `torch.onnx.export` during the conversion.
This is WIP to add options to customize more the export, refer to https://github.com/huggingface/optimum/pull/522 |
transformers | 20,291 | closed | [wip: testing doc-builder] | null | 11-17-2022 10:44:45 | 11-17-2022 10:44:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,290 | closed | Memory issue with OPT models when given long input sequences | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-4.18.0-305.45.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @patrickvonplaten @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When the input sequence is long, `opt-30b` and `opt-66b` gave the following memory error:
```
Traceback (most recent call last):
File "/home/x/script.py", line 189, in <module>
main()
File "/home/x/script.py", line 186, in main
compute_logprobs(int(sys.argv[3]), int(sys.argv[4]), sys.argv[2], model_name=sys.argv[1])
File "/home/x/script.py", line 170, in compute_logprobs
logits = model(input_ids).logits
File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 929, in forward
outputs = self.model.decoder(
File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 693, in forward
layer_outputs = decoder_layer(
File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 346, in forward
hidden_states = self.fc1(hidden_states)
File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 256, in forward
out = bnb.matmul(x, self.weight, state=self.state)
File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 410, in matmul
return MatMul8bitLt.apply(A, B, out, state)
File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 328, in forward
out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
File "/home/nickhu/hf/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1332, in igemmlt
out, Sout = get_transform_buffer(
File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/functional.py", line 294, in get_transform_buffer
return init_func((rows, cols), dtype=dtype, device=device), state
RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 44.37 GiB total capacity; 42.44 GiB already allocated; 40.50 MiB free; 43.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
To reproduce, simply call `model(input)` on some long input sequence.
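(Not part of the original report.) One thing worth checking when only scoring sequences is that gradient tracking is disabled, since keeping activations alive for backprop is a common source of this kind of OOM; a minimal sketch, assuming `model` and `input_ids` from the script above:
```python
import torch

# Pure inference: no autograd graph is needed, so wrap the forward pass in no_grad.
with torch.no_grad():
    logits = model(input_ids).logits
log_probs = torch.log_softmax(logits.float(), dim=-1)
```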
Additional information:
- My total GPU memory: `2*42 GB` for `opt-30b` and `4*42 GB` for `opt-66b`.
- Also, the same inputs did not cause errors in the smaller opt models.
### Expected behavior
No `CUDA out of memory` error.
How can I fix this? Any help is appreciated! | 11-17-2022 06:41:40 | 11-17-2022 06:41:40 | ```
compute_logprobs(int(sys.argv[3]), int(sys.argv[4]), sys.argv[2], model_name=sys.argv[1])
```
What are those values, and what do they correspond to ?<|||||>Hi @xiaoyangnickhu 👋 This seems business as usual for large models and a forum question for the PyTorch experts.
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>My bad... Thanks! |
transformers | 20,289 | closed | set the default cache_enable to True, aligned with the default value … | …in pytorch cpu/cuda amp autocast
Signed-off-by: Wang, Yi A <[email protected]>
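For reference (a sketch of the PyTorch behaviour this aligns with, not the trainer diff itself), `torch.autocast` already defaults to `cache_enabled=True` for CPU/CUDA AMP:
```python
import torch

# Passing cache_enabled=True explicitly matches the PyTorch default for autocast.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16, cache_enabled=True):
    x = torch.randn(4, 4)
    y = x @ x  # runs under bf16 autocast with the weight cast cache enabled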
- trainer: @sgugger
| 11-17-2022 02:17:12 | 11-17-2022 02:17:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,288 | closed | Added FAN Models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17234
Implements the FAN models described in this [paper](https://arxiv.org/pdf/2204.12451.pdf) and available in the following [github repo](https://github.com/NVlabs/FAN). Additionally, that repo has some of the weights available, as described in its README file.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Tasks Completed
From those provided in the [add new model](https://huggingface.co/docs/transformers/add_new_model) contribute section
- [X] (Optional) Understood the model’s theoretical aspects
- [X] Prepared 🤗 Transformers dev environment
- [X] Set up debugging environment of the original repository
- [X] Created script that successfully runs the forward() pass using the original repository and checkpoint (Available in Demo Colab)
- [X] Successfully added the model skeleton to 🤗 Transformers
- [X] Successfully converted original checkpoint to 🤗 Transformers checkpoint
- [X] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint
- [X] Finished model tests in 🤗 Transformers
- [X] Successfully added tokenizer in 🤗 Transformers
- [X] Run end-to-end integration tests
- [X] Finished docs
- [X] Uploaded model weights to the Hub
- [X] Submitted the pull request
- [X] (Optional) Added a demo notebook [](https://colab.research.google.com/drive/10aCWtEPpRD2X251EiCNemjhvxmsQhGCl)
## Model files migration
Ideally I believe the different model files for these architectures should be hosted under the [NVIDIA](https://huggingface.co/nvidia) organization, instead of my own [personal space](https://huggingface.co/ksmcg).
## Thank you note
It has been a very enriching experience to migrate these models. I've learned a lot while developing under the constraints imposed by this library that provide such a great consistent user experience, from the tests, PreTrainedModel class, the cookiecuter template. | 11-17-2022 01:17:51 | 11-17-2022 01:17:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @NielsRogge , thanks for your review.
I believe the currently failing tests are not due to this PR (the failure references a file not modified by this PR, and the torch tests passed in a previous run).
Please let me know if there any additional tasks to complete |
transformers | 20,287 | closed | Flan-T5-XXL generates non-sensical text when load_in_8bit=True | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the English to German example:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
produces expected output:
```
<pad> Wie alt sind Sie?</s>
```
Loading in 8-bit and running:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
results in output more nonsensical than I'd expect:
```
<pad> How old are</s>
```
### Expected behavior
I expected close or approximate output between the original output and the 8-bit output. This was the provided INT8 code snippet so expected output to be sensible for task. | 11-16-2022 20:01:26 | 11-16-2022 20:01:26 | cc @younesbelkada <|||||>Hi @jimmy-marmalade
Thanks a lot for raising this point. Note that `int8` quantization is done in 2 stages: it first converts the model to `float16`, then uses the `fp16` model to quantize it to `8bit`. If you try to load and run the model in `fp16` you also get gibberish output:
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("./flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("./flan-t5-xxl", torch_dtype=torch.float16, device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=512)
print(tokenizer.decode(outputs[0]))
>>> <pad> How old are die Sie? Ihr Mutter?tat?ztlich, rezult Interesse: restriction = = = = ...
```
I suspect there is something wrong with `bf16` to `fp16` conversion for this specific model and for `xl` model too.
@stas00 do you have any intuition on why the int8 conversion (so the underlying fp16 conversion) worked well for bloom-176 and not here? 🙏
Thanks! <|||||>Did the bf16 model weights have large values resulting in overflow when used under fp16? bf16 to fp16 conversion is a huge issue with almost every model we have seen - .e.g all large t5 and derivative models. Seeing that this is a T5 derivative it's almost certainly related, you can see the rich discussion here: https://github.com/huggingface/transformers/pull/10956 and possible workarounds to try.
Probably talk to @TimDettmers and ask if perhaps there could be bf16/int8 variation for bf16 models in BNB?
<|||||>T5 doesn't work in FP16 because the softmaxes in the attention layers are not upcast to float32. @younesbelkada if you remember the fixes done in BLOOM/OPT I suspect similar ones would fix inference in FP16 for T5 :-)<|||||>but why are we even going through FP16 to do quanitization of a bf16 model? why can't this be done directly in the original dtype?
note: interestingly deepspeed-inference also converts the model to fp16 to do quantization. <|||||>Thank you very much for all the pointers
> T5 doesn't work in FP16 because the softmaxes in the attention layers are not upcast to float32. @younesbelkada if you remember the fixes done in BLOOM/OPT I suspect similar ones would fix inference in FP16 for T5 :-)
I think that T5 [already upcasts the softmax to `fp32`](https://github.com/huggingface/transformers/blob/6c2be845dd384829610897b63e6dcbe911e9e811/src/transformers/models/t5/modeling_t5.py#L539). I suspected that the overflow might come from the addition to positional bias in the line before but did not helped. I also tried to upcast the lm_logits and the hidden_states before the lm_head to `fp32` but did not helped too.
In addition, I also printed the hidden states at every stage, checking whether it contains any `nan` or `inf`. This was always set to `False`.
I will investigate more by reading @stas00's PR in detail.
I think the most efficient solution is to try to see where the overflow comes, and force the operation to be done in `fp32` in this operation. <|||||>@stas00 using your tool `detect_overflow` here, and I am flagging an overflow starting from a certain layer (from layer `6` - because of `inf`). (btw I also tried the `autocast` solution but seems to not work for inference, I have seen on the issue that it might work for some situations for inference, but sadly not in this case :/ )
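For anyone reproducing this, the related activation-overflow detector ships with transformers; a minimal sketch, assuming `model` and `input_ids` are already loaded in fp16:
```python
from transformers.debug_utils import DebugUnderflowOverflow

# Hooks every module and reports the first frames that produce inf/nan activations.
debug_overflow = DebugUnderflowOverflow(model)
_ = model.generate(input_ids, max_length=20)
```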
Then the hidden states gets constantly overflowed and clamped at each layer. I guess that the accumulation of clamping at various stages introduces these errors.
I am wondering if we can find a proper workaround for that, is clamping the right solution? My intuition is that the model gets confused at some point since clamping `n` times the hidden states will yield to completely different results than the `bf16` hidden states.
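For reference (a paraphrase, see `modeling_t5.py` for the exact code), the clamping being discussed is roughly this pattern, applied to the layer outputs when running in float16:
```python
import torch

hidden_states = torch.tensor([[70000.0, -70000.0]]).to(torch.float16)  # overflows to inf
if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
    clamp_value = torch.finfo(hidden_states.dtype).max - 1000
    hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
```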
Also, let's say you have flagged the layer responsible of the overflow. You can then do the operation there in `fp32`. But the under/overflow will be still present since you need to cast the results back in `fp16` right? <|||||>There was one more scaling hack posted here if you want to try it. https://github.com/huggingface/transformers/issues/14189#issuecomment-961571628
In general bf16-pretrained models ought to run under bf16 or fp32 regimes, as fp16 and bf16 are very incompatible dtypes. It's not as bad if you were to go from fp16 to bf16 as you'd only lose precision, and it'd only impact quality, but not the other way around (overflow).
So we should raise this question with the BNB and perhaps deepspeed-inference developers, at least to have an understanding of why both require fp16 and won't support bf16.
@TimDettmers, @RezaYazdaniAminabadi - is there a way for your libraries to work with bf16 dtype, so that the bf16-pretrained models won't overflow during inference? Thank you.
<|||||>Thanks everyone for the discussion and work!
Are there any possible workarounds that I could implement as an end user?<|||||>> There was one more scaling hack posted here if you want to try it. [#14189 (comment)](https://github.com/huggingface/transformers/issues/14189#issuecomment-961571628)
>
> In general bf16-pretrained models ought to run under bf16 or fp32 regimes, as fp16 and bf16 are very incompatible dtypes. It's not as bad if you were to go from fp16 to bf16 as you'd only lose precision, and it'd only impact quality, but not the other way around (overflow).
>
> So we should raise this question with the BNB and perhaps deepspeed-inference developers, at least to have an understanding of why both require fp16 and won't support bf16.
>
> @TimDettmers, @RezaYazdaniAminabadi - is there a way for your libraries to work with bf16 dtype, so that the bf16-pretrained models won't overflow during inference? Thank you.
The main reason the weights are converted to half on DeepSpeed-side is that the kernels are only working with fp16 values. However, we are looking into some of these limitations and will resolve them soon.
The other part is that we can quantize from the original bf16 checkpoint and resolve some of the overflow issue due to different data-precision of fp16 vs bf16.
<|||||>Wonderful!
So it looks like @jimmy-marmalade can try out your solution (Deepspeed-Inference) once you have something working in bf16/int8, Reza, and hopefully this will unblock them.<|||||>Thanks @RezaYazdaniAminabadi is there an issue I can watch to keep track of progress.<|||||>@younesbelkada (cc @thomwolf who gave the inspiration for the workaround; cc @stas00 , @sgugger ):
@navjotts and myself had a look at this and found a workaround.
As already concluded in ticket above, the bf16->fp16 conversion is generally incompatible. We ran the `detect_overflow` as well (great tool, thanks @stas00 !) and found generally we got overflows in the dense part of layer 7 in the encoder, specifically in the `wo` operation of `T5DenseGatedActDense`.
We implemented a hacky workaround to keep `wo` in fp32, cast its input to fp32 and then leave it in fp32 until after the `T5LayerNorm`. At the end of the norm we cast back to fp16. All fp16 linear modules (i.e. everything except the `wo`) can then use the 8-bit quantization. The cast back to fp16 is not lossless ofcourse, but we've generally found it to perform equivalent. We haven't spotted any difference in output so far.
We made 3 changes:
1. `T5DenseGatedActDense.forward`:
```py
hidden_gelu = self.act(self.wi_0(hidden_states))
hidden_linear = self.wi_1(hidden_states)
hidden_states = hidden_gelu * hidden_linear
hidden_states = self.dropout(hidden_states)
hidden_states = self.wo(
hidden_states.to(torch.float32)
) # PATCH: Cast to float32, as self.wo is casted to float32 w/ patch 3 below
return hidden_states
```
2. `T5LayerNorm.forward`
```py
variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
# convert into half-precision if necessary
if self.weight.dtype in [torch.float16, torch.bfloat16]:
hidden_states = hidden_states.to(self.weight.dtype)
return (self.weight * hidden_states).to(
torch.float16
) # PATCH: Cast back to float16 for compatibility w/ next layer. This is not lossless.
```
3. `_load_state_dict_into_meta_model`
```py
if param_name.endswith(
'.wo.weight'
): # PATCH: For the wo weights of the dense layers, keep them in float32, others will get converted to float16 as this is a requirement for the LLM 8-bit quantization.
param = param.to(torch.float32) # PATCH
else: # PATCH
# We convert floating dtypes to the `dtype` passed.We want to keep the buffers/params
# in int/uint/bool and not cast them.
if dtype is not None and torch.is_floating_point(param):
param = param.to(dtype)
# For compatibility with PyTorch which loads float16/bfloat16 weights in fp32
if is_safetensors and dtype is None and torch.is_floating_point(param):
param = param.to(torch.float32)
```
and then when instantiating the model from pretrained we set:
```py
load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']
```
I'm not sure what a good way would be to get this into `transformers` though / if that would even be a good idea given this is quite hacky, curious for your thoughts. For patch 3, if we could add an option to specify an exclude_list for the conversion to float16, that would remove the need for that patch. Then the layers can be adapted at model-level.<|||||>Hi @larsmennen (and cc @thomwolf )
Thanks for this great investigation and amazing fix - I also believe that this approach is the best fix so far for this problem. Thanks so much for working on this as it will enable using these modeis in a more accessible way.
I see 2 workaround for that
1- The fix should be applied for 8bit models only, in this case, I think that we can perfectly have an additional flag `load_in_8bit_fp32_modules = ["wo"]` and apply a patch similar to your point `3.`. For the points `2` and `1` we can probably have a hot fix as you suggested but I would wait for @sgugger and/or @stas00 to hear what they think about that
2- We add an additional flag, regardless if the model is loaded in 8bit or no, since this could fix the issue with T5-fp16 too, with a flag similar than above `keep_in_fp32_modules=["wo"]` that is activated only for half precision models (and 8bit models too). But again we'll probably need the hotfixes from `1`&`2`.
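(A sketch of what option 2 might look like from the user side; `keep_in_fp32_modules` is the flag proposed in this thread and is hypothetical, not an existing `from_pretrained` argument at the time of writing.)
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-t5-xxl",
    device_map="auto",
    load_in_8bit=True,
    keep_in_fp32_modules=["wo"],  # HYPOTHETICAL flag from the discussion above
)
```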
Few questions:
- When using `load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']` note that with your fix `decoder` and `lm_head` will be kept to their native `dtype`. In the case you are calling `load_in_8bit=True`, we first cast all the weights in fp16 therefore `load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']` is probably not needed as `wo` is "force-casted" in fp32 and `lm_head` is always detected to be kept in its original precision. Can you double check that? 🙏
- Does the same fix applies for T5-XL ?
<|||||>1. and 2. are easy patches to integrate. I don't anticipate any difficulty to have merged as is. For 3 we need a solution that is not too ad-hoc in the code of modeling_utils. I like the proposed 2- in @younesbelkada comment, since I also believe this should also fix the T5 evaluation problem in FP16.
Thanks a lot for the clear explanations and the deep dive!<|||||>what about users on bf16 hardware that don't want to waste memory casting to fp32 since the modeling code works just fine when using bf16 mixed precision?
I think if this is done it should only be done for fp16 mixed precision.
--------------
Also please note that we automatically use `apex`'s faster layernorm when it's found, so `T5LayerNorm.forward` workaround will apply only if it's not found. i.e. you may want to disable the swap-in of the faster version.<|||||>> I think if this is done it should only be done for fp16 mixed precision.
Yes indeed that's a very good point! (cc @younesbelkada) <|||||>Also what about users pre-training their own model in fp16, the proposed change will negatively impact them as well, as the current modeling code should work just fine for them.
IMHO, the safest approach would be to leave the current code alone and have a flag that activates workaround solutions for those who need them.
Additionally, I remind you that there were other workarounds proposed that don't use any additional memory and use a scaling factor instead that moves the weights into a safe-to-fp16 numerical range. https://github.com/huggingface/transformers/pull/10956#issuecomment-961030728<|||||>> Also what about users pre-training their own model in fp16, the proposed change will negatively impact them as well, as the current modeling code should work just fine for them.
The main goal of having T5 in the library is to support the corresponding pretrained models as best as possible. All of T5, FlanT5 and T0 checkpoints have been trained in bfloat16, so changing the code to support fp16 inference is for the better for the larger community. If this slows down an edge case, the user can just adapt the line of code in the modeling file to suit their need (that's why the library is not modular and with a strict one file per model policy after all :-) ).<|||||>One could argue that this breaks backward compatibility since suddenly the model requires more memory to operate than when it was originally released.
If the belief is that the majority X% of users will benefit from such BC breaking change I think it'd at least be nice to have a flag for the small Y% to be able to retain what they have been using w/o needing to manually change the code.
Might be much simpler to clone this to `models/t5-bf162fp16` apply all the proposed patches and thus have 2 versions - one for normal use and one for the originally unintended bf162fp16 use.<|||||>Thanks for the quick replies all!
@younesbelkada to answer your questions:
> When using load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo'] note that with your fix decoder and lm_head will be kept to their native dtype. In the case you are calling load_in_8bit=True, we first cast all the weights in fp16 therefore load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo'] is probably not needed as wo is "force-casted" in fp32 and lm_head is always detected to be kept in its original precision. Can you double check that? 🙏
Regarding need for `wo`: if we don't pass it in, then it is not ignored from the conversion of the linear layers to 8bit, and an autocast is applied:
```
/root/venv/lib/python3.7/site-packages/bitsandbytes/autograd/_functions.py:231: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
```
and then resulting model:
```
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedActDense(
(wi_0): Linear8bitLt(in_features=4096, out_features=10240, bias=False)
(wi_1): Linear8bitLt(in_features=4096, out_features=10240, bias=False)
--> (wo): Linear8bitLt(in_features=10240, out_features=4096, bias=False) <--
(dropout): Dropout(p=0.1, inplace=False)
(act): NewGELUActivation()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
```
So that one is required.
For `decoder` and `lm_head`, I included those because of this line:
https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/src/transformers/modeling_utils.py#L2319
For this model `get_keys_to_not_convert` returns `['decoder', 'lm_head']`. So I didn't want to change this behavior.
Note that the `decoder` doesn't actually seem to do anything, because in `replace_8bit_linear`:
https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/src/transformers/utils/bitsandbytes.py#L113-L126
this actually only checks the last part of the module name (e.g. `wo`), but `decoder` itself is not a linear layer. Not sure if this behavior is intended, or is this a separate bug that `replace_8bit_linear` should check the full module name?
> Does the same fix applies for T5-XL ?
Yes. I ran the same test internally; can confirm fp32 quality == 8bit-with-fix quality != 8bit-without-fix quality for XL.
Thanks!<|||||>hi @larsmennen
Thanks so much for your detailed answer, everything is clear on my side now. Regarding your point about `get_keys_not_convert` it is probably a bug, let's fix this in a separate PR later .
#20683 is in a good shape IMO. Can you checkout from this branch, apply the patches mentioned in 1&2 and let us know if it works as expected? 🙏 <|||||>@younesbelkada moving back to this thread:
> Would you mind opening a PR addressing your suggestions (patch 1 & 2 from the discussion at https://github.com/huggingface/transformers/issues/20287 )?
Yes, happy to. will get that in today or tomorrow. |
transformers | 20,286 | closed | Fix no trainer summarization script test failure | # What does this PR do?
This PR solves the test failure in the nightly CI for the summarization no trainer script. Also potentially https://github.com/huggingface/transformers/issues/18189 too, just waiting for verification first otherwise a followup PR will be made
`gather_for_metrics` was being called twice instead of once, along with a syntax error, which is not good! :)
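For reference, a minimal sketch of the intended single-gather pattern (variable names follow the example script; this is not the literal diff):
```python
# Gather exactly once per evaluation step, then hand the gathered tensors to the metric.
generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels))
labels = labels.cpu().numpy()
labels[labels == -100] = tokenizer.pad_token_id  # make padded label ids decodable
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
metric.add_batch(predictions=decoded_preds, references=decoded_labels)
```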
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 11-16-2022 19:39:19 | 11-16-2022 19:39:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20286). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,285 | closed | Transformer cannot tokenize Chinese texts into correct words | ### System Info
Linux Debian,
bert-base-multilingual-cased,
bert-base-chinese
### Who can help?
@SaulLu
@LysandreJik
@ArthurZucker
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from minicons import scorer
import torch
from torch.utils.data import DataLoader
import numpy as np
import json
model = scorer.MaskedLMScorer('bert-base-multilingual-uncased', 'cpu')
sentences = ["我 昨天 下午 我 就 是 直接 买 了 一份 那个 凉菜", "他们 那边 都 是 小 小葱 包括 重庆 那边"]
model.token_score(sentences, surprisal = True, base_two = True)
### Expected behavior
Hi everyone,
A package called "minicons" (https://github.com/kanishkamisra/minicons) can extract word representations from contextualized word embeddings, and compute word probability in context. The package is based on Transformer language models. The transformer models, like "bert-base-multilingual-uncased" could be introduced in "minicons" help compute taken surprisal or probabilities for different languages if we have a text of this language as an input. This can be applied in English, German, Spanish and some alphabet-based languages.
However, it doesn't seem to work for Chinese. As you know, Chinese is usually pre-processed with word segmentation (words separated by spaces). Despite this, even if the input Chinese text is already segmented into words (two-character, three-character or longer combinations), the output still gives the surprisal of each individual Chinese character in the text rather than of Chinese words. This is very different from the outputs for English, German, Spanish and other alphabet-based languages.
I have checked the code of the package "minicons". The package uses "self.tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = True)" to tokenize the input text in a given language. I think that this tokenization method is fine because some Chinese transformer language models use a similar one, like "bert-base-chinese" (https://huggingface.co/bert-base-chinese?text=%E7%94%9F%E6%B4%BB%E7%9A%84%E7%9C%9F%E8%B0%9B%E6%98%AF%5BMASK%5D%E3%80%82). The following is an example.
```python
from minicons import scorer
import torch
from torch.utils.data import DataLoader
import numpy as np
import json
```
```python
model = scorer.MaskedLMScorer('bert-base-chinese', 'cpu')
```
```python
sentences = ["他边 都 是 小 小葱 包括 重庆 那边"]
```
```python
model.token_score(sentences, surprisal = True, base_two = True)
```
```python
# This is the real output.
[[('他', 12.03427505493164),
('边', 15.405355453491211),
('都', 2.9198732376098633),
('是', 0.4283633828163147),
('小', 4.383847236633301),
('小', 8.884271621704102),
('葱', 19.068641662597656),
('包', 0.0020940606482326984),
('括', 8.549302101135254),
('重', 0.4292893409729004),
('庆', 3.0496609210968018),
('那', 3.522364377975464),
('边', 13.743260383605957)]]
```
```
The following is the desirable result.
[[('他边', 12.03427505493164),
('都', 2.9198732376098633),
('是', 0.4283633828163147),
('小', 4.383847236633301),
('小葱', 19.068641662597656),
('包括', 8.549302101135254),
('重庆', 3.0496609210968018),
('那边', 13.743260383605957)]]
```
This means that the tokenizer does not seem to tokenize the input text properly. What I want to get is the probability of each word ("小葱", "那边", etc.) rather than of a single character ("小", "葱", "那", "边", etc.).
It will be greatly appreciated if you could kindly solve this problem.
Best,
Kevin
``` | 11-16-2022 19:15:55 | 11-16-2022 19:15:55 | cc @ArthurZucker <|||||>> cc @ArthurZucker
Many thanks!<|||||>Hey!
For starters, would be great if you could have provided the example with `transformers` only code because we are not really aware of what might be going on inside `minicons`. Does it use the `fast` or `slow` tokenizer? What arguments are modified behind it etc..
My question here would be "What is the expected behavior of the `bert-base-chinese` model. The problem might just come from the tokenizer that is used, it might not correspond to your needs. In this case, it is expected that the characters are tokenized one by one.
However if you use `tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese', use_fast = False, tokenize_chinese_chars =False)`, you should get the expected results.
```python
>>> tokenizer.batch_decode(tokenizer(sentences).input_ids)
['[CLS] 他边 都 是 小 小葱 包括 重庆 那边 [SEP]']
```
You should either create a copy model with that parameter set or see with `minicons`<|||||>Many thanks, Arthur! @ArthurZucker
Even after changing the parameters in tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese', use_fast = False, tokenize_chinese_chars =False), the result is not desirable. The word segments are not correct in the output.
```python
>>>import scorer
>>> model = scorer.MaskedLMScorer('bert-base-chinese', 'cpu')
>>> model.token_score(sentences, surprisal = True, base_two = True)
[[('他', 8.213873863220215), ('##边', 22.977821350097656), ('##都', 22.392602920532227), ('##是', 21.245899200439453), ('##小', 21.8975830078125), ('##小', 21.818450927734375), ('##葱', 21.52490997314453), ('##包', 22.13797950744629), ('##括', 22.856788635253906), ('##重', 22.35895347595215), ('##庆', 21.193708419799805), ('##那', 21.345863342285156), ('##边', 27.09543800354004)]]
```
The following is the code for "scorer.py". I am not sure what caused the problem on word tokenizations.
Thanks again!
```python
from logging import log
from typing import Iterable, Union, List, Dict, Optional, Callable, Tuple, Any
import torch
from transformers import (
AutoModelForCausalLM, AutoModelForMaskedLM,
AutoModelForSeq2SeqLM,
AutoTokenizer
)
from transformers.utils.logging import set_verbosity_error
from collections import defaultdict
from itertools import chain
from re import sub
import warnings
set_verbosity_error()
class LMScorer:
"""
Base LM scorer class intended to store models and tokenizers along
with methods to facilitate the analysis of language model output scores.
"""
def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:
"""
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
self.tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = True)
self.device = device
self.vocab = defaultdict(list)
# {self.vocab[x.strip()].append(i) for x, i in [(self.tokenizer.decode([i]), i) for i in range(self.tokenizer.vocab_size)]}
for i in range(self.tokenizer.vocab_size):
decoded = [(self.tokenizer.decode(i), i)]
for x, j in decoded:
self.vocab[x.strip()].append(j)
def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:
raise NotImplementedError
def distribution(self, batch: Iterable) -> torch.Tensor:
raise NotImplementedError
def topk(self, distribution: torch.Tensor, k: int = 1) -> Tuple:
top_k = distribution.topk(k)
probs = top_k.values.squeeze(1).exp().tolist()
if k == 1:
tokens = self.decode(top_k.indices.squeeze(1))
else:
tokens = [self.decode(x) for x in top_k.indices.squeeze(1)]
return tokens, probs
def query(self, distribution: torch.Tensor, queries: List[str]) -> Tuple:
# this will be self.vocab tho
query_ids = [self.vocab[a] for a in queries]
maxlen = max(map(len, query_ids))
query_ids = [q + [self.tokenizer.pad_token_id] * (maxlen - len(q)) if len(q) < maxlen else q for q in query_ids]
current_batch_size = distribution.shape[0]
probs = distribution[torch.arange(current_batch_size)[:, None], query_ids].max(1).values.exp().tolist()
inv_ranks = distribution.argsort().argsort() + 1
ranks = distribution.shape[1] - inv_ranks + 1
token_ranks = ranks[torch.arange(current_batch_size)[:, None], query_ids].min(1).values.tolist()
return probs, token_ranks
def logprobs(self, batch: Iterable, rank: bool = False) -> Union[float, List[float]]:
warnings.warn(
"logprobs is deprecated, use compute_stats instead",
DeprecationWarning
)
raise NotImplementedError
def compute_stats(self, batch: Iterable, rank: bool = False) -> Union[Union[float, int], List[Union[float, int]]]:
raise NotImplementedError
def prepare_text(self, text: Union[str, List[str]]) -> Union[str, List[str]]:
raise NotImplementedError
def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:
raise NotImplementedError
def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:
'''
For every input sentence, returns a list of tuples in the following format:
`(token, score)`,
where score represents the log-probability (by default) of the token given context. Can also return ranks along with scores.
:param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.
:param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.
:param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.
:param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)
:param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)
:return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.
:rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``
'''
raise NotImplementedError
def score(self, batch: Union[str, List[str]], pool: Callable = torch.mean, *args) -> Union[float, List[float]]:
'''
DEPRECATED as of v 0.1.18. Check out ``sequence_score`` or ``token_score`` instead!
Pooled estimates of sentence log probabilities, computed by the
language model. Pooling is usually done using a function that
is passed to the method.
:param batch: a list of sentences that will be passed to the
language model to score.
:type batch: Union[str, List[str]]
:param pool: Pooling function, is selected to be
`torch.mean()` by default.
:type pool: Callable
:return: Float or list of floats specifying the log
probabilities of the input sentence(s).
:rtype: Union[float, List[float]]
'''
warnings.warn(
"score is deprecated, use sequence_score or token_score instead",
DeprecationWarning
)
result = self.logprobs(self.prepare_text(batch))
logprob, _ = list(zip(*result))
pooled = list(map(lambda x: pool(x, *args).tolist(), logprob))
return pooled
def adapt_score(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]], pool: Callable = torch.mean, *args) -> None:
"""
DEPRECATED as of v 0.1.18. Check out ``partial_score`` instead!
"""
warnings.warn(
"adapt_score is deprecated, use partial_score or token_score instead",
DeprecationWarning
)
def partial_score(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]], reduction: Callable = lambda x: x.mean(0).item(), **kwargs) -> List[float]:
'''
Pooled estimates of sequence log probabilities (or some modification of it), given a preamble. Pooling is usually done using a function that is passed to the method.
:param preamble: a batch of preambles or primes passed to the
language model. This is what the sequence is conditioned on, and the model ignores the word probabilities of this part of the input in estimating the overall score.
:type preamble: ``Union[str, List[str]]``
:param stimuli: a batch of sequences (same length as preamble)
that form the main input consisting of the sequence whose
score you want to calculate.
:type stimuli: ``Union[str, List[str]]``
:param reduction: Reduction function, is selected to be
``lambda x: x.mean(0).item()`` by default, which stands for the avg. log-probability per token for each sequence in the batch.
:type reduction: Callable
:param kwargs: parameters for the ``compute_stats`` call --
* `prob` (`bool`): Whether the returned value should be a probability (note that the default reduction method will have to be changed to `lambda x: x.prod(0).item()` to get a meaningful return value)
* `base_two` (`bool`): whether the returned value should be in base 2 (only works when `prob = False`)
* `surprisal` (`bool`): whether the returned value should be a surprisal (does not work when `prob = True`)
:return: List of floats specifying the desired score for the stimuli part of the input, e.g., P(stimuli | preamble).
:rtype: ``List[float]``
'''
result = self.compute_stats(self.prime_text(preamble, stimuli), **kwargs, return_tensors = True)
logprob = result
reduced = list(map(reduction, logprob))
return reduced
def encode(self, text: Union[str, List[str]], manual_special: bool = True, return_tensors: Optional[str] = 'pt') -> Dict:
"""
Encode a batch of sentences using the model's tokenizer.
Equivalent of calling `model.tokenizer(input)`
:param ``Union[str, List[str]]`` text: Input batch/sentence to
be encoded.
:param manual_special: Specification of whether special tokens
will be manually encoded.
:type manual_special: bool
:param return_tensors: returned tensor format. Default `'pt'`
:type return_tensors: str
:return: Encoded batch
:rtype: ``Dict``
"""
sentences = [text] if isinstance(text, str) else text
if manual_special:
# manually add special tokens
sentences = self.add_special_tokens(sentences)
if return_tensors:
tokens = self.tokenizer.batch_encode_plus(sentences, add_special_tokens = False, padding = 'longest', return_attention_mask = True, return_tensors = return_tensors)
else:
# mostly for masked LMs
tokens = self.tokenizer.batch_encode_plus(sentences, padding = 'longest', return_attention_mask = True)
return tokens
def decode(self, idx: List[int]):
"""
Decode input ids using the model's tokenizer.
:param ``List[int]`` idx: List of ids.
:return: Decoded strings
:rtype: List[str]
"""
return [self.tokenizer.decode([x]).strip() for x in self.tokenizer.convert_tokens_to_ids(self.tokenizer.convert_ids_to_tokens(idx))]
class MaskedLMScorer(LMScorer):
"""
Class for Masked Language Models such as BERT, RoBERTa, etc.
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:
"""
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
super(MaskedLMScorer, self).__init__(model_name, device)
self.model = AutoModelForMaskedLM.from_pretrained(model_name, return_dict = True)
self.model.to(self.device)
self.model.eval()
# define CLS and SEP tokens
self.bos_token_id = self.tokenizer.cls_token_id
self.eos_token_id = self.tokenizer.sep_token_id
self.cls_token_id = self.tokenizer.cls_token_id
self.sep_token_id = self.tokenizer.sep_token_id
self.mask_token_id = self.tokenizer.mask_token_id
self.pad_token_id = self.tokenizer.pad_token_id
def add_special_tokens(self, text: Union[str, List[str]]) -> List[str]:
"""
Reformats input text to add special model-dependent tokens.
:param text: single string or batch of strings to be
modified.
:type text: ``Union[str, List[str]]``
:return: Modified input, containing special tokens as per
tokenizer specification
:rtype: ``List[str]``
"""
sentences = [text] if isinstance(text, str) else text
sentences = [self.tokenizer.cls_token + " " + sentence + " " + self.tokenizer.sep_token for sentence in sentences]
return sentences
def mask(self, sentence_words: Union[Tuple[str, str], List[Tuple[str, str]]]) -> Tuple[str, str, int]:
"""
Processes a list of (sentence, word) into input that has the
word masked out of the sentence.
Note: only works for masked LMs.
:param ``Union[Tuple[str], List[Tuple[str]]]`` sentence_words:
Input consisting of `[(sentence, word)]`, where sentence
is an input sentence, and word is a word present in the
sentence that will be masked out.
:return: Tuple `(sentence, word, length)`
"""
sentence_words = [sentence_words] if isinstance(sentence_words[0], str) else sentence_words
sentences, words = list(zip(*sentence_words))
words = list(words)
length = len(words)
sentences = [sub(rf'(?<![\w\/-])({word})(?=[^\w\/-])', self.tokenizer.mask_token, sentence) for sentence, word in sentence_words]
return (sentences, words, length)
def cloze(self, sentence_words: Union[Tuple[str, str], List[Tuple[str, str]]]) -> torch.Tensor:
"""
Runs inference on masked input.
Note: only works for masked LMs.
:param ``Union[Tuple[str], List[Tuple[str]]]`` sentence_words:
Input consisting of `[(sentence, word)]`, where sentence
is an input sentence, and word is a word present in the
sentence that will be masked out and inferred.
:return: A tensor with log probabilities for the desired word
in context
"""
sentences, words, length = self.mask(sentence_words)
encoded = self.tokenizer(sentences, return_tensors='pt')
encoded = encoded.to(self.device)
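# find the column index of the [MASK] token in each encoded sentence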
idx = torch.nonzero(encoded['input_ids'] == self.tokenizer.mask_token_id, as_tuple=False)[:,1].unsqueeze(1)
word_idx = self.tokenizer(words, add_special_tokens=False)['input_ids']
with torch.no_grad():
masked_logits = self.model(**encoded).logits[torch.arange(length)[:, None], idx].squeeze().detach()
if len(sentences) > 1:
logprobs = masked_logits - masked_logits.logsumexp(1).unsqueeze(1)
masked_logprobs = logprobs[torch.arange(len(sentences))[:, None], word_idx].exp().squeeze()
else:
logprobs = masked_logits - masked_logits.logsumexp(0)
masked_logprobs = logprobs[word_idx].exp().squeeze()
return masked_logprobs
def prepare_text(self, text: Union[str, List[str]]) -> Iterable[Any]:
"""
Prepares a batch of input text into a format fit to run MLM
scoring on.
Borrows preprocessing algorithm from Salazar et al. (2020), and
modifies code from the following github repository by simonepri:
https://github.com/simonepri/lm-scorer
:param text: batch of sentences to be prepared for scoring.
:return: Batch of formatted input that can be passed to `logprob`
"""
# converts input text to batch of tensors with every position except the cls and sep token masked
sentences = [text] if isinstance(text, str) else text
# idea is to tokenize and then create batches of tokenized instances,
# but with each token in the sequence replaced by the mask token.
encoded = self.encode(sentences, manual_special = False)
token_idx = encoded['input_ids']
attention_masks = encoded['attention_mask']
masked_tensors = [] # token ids, attention masks, lengths
for token_ids, attention_mask in zip(token_idx, attention_masks):
token_ids = torch.tensor(token_ids)
# final_lengths = len(token_ids) - 2
attention_mask = torch.tensor(attention_mask)
token_ids_masked_list = []
attention_masked_list = []
effective_token_ids = [token for token in token_ids if token != self.pad_token_id and token != self.cls_token_id and token != self.sep_token_id]
effective_length = len(effective_token_ids)
mask_indices = []
mask_indices = [[mask_pos] for mask_pos in range(effective_length+2)]
# We don't mask the [CLS], [SEP] for now for PLL
mask_indices = mask_indices[1:-1]
mask_token_id = self.mask_token_id
for mask_set in mask_indices:
token_ids_masked = token_ids.clone()
token_ids_masked[mask_set] = mask_token_id
attention_masked = attention_mask.clone()
attention_masked_list.append(attention_masked)
token_ids_masked_list.append(token_ids_masked)
masked_tensors.append((torch.stack(token_ids_masked_list), torch.stack(attention_masked_list), effective_token_ids, len(mask_indices), 1))
return masked_tensors
def prime_text(self, preamble: Union[str, List[str]] , stimuli: Union[str, List[str]]) -> Iterable[Any]:
"""
Prepares a batch of input text into a format fit to run LM
scoring on.
Borrows preprocessing algorithm from Salazar et al. (2020), and
modifies code from the following github repository by simonpri:
https://github.com/simonepri/lm-scorer
:param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.
:param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). The positions of the elements match their counterparts in the ``preamble``.
:return: Batch of formatted input that can be passed to
``compute_stats``
"""
preamble_text = [preamble] if isinstance(preamble, str) else preamble
preamble_encoded = self.encode(preamble_text, False)['input_ids']
preamble_lens = []
for preamble_tokens in preamble_encoded:
preamble_lens.append(len([token for token in preamble_tokens if token != self.pad_token_id and token != self.sep_token_id]))
sentences = [preamble + " " + stimuli] if isinstance(preamble, str) else [p + " " + s for p, s in list(zip(preamble, stimuli))]
# idea is to tokenize and then create batches of tokenized instances,
# but with each token in the sequence replaced by the mask token.
encoded = self.encode(sentences, manual_special = False)
token_idx = encoded['input_ids']
attention_masks = encoded['attention_mask']
masked_tensors = [] # token ids, attention masks, lengths
for i, (token_ids, attention_mask) in enumerate(zip(token_idx, attention_masks)):
token_ids = torch.tensor(token_ids)
# final_lengths = len(token_ids) - 2
attention_mask = torch.tensor(attention_mask)
token_ids_masked_list = []
attention_masked_list = []
effective_token_ids = [token for j, token in enumerate(token_ids) if token != self.pad_token_id and token != self.cls_token_id and token != self.sep_token_id and j >= preamble_lens[i]]
effective_length = len(effective_token_ids) + preamble_lens[i]
mask_indices = []
mask_indices = [[mask_pos] for mask_pos in range(preamble_lens[i], effective_length+1)]
# We don't mask the [CLS], [SEP] for now for PLL
mask_indices = mask_indices[:-1]
mask_token_id = self.mask_token_id
for mask_set in mask_indices:
token_ids_masked = token_ids.clone()
token_ids_masked[mask_set] = mask_token_id
attention_masked = attention_mask.clone()
attention_masked_list.append(attention_masked)
token_ids_masked_list.append(token_ids_masked)
masked_tensors.append((torch.stack(token_ids_masked_list), torch.stack(attention_masked_list), effective_token_ids, len(mask_indices), preamble_lens[i]))
return masked_tensors
def distribution(self, batch: Iterable) -> torch.Tensor:
"""
Returns a distribution over the vocabulary of the model.
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:return: Tensor consisting of log probabilities over vocab items.
"""
# takes in prepared text and returns scores for each sentence in batch
token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))
token_ids = torch.cat(token_ids)
attention_masks = torch.cat(attention_masks)
token_ids = token_ids.to(self.device)
attention_masks = attention_masks.to(self.device)
effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])
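# the i-th masked copy of each sentence predicts the token at position offset + i; flatten these positions to index the stacked logits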
indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))
with torch.no_grad():
output = self.model(token_ids, attention_mask = attention_masks)
logits = output.logits[torch.arange(sum(lengths)), indices].detach()
logprob_distribution = logits - logits.logsumexp(1).unsqueeze(1)
return logprob_distribution
def cloze_distribution(self, queries: Iterable) -> torch.Tensor:
'''
Accepts as input batch of [(s_i, bw_i)] where s_i is a prompt with an
abstract token (bw_i) representing a blank word and returns a distribution
over the vocabulary of the model.
:param `Iterable` queries: A batch of [(s_i, bw_i)] where s_i is a prompt with an abstract token (bw_i) representing a blank word
:return: Tensor consisting of log probabilities over vocab items.
'''
queries = [queries] if isinstance(queries[0], str) else queries
prompts, words = list(zip(*queries))
modified_prompts = self.add_special_tokens(prompts)
splits = [prompt.split(word) for prompt, word in zip(modified_prompts, words)]
splits = [[x.strip() for x in s] for s in splits]
pre, post = list(zip(*splits))
pre_idx = self.tokenizer(list(pre), add_special_tokens = False, padding=False)['input_ids']
mask_idx = [len(item) for item in pre_idx]
masked = [m.replace(w, self.tokenizer.mask_token) for m, w in zip(modified_prompts, words)]
with torch.no_grad():
encoded = self.tokenizer(masked, add_special_tokens = False, return_tensors='pt', padding = True)
encoded = encoded.to(self.device)
logits = self.model(**encoded)
presoftmax = logits.logits[torch.arange(len(queries)), mask_idx]
if 'cuda' in self.device:
presoftmax = presoftmax.detach().cpu()
else:
presoftmax = presoftmax.detach()
logprobs = presoftmax - presoftmax.logsumexp(1).unsqueeze(1)
return logprobs
def logprobs(self, batch: Iterable, rank = False) -> Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]:
"""
Returns log probabilities
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:param rank: Specifies whether to also return ranks of words.
:type rank: bool
:return: List of MLM score metrics and tokens.
:rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]
"""
warnings.warn(
"logprobs is deprecated, use compute_stats instead",
DeprecationWarning
)
token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))
token_ids = torch.cat(token_ids)
attention_masks = torch.cat(attention_masks)
token_ids = token_ids.to(self.device)
attention_masks = attention_masks.to(self.device)
effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])
sent_tokens = list(map(lambda x: self.tokenizer.convert_ids_to_tokens(x.tolist()), effective_token_ids.split(lengths)))
indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))
with torch.no_grad():
output = self.model(token_ids, attention_mask = attention_masks)
logits = output.logits[torch.arange(sum(lengths)), indices]
if self.device == 'cuda:0' or self.device == "cuda:1":
logits = logits.detach()
sent_log_probs = logits - logits.logsumexp(1).unsqueeze(1)
if rank:
shape = sent_log_probs.shape
# inv_ranks = (sent_log_probs).argsort().argsort() + 1
# ranks = shape[1] - inv_ranks + 1
ranks = (-1.0 * sent_log_probs).argsort().argsort() + 1
word_ranks = ranks[torch.arange(shape[0]), effective_token_ids].split(lengths)
sent_log_probs = sent_log_probs[torch.arange(sum(lengths)), effective_token_ids].type(torch.DoubleTensor).split(lengths)
# print(sent_log_probs)
# sentence_scores = list(map(lambda x: x.sum().tolist(), logprobs))
# outputs.append((logprobs, sent_tokens))
if rank:
return list(zip(sent_log_probs, sent_tokens, word_ranks))
return list(zip(sent_log_probs, sent_tokens))
def compute_stats(self, batch: Iterable, rank: bool = False, prob = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:
'''
Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.
:param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.
:param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).
:param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. Can only be `True` when `base_two` is `False`.
:param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.
:param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. This is important in some other convenient methods used in the package.
:return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.
:rtype: ``Union[Tuple[List[float], List[float]], List[float]]``
'''
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))
token_ids = torch.cat(token_ids)
attention_masks = torch.cat(attention_masks)
token_ids = token_ids.to(self.device)
attention_masks = attention_masks.to(self.device)
effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])
indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))
with torch.no_grad():
output = self.model(token_ids, attention_mask = attention_masks)
logits = output.logits.detach()[torch.arange(sum(lengths)), indices]
logprob_distribution = logits - logits.logsumexp(1).unsqueeze(1)
if base_two:
logprob_distribution = logprob_distribution/torch.tensor(2).log()
if prob:
logprob_distribution = logprob_distribution.exp()
if rank:
shape = logprob_distribution.shape
'''
Double argsort trick:
first argsort returns idxes of values that would return a sorted tensor,
second argsort returns ranks (0 indexed)
Proof: https://www.berkayantmen.com/rank.html
TODO: Try to implement ranking in linear time but across arbitrary dimensions:
https://stackoverflow.com/a/5284703
'''
word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1
word_ranks = word_ranks[torch.arange(shape[0]), effective_token_ids].split(lengths)
word_ranks = [wr.tolist() for wr in word_ranks]
scores = logprob_distribution[torch.arange(sum(lengths)), effective_token_ids].type(torch.DoubleTensor).split(lengths)
scores = [s for s in scores]
if not return_tensors:
scores = [s.tolist() for s in scores]
if rank:
return scores, word_ranks
else:
return scores
def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False):
'''
TODO: reduction should be a string, if it's a function, specify what kind of function. --> how to ensure it is always that type?
'''
tokenized = self.prepare_text(batch)
scores = self.compute_stats(tokenized, rank = False, base_two = base_two, return_tensors = True)
reduced = list(map(reduction, scores))
return reduced
def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:
'''
For every input sentence, returns a list of tuples in the following format:
`(token, score)`,
where score represents the log-probability (by default) of the token given context. Can also return ranks along with scores.
:param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.
:param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.
:param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.
:param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)
:param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)
:return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.
:rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``
'''
assert not (surprisal and prob), "cannot both evaluate probability and surprisal at the same time!"
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
tokenized = self.prepare_text(batch)
if rank:
scores, ranks = self.compute_stats(tokenized, rank = rank, prob = prob, base_two = base_two, return_tensors=True)
else:
scores = self.compute_stats(tokenized, prob = prob, base_two = base_two, return_tensors=True)
if surprisal:
scores = [-1.0 * s for s in scores]
scores = [s.tolist() for s in scores]
indices = [[i.item() for i in indexed if i.item() != self.tokenizer.pad_token_id] for indexed in list(zip(*tokenized))[2]]
tokens = [self.decode(idx) for idx in indices]
if rank:
assert len(tokens) == len(scores) == len(ranks)
else:
assert len(tokens) == len(scores)
res = []
if rank:
for t, s, r in zip(tokens, scores, ranks):
res.append(list(zip(t, s, r)))
# return [list(zip(t, s, r)) for t, s, r in zip(tokens, scores, ranks)]
else:
for t, s in zip(tokens, scores):
res.append(list(zip(t, s)))
return res
class IncrementalLMScorer(LMScorer):
"""
Class for Autoregressive or Incremental (or left-to-right) language models such as GPT2, etc.
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:
"""
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
super(IncrementalLMScorer, self).__init__(model_name, device)
self.model = AutoModelForCausalLM.from_pretrained(model_name, return_dict = True)
# define CLS and SEP tokens
if self.tokenizer.pad_token is None:
self.tokenizer.add_special_tokens({"additional_special_tokens": ["<|pad|>"]})
self.tokenizer.pad_token = "<|pad|>"
if self.tokenizer.bos_token is None:
self.tokenizer.add_special_tokens({"additional_special_tokens": ["<|bos|>"]})
self.tokenizer.bos_token = "<|bos|>"
self.model.resize_token_embeddings(len(self.tokenizer))
self.model.to(self.device)
self.model.eval()
def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:
"""
Reformats input text to add special model-dependent tokens.
:param text: single string or batch of strings to be
modified.
:type text: Union[str, List[str]]
:return: Modified input, containing special tokens as per
tokenizer specification
:rtype: Union[str, List[str]]
"""
sentences = [text] if isinstance(text, str) else text
sentences = [self.tokenizer.bos_token + sentence for sentence in sentences]
return sentences
def encode(self, text: Union[str, List[str]]) -> dict:
text = [text] if isinstance(text, str) else text
return self.tokenizer(text, return_tensors='pt', padding = True)
def prepare_text(self, text: Union[str, List[str]]) -> Tuple:
"""
Prepares a batch of input text into a format fit to run LM
scoring on.
:param text: batch of sentences to be prepared for scoring.
:return: Batch of formatted input that can be passed to
``compute_stats``
"""
encoded = self.encode(text)
offsets = [0] * len(encoded['input_ids'])
return encoded, offsets
def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:
"""
Prepares a batch of input text into a format fit to run LM
scoring on.
:param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.
:param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). The positions of the elements match their counterparts in the ``preamble``.
:return: Batch of formatted input that can be passed to
``compute_stats``
"""
preamble_text = [preamble] if isinstance(preamble, str) else preamble
preamble_encoded = self.tokenizer(preamble_text)['input_ids']
preamble_lens = []
for preamble_tokens in preamble_encoded:
preamble_lens.append(len([token for token in preamble_tokens if token != self.tokenizer.pad_token_id and token != self.tokenizer.sep_token_id]) - 1)
sentences = [preamble + " " + stimuli] if isinstance(preamble, str) else [p + " " + s for p , s in list(zip(preamble, stimuli))]
return self.encode(sentences), preamble_lens
def distribution(self, batch: Iterable) -> torch.Tensor:
"""
Returns a distribution over the vocabulary of the model.
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:return: Tensor consisting of log probabilities over vocab items.
"""
batch, offsets = batch
ids = batch["input_ids"]
ids = ids.to(self.device)
attention_masks = batch["attention_mask"]
attention_masks = attention_masks.to(self.device)
nopad_mask = ids != self.tokenizer.pad_token_id
with torch.no_grad():
outputs = self.model(ids, attention_mask=attention_masks)
logits = outputs.logits
if self.device == 'cuda:0' or self.device == "cuda:1":
logits = logits.detach()
outputs = []
for sent_index in range(len(ids)):
sent_nopad_mask = nopad_mask[sent_index]
# len(tokens) = len(text[sent_index]) + 1
sent_tokens = [
tok
for i, tok in enumerate(batch.tokens(sent_index))
if sent_nopad_mask[i] and i > offsets[sent_index] + 1
]
# sent_ids.shape = [len(text[sent_index]) + 1]
# ignore first token (<|eos|>)
sent_ids = ids[sent_index, sent_nopad_mask][1:]
# logits.shape = [len(text[sent_index]) + 1, vocab_size]
sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]
sent_logits[:, self.tokenizer.pad_token_id] = float("-inf")
outputs.append(sent_logits[-1])
return torch.stack(outputs, 0)
def next_word_distribution(self, queries: List, surprisal: bool = False):
'''
Returns the log probability distribution of the next word.
'''
encoded = self.encode(queries)
encoded = encoded.to(self.device)
query_ids = [[j for j, i in enumerate(instance) if i != self.tokenizer.pad_token_id][-1] for instance in encoded['input_ids'].tolist()]
logits = self.model(**encoded).logits.detach()
logits[:, :, self.tokenizer.pad_token_id] = float("-inf")
logits = logits[torch.arange(len(query_ids)), query_ids]
logprobs = logits - logits.logsumexp(1).unsqueeze(1)
if surprisal:
logprobs = -1.0 * logprobs
return logprobs
def compute_stats(self, batch: Iterable, rank: bool = False, prob: bool = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:
'''
Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.
:param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.
:param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).
:param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. Can only be `True` when `base_two` is `False`.
:param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.
:param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. This is important in some other convenient methods used in the package.
:return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.
:rtype: ``Union[Tuple[List[float], List[int]], List[float]]``
'''
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
encoded, offsets = batch
encoded = encoded.to(self.device)
ids = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in encoded['input_ids'].tolist()]
## Ignore the probabilities of the first token.
effective_ids = [id[1:] for id in ids]
with torch.no_grad():
logits = self.model(**encoded).logits.detach()
logits[:, :, self.tokenizer.pad_token_id] = float("-inf")
logits = logits.split([1]*len(offsets))
## Set up storage variables
scores = []
if rank:
ranks = []
for logit, idx, offset in zip(logits, effective_ids, offsets):
length = len(idx)
logit = logit.squeeze(0)[:, :-1][torch.arange(offset, length),]
logprob_distribution = logit - logit.logsumexp(1).unsqueeze(1)
query_ids = idx[offset:]
if base_two:
'''
Log_2(X) = log_e(X)/log_e(2) (broadcasted)
'''
score = (logprob_distribution[torch.arange(length - offset), query_ids] / torch.tensor(2).log()).tolist()
else:
if prob:
score = logprob_distribution[torch.arange(length - offset), query_ids].exp().tolist()
else:
score = logprob_distribution[torch.arange(length - offset), query_ids].tolist()
if rank:
# shape = logprob_distribution.shape
'''
Double argsort trick:
first argsort returns idxes of values that would return a sorted tensor,
second argsort returns ranks (0 indexed)
Proof: https://www.berkayantmen.com/rank.html
TODO: Try to implement ranking in linear time but across arbitrary dimensions:
https://stackoverflow.com/a/5284703
'''
word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1
# inv_ranks = logprob_distribution.argsort().argsort() + 1
# word_ranks = shape[1] - inv_ranks + 1
word_ranks = word_ranks[torch.arange(length - offset), query_ids].tolist()
ranks.append(word_ranks)
scores.append(score)
if return_tensors:
scores = [torch.tensor(l) for l in scores]
if rank:
return scores, ranks
else:
return scores
def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False):
'''
TODO: reduction should be a string, if it's a function, specify what kind of function. --> how to ensure it is always that type?
'''
tokenized = self.prepare_text(batch)
scores = self.compute_stats(tokenized, rank = False, base_two = base_two, return_tensors = True)
reduced = list(map(reduction, scores))
return reduced
def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:
'''
For every input sentence, returns a list of tuples in the following format:
`(token, score)`,
where score represents the log-probability (by default) of the token given context. Can also return ranks along with scores.
:param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.
:param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.
:param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.
:param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)
:param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)
:return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.
:rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``
'''
assert not (surprisal and prob), "cannot both evaluate probability and surprisal at the same time!"
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
tokenized = self.prepare_text(batch)
if rank:
scores, ranks = self.compute_stats(tokenized, rank = rank, prob = prob, base_two = base_two, return_tensors=True)
else:
scores = self.compute_stats(tokenized, prob = prob, base_two = base_two, return_tensors=True)
if surprisal:
scores = [-1.0 * s for s in scores]
scores = [s.tolist() for s in scores]
indices = [[i for i in indexed if i != self.tokenizer.pad_token_id] for indexed in tokenized[0]['input_ids'].tolist()]
tokens = [self.decode(idx) for idx in indices]
if rank:
assert len(tokens) == len(scores) == len(ranks)
else:
assert len(tokens) == len(scores)
res = []
if rank:
for t, s, r in zip(tokens, scores, ranks):
if len(t) > len(s):
diff = len(t) - len(s)
sc = [0.0]*diff + s
ra = [0]*diff + r
res.append(list(zip(t, sc, ra)))
else:
res.append(list(zip(t, s, r)))
# return [list(zip(t, s, r)) for t, s, r in zip(tokens, scores, ranks)]
else:
for t, s in zip(tokens, scores):
if len(t) > len(s):
diff = len(t) - len(s)
sc = [0.0]*diff + s
res.append(list(zip(t, sc)))
else:
res.append(list(zip(t, s)))
return res
def logprobs(self, batch: Iterable, rank = False) -> Union[float, List[float]]:
"""
Returns log probabilities
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:param rank: Specifies whether to also return ranks of words.
:type rank: bool
:return: List of LM score metrics (probability and rank)
and tokens.
:rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]
"""
warnings.warn(
"logprobs is deprecated, use compute_stats instead",
DeprecationWarning
)
batch, offsets = batch
ids = batch["input_ids"]
ids = ids.to(self.device)
attention_masks = batch["attention_mask"]
attention_masks = attention_masks.to(self.device)
nopad_mask = ids != self.tokenizer.pad_token_id
with torch.no_grad():
outputs = self.model(ids, attention_mask=attention_masks)
logits = outputs.logits
if self.device == 'cuda:0' or self.device == "cuda:1":
logits = logits.detach()
outputs = []
for sent_index in range(len(ids)):
sent_nopad_mask = nopad_mask[sent_index]
# len(tokens) = len(text[sent_index]) + 1
sent_tokens = [
tok
for i, tok in enumerate(batch.tokens(sent_index))
if sent_nopad_mask[i] and i > offsets[sent_index]
]
# sent_ids.shape = [len(text[sent_index]) + 1]
# ignore first token (<|eos|>)
sent_ids = ids[sent_index, sent_nopad_mask][1:]
# logits.shape = [len(text[sent_index]) + 1, vocab_size]
sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]
sent_logits[:, self.tokenizer.pad_token_id] = float("-inf")
# ids_scores.shape = [seq_len + 1]
# select only the ids present in the sentence out of all vocab items (as a 2d array)
sent_ids_scores = sent_logits.gather(1, sent_ids.unsqueeze(1)).squeeze(1)
# log_prob.shape = [seq_len + 1]
sent_log_probs = sent_ids_scores - sent_logits.logsumexp(1)
sent_log_probs = sent_log_probs.type(torch.DoubleTensor)
sent_log_probs = sent_log_probs[offsets[sent_index]:]
lengths = len(sent_log_probs)
if rank:
shape = sent_logits.shape
inv_ranks = (sent_logits).argsort().argsort() + 1
ranks = shape[1] - inv_ranks + 1
word_ranks = ranks[list(range(shape[0]))[offsets[sent_index]:], sent_ids[offsets[sent_index]: ].tolist()].split(lengths)
word_ranks = [x[0] for x in word_ranks]
outputs.append((sent_log_probs, sent_tokens, word_ranks))
else:
outputs.append((sent_log_probs, sent_tokens))
# output = (sent_log_probs.sum(), sent_ids, sent_tokens)
# outputs.append(output)
return outputs
class Seq2SeqScorer(LMScorer):
"""
Class for sequence-to-sequence (encoder-decoder) language models such as T5, BART, etc.
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:
"""
:param model_name: name of the model, should either be a path
to a model (.pt or .bin file) stored locally, or a
pretrained model stored on the Huggingface Model Hub.
:type model_name: str
:param device: device type that the model should be loaded on,
options: `cpu or cuda:{0, 1, ...}`
:type device: str, optional
"""
super(Seq2SeqScorer, self).__init__(model_name, device)
self.model = AutoModelForSeq2SeqLM.from_pretrained(
model_name, return_dict = True
)
# define CLS and SEP tokens
if self.tokenizer.pad_token is None:
self.tokenizer.add_special_tokens({"additional_special_tokens": ["<|pad|>"]})
self.tokenizer.pad_token = "<|pad|>"
if self.tokenizer.bos_token is None:
self.tokenizer.add_special_tokens({"additional_special_tokens": ["<|bos|>"]})
self.tokenizer.bos_token = "<|bos|>"
self.model.resize_token_embeddings(len(self.tokenizer))
self.model.to(self.device)
self.model.eval()
def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:
"""
Reformats input text to add special model-dependent tokens.
:param text: single string or batch of strings to be
modified.
:type text: Union[str, List[str]]
:return: Modified input, containing special tokens as per
tokenizer specification
:rtype: Union[str, List[str]]
"""
sentences = [text] if isinstance(text, str) else text
sentences = [self.tokenizer.bos_token + sentence for sentence in sentences]
return sentences
def encode(self, text: Union[str, List[str]]) -> dict:
text = [text] if isinstance(text, str) else text
return self.tokenizer(text, return_tensors='pt', padding = True)
def prepare_text(self, text: Union[str, List[str]]) -> Tuple:
"""
Prepares a batch of input text into a format fit to run LM
scoring on.
:param text: batch of sentences to be prepared for scoring.
:return: Batch of formatted input that can be passed to
``compute_stats``
"""
encoded = self.encode(text)
offsets = [0] * len(encoded['input_ids'])
return encoded, offsets
def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:
"""
Prepares a batch of input text into a format fit to run LM
scoring on.
:param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.
:param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). The positions of the elements match their counterparts in the ``preamble``.
:return: Batch of formatted input that can be passed to
``compute_stats``
"""
preamble_text = [preamble] if isinstance(preamble, str) else preamble
preamble_encoded = self.tokenizer(preamble_text)['input_ids']
preamble_lens = []
for preamble_tokens in preamble_encoded:
preamble_lens.append(len([token for token in preamble_tokens if token != self.tokenizer.pad_token_id and token != self.tokenizer.sep_token_id]) - 1)
sentences = [preamble + " " + stimuli] if isinstance(preamble, str) else [p + " " + s for p , s in list(zip(preamble, stimuli))]
return self.encode(sentences), preamble_lens
def distribution(self, batch: Iterable) -> torch.Tensor:
"""
Returns a distribution over the vocabulary of the model.
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:return: Tensor consisting of log probabilities over vocab items.
"""
batch, offsets = batch
ids = batch["input_ids"]
ids = ids.to(self.device)
attention_masks = batch["attention_mask"]
attention_masks = attention_masks.to(self.device)
nopad_mask = ids != self.tokenizer.pad_token_id
with torch.no_grad():
outputs = self.model(ids, attention_mask=attention_masks)
logits = outputs.logits
if self.device == 'cuda:0' or self.device == "cuda:1":
logits = logits.detach()
outputs = []
for sent_index in range(len(ids)):
sent_nopad_mask = nopad_mask[sent_index]
# len(tokens) = len(text[sent_index]) + 1
sent_tokens = [
tok
for i, tok in enumerate(batch.tokens(sent_index))
if sent_nopad_mask[i] and i > offsets[sent_index] + 1
]
# sent_ids.shape = [len(text[sent_index]) + 1]
# ignore first token (<|eos|>)
sent_ids = ids[sent_index, sent_nopad_mask][1:]
# logits.shape = [len(text[sent_index]) + 1, vocab_size]
sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]
sent_logits[:, self.tokenizer.pad_token_id] = float("-inf")
outputs.append(sent_logits[-1])
return torch.stack(outputs, 0)
def next_word_distribution(self, queries: List, surprisal: bool = False):
'''
Returns the log probability distribution of the next word.
'''
encoded = self.encode(queries)
encoded = encoded.to(self.device)
query_ids = [[j for j, i in enumerate(instance) if i != self.tokenizer.pad_token_id][-1] for instance in encoded['input_ids'].tolist()]
logits = self.model(**encoded).logits.detach()
logits[:, :, self.tokenizer.pad_token_id] = float("-inf")
logits = logits[torch.arange(len(query_ids)), query_ids]
logprobs = logits - logits.logsumexp(1).unsqueeze(1)
if surprisal:
logprobs = -1.0 * logprobs
return logprobs
def compute_stats(self, batch: Iterable, source: Iterable, rank: bool = False, prob: bool = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:
'''
Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.
:param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.
:param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).
:param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. Can only be `True` when `base_two` is `False`.
:param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.
:param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. This is important in some other convenient methods used in the package.
:return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.
:rtype: ``Union[Tuple[List[float], List[int]], List[float]]``
'''
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
source_encoded, source_offsets = source
target_encoded, target_offsets = batch
source_ids = source_encoded['input_ids'].to(self.device)
target_ids = target_encoded['input_ids'].to(self.device)
source_ids_list = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in source_encoded['input_ids'].tolist()]
target_ids_list = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in target_encoded['input_ids'].tolist()]
## Ignore the probabilities of the first token.
source_effective_ids = [id[1:] for id in source_ids_list]
target_effective_ids = [id[1:] for id in target_ids_list]
with torch.no_grad():
logits = self.model(input_ids=source_ids, labels=target_ids).logits.detach()
logits[:, :, self.tokenizer.pad_token_id] = float("-inf")
logits = logits.split([1]*len(target_offsets))
## Set up storage variables
scores = []
if rank:
ranks = []
for logit, idx, offset in zip(logits, target_effective_ids, target_offsets):
length = len(idx)
logit = logit.squeeze(0)[:, -4:-1][torch.arange(offset, length),]
logprob_distribution = logit - logit.logsumexp(1).unsqueeze(1)
query_ids = idx[offset:]
if base_two:
'''
Log_2(X) = log_e(X)/log_e(2) (broadcasted)
'''
score = (logprob_distribution[torch.arange(length - offset), query_ids] / torch.tensor(2).log()).tolist()
else:
if prob:
score = logprob_distribution[torch.arange(length - offset), query_ids].exp().tolist()
else:
score = logprob_distribution[torch.arange(length - offset), query_ids].tolist()
if rank:
# shape = logprob_distribution.shape
'''
Double argsort trick:
first argsort returns idxes of values that would return a sorted tensor,
second argsort returns ranks (0 indexed)
Proof: https://www.berkayantmen.com/rank.html
TODO: Try to implement ranking in linear time but across arbitrary dimensions:
https://stackoverflow.com/a/5284703
'''
word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1
# inv_ranks = logprob_distribution.argsort().argsort() + 1
# word_ranks = shape[1] - inv_ranks + 1
word_ranks = word_ranks[torch.arange(length - offset), query_ids].tolist()
ranks.append(word_ranks)
scores.append(score)
if return_tensors:
scores = [torch.tensor(l) for l in scores]
if rank:
return scores, ranks
else:
return scores
def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False,
source_format = 'blank', source = None):
'''
TODO: reduction should be a string, if it's a function, specify what kind of function. --> how to ensure it is always that type?
'''
if source is not None:
assert len(source) == len(batch)
source_format = "custom"
tokenized = self.prepare_text(batch)
if source_format == 'blank':
source = [""] * len(batch)
elif source_format == 'copy':
source = batch
source = self.prepare_text(source)
scores = self.compute_stats(tokenized, source, rank = False, base_two = base_two, return_tensors = True)
reduced = list(map(reduction, scores))
return reduced
def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False, source_format: str = 'blank') -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:
'''
For every input sentence, returns a list of tuples in the following format:
`(token, score)`,
where score represents the log-probability (by default) of the token given context. Can also return ranks along with scores.
:param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.
:param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.
:param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.
:param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)
:param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)
:return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.
:rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``
'''
assert not (surprisal and prob), "cannot both evaluate probability and surprisal at the same time!"
assert not (base_two and prob), "cannot both use base (which is for a log), and a probability measure at the same time!"
tokenized = self.prepare_text(batch)
if source_format == 'blank':
source = [""] * len(batch)
elif source_format == 'copy':
source = batch
source = self.prepare_text(source)
if rank:
scores, ranks = self.compute_stats(tokenized, source, rank = rank, prob = prob, base_two = base_two, return_tensors=True)
else:
scores = self.compute_stats(tokenized, source, prob = prob, base_two = base_two, return_tensors=True)
if surprisal:
scores = [-1.0 * s for s in scores]
scores = [s.tolist() for s in scores]
indices = [[i for i in indexed if i != self.tokenizer.pad_token_id] for indexed in tokenized[0]['input_ids'].tolist()]
tokens = [self.decode(idx) for idx in indices]
if rank:
assert len(tokens) == len(scores) == len(ranks)
else:
assert len(tokens) == len(scores)
res = []
if rank:
for t, s, r in zip(tokens, scores, ranks):
if len(t) > len(s):
diff = len(t) - len(s)
sc = [0.0]*diff + s
ra = [0]*diff + r
res.append(list(zip(t, sc, ra)))
else:
res.append(list(zip(t, s, r)))
# return [list(zip(t, s, r)) for t, s, r in zip(tokens, scores, ranks)]
else:
for t, s in zip(tokens, scores):
if len(t) > len(s):
diff = len(t) - len(s)
sc = [0.0]*diff + s
res.append(list(zip(t, sc)))
else:
res.append(list(zip(t, s)))
return res
def logprobs(self, batch: Iterable, rank = False, source_format: str = 'blank') -> Union[float, List[float]]:
"""
Returns log probabilities
:param `Iterable` batch: A batch of inputs fit to pass to a
transformer LM.
:param rank: Specifies whether to also return ranks of words.
:type rank: bool
:return: List of LM score metrics (probability and rank)
and tokens.
:rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]
"""
warnings.warn(
"logprobs is deprecated, use compute_stats instead",
DeprecationWarning
)
batch, offsets = batch
ids = batch["input_ids"]
ids = ids.to(self.device)
attention_masks = batch["attention_mask"]
attention_masks = attention_masks.to(self.device)
nopad_mask = ids != self.tokenizer.pad_token_id
with torch.no_grad():
outputs = self.model(ids, attention_mask=attention_masks)
logits = outputs.logits
if self.device == 'cuda:0' or self.device == "cuda:1":
logits = logits.detach()
outputs = []
for sent_index in range(len(ids)):
sent_nopad_mask = nopad_mask[sent_index]
# len(tokens) = len(text[sent_index]) + 1
sent_tokens = [
tok
for i, tok in enumerate(batch.tokens(sent_index))
if sent_nopad_mask[i] and i > offsets[sent_index]
]
# sent_ids.shape = [len(text[sent_index]) + 1]
# ignore first token (<|eos|>)
sent_ids = ids[sent_index, sent_nopad_mask][1:]
# logits.shape = [len(text[sent_index]) + 1, vocab_size]
sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]
sent_logits[:, self.tokenizer.pad_token_id] = float("-inf")
# ids_scores.shape = [seq_len + 1]
# select only the ids present in the sentence out of all vocab items (as a 2d array)
sent_ids_scores = sent_logits.gather(1, sent_ids.unsqueeze(1)).squeeze(1)
# log_prob.shape = [seq_len + 1]
sent_log_probs = sent_ids_scores - sent_logits.logsumexp(1)
sent_log_probs = sent_log_probs.type(torch.DoubleTensor)
sent_log_probs = sent_log_probs[offsets[sent_index]:]
lengths = len(sent_log_probs)
if rank:
shape = sent_logits.shape
inv_ranks = (sent_logits).argsort().argsort() + 1
ranks = shape[1] - inv_ranks + 1
word_ranks = ranks[list(range(shape[0]))[offsets[sent_index]:], sent_ids[offsets[sent_index]: ].tolist()].split(lengths)
word_ranks = [x[0] for x in word_ranks]
outputs.append((sent_log_probs, sent_tokens, word_ranks))
else:
outputs.append((sent_log_probs, sent_tokens))
# output = (sent_log_probs.sum(), sent_ids, sent_tokens)
# outputs.append(output)
return outputs
```
<|||||>Hi, I am sorry but this goes outside the scope of `transformers`. I believe the issue was fixed, as *Chinese text is correctly tokenized* in transformers!
The expected behaviour was matched in the example script I provided you with, and I have no idea what post-processing might be done here that prevents the correct prediction. I think the issue should be opened on their repo! |
transformers | 20,284 | closed | Transformer cannot tokenize Chinese words | null | 11-16-2022 18:34:32 | 11-16-2022 18:34:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>There is nothing we can do without a code reproducer and, in particular, knowing which model you're using.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,283 | closed | Optimizes DonutProcessor token2json method for speed | # What does this PR do?
Speeds up the `token2json` method in `DonutProcessor` by calling `self.tokenizer.get_added_vocab()` once and reusing the `added_vocab` results in subsequent recursive calls.
Fixes # (issue)
[Issue #20238](https://github.com/huggingface/transformers/issues/20238)
The `self.tokenizer.get_added_vocab()` call is somewhat expensive. It takes 50 - 70ms to complete. In the initial implementation of the `token2json` method, `self.tokenizer.get_added_vocab()` is called for every XML tag. If there are 10 XML tags, it would take 500 - 700ms in total.
This PR makes the `token2json` method's run time constant at 50 - 70ms, regardless of the number of XML tags.
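For reference, the change boils down to fetching the added vocab once and threading it through the recursion instead of re-querying the tokenizer at every level. A minimal sketch of the pattern (the class below is a stand-in and the `added_vocab` keyword argument is an illustration of the approach, not the exact diff):
```python
from typing import Optional


class DonutProcessorSketch:
    """Stand-in used only to illustrate the optimization; not the real DonutProcessor."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def token2json(self, tokens: str, is_inner_value: bool = False,
                   added_vocab: Optional[dict] = None) -> dict:
        if added_vocab is None:
            # the expensive (~50-70ms) tokenizer call now runs once per top-level invocation
            added_vocab = self.tokenizer.get_added_vocab()
        output: dict = {}
        # ... parse one <s_key>...</s_key> span here, then recurse with the cached vocab:
        # child = self.token2json(inner_tokens, is_inner_value=True, added_vocab=added_vocab)
        return output
```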
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
[Issue #20238](https://github.com/huggingface/transformers/issues/20238)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
| 11-16-2022 18:27:09 | 11-16-2022 18:27:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20283). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @NielsRogge . Do I need to make any changes to this PR? If not, can I please get an approval so we can merge the PR.<|||||>Hi @sgugger and @NielsRogge , after updating the `DONUT_PRETRAINED_MODEL_NAME` value and committing, the `check_repository_consistency` was failing on CircleCI so I decided to run `git fetch upstream` and `git rebase upstream/main` by following these [instructions](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request). I may have botched the next steps. The git message said my branch diverged. I did a `git pull` and `git config pull.rebase true`, then another `git pull`. Finally, a `git push`. I did not use the `--force` flag. I'm thinking now that I should have since my PR is already open. Now, my PR includes everyone else's changes from the rebase. Do I leave these new changes in my PR or remove them? I only know the basics of Git so I'm not sure what to do next. Also, a couple of tests are now failing that are not related to my changes. Not sure how to resolve those.<|||||>Can you fix the last conflict and force-push this time? It might work and remove the huge diff. Otherwise you'll need to close this PR and open a fresh one. |
transformers | 20,282 | closed | [bnb] Let's warn users when saving 8-bit models | # What does this PR do?
This PR warns users who try to save 8-bit loaded models.
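For context, the idea is essentially a flag check at save time. A minimal sketch of what such a warning could look like (the `is_loaded_in_8bit` attribute name and the wording are assumptions, not the exact diff):
```python
import warnings


def warn_if_loaded_in_8bit(model) -> None:
    # Flag name is an assumption for illustration; the actual attribute may differ.
    if getattr(model, "is_loaded_in_8bit", False):
        warnings.warn(
            "You are calling save_pretrained() on a model that was loaded in 8-bit. "
            "The serialized 8-bit weights will not match the original full-precision checkpoint.",
            UserWarning,
        )
```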
closes #20247
closes #19480
cc @sgugger
| 11-16-2022 16:54:29 | 11-16-2022 16:54:29 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.<|||||>Ah ah made a typo in one of my suggestions, sorry. `getattr` has less `t`s ;-)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for implementing this. This fix would absolutely have prevented the confusion explained in: [#20247](https://github.com/huggingface/transformers/issues/20247).<|||||>Thank you very much @peregilk ! 🙏 |
transformers | 20,281 | closed | [bnb] We should be able to run 8-bit models on CPU & GPU | # What does this PR do?
This PR adds the possibility of using a custom device map containing CPU and GPU devices when loading and running 8-bit models. This is useful for large models, when someone wants to offload part of the model to `cpu` or to `disk`.
Also added slow tests for this feature; let me know if you think I am missing any corner case.
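With this PR applied, usage would look roughly like the sketch below. The checkpoint and memory limits are placeholders, and `infer_auto_device_map` from `accelerate` is just one convenient way to build a map that mixes GPU and CPU devices:
```python
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "facebook/opt-350m"  # placeholder checkpoint

# Cap GPU 0 so that part of the model spills over to CPU (values are illustrative).
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(checkpoint))
device_map = infer_auto_device_map(empty_model, max_memory={0: "200MB", "cpu": "10GB"})

model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map=device_map,  # mixed cpu/gpu map
    load_in_8bit=True,
)
```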
cc @sgugger
closes #19090 | 11-16-2022 16:34:16 | 11-16-2022 16:34:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20281). All of your documentation changes will be reflected on that endpoint.<|||||>Yes, this is true, maybe I can add a warning telling the user about the underlying behaviour? (weights offloaded on CPU will remain in their native precision)<|||||>Or we could just leave the error?<|||||>Closing this PR as it will bring confusion to users, we should probably wait until `bitsandbytes` supports weights offloading in 8-bit to add this feature
Thanks!<|||||>Currently we can pass `load_in_8bit_skip_modules` into `model_kwargs`. This allows certain layers/modules/weights to be kept out of the 8-bit conversion.
However, there's a problem here:
https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/utils/bitsandbytes.py#L113-L117
The `name` is not the full path to the module, e.g. for `transformer.h.0.ln1` it can be `0` or `ln1`, etc, depending on the recursion level. So currently it's impossible to ignore a specific layer or a group of layers. For example, if `transformer.h.0` is on CPU, then I don't want it (and any of its sub-layers) to be converted to 8-bit, but I can't specify this layer in `load_in_8bit_skip_modules`. Furthermore, even specifying `0` won't help, because child sub-layers (e.g. `*.0.ln1`) are processed first.
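A tiny standalone illustration of the mechanics (toy module tree, unrelated to any real model): `named_children()` only yields immediate child names, so the recursion never sees the dotted path.
```python
import torch.nn as nn

tree = nn.ModuleDict({"h": nn.ModuleDict({"0": nn.ModuleDict({"ln1": nn.LayerNorm(8)})})})
for name, _ in tree.named_children():
    print(name)  # -> "h", never "h.0.ln1"
print([path for path, _ in tree.named_modules()])  # full dotted paths only come from named_modules()
```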
Now the thing is - this PR actually almost solves this problem. It modifies `replace_8bit_linear` so that it can handle ignoring modules by the full path, not just by the immediate name.
So, what I propose: convert this PR into a different one that allows specifying a full path in `load_in_8bit_skip_modules`. This would let users manually ignore non-GPU layers when needed, without causing confusion.
https://github.com/alkatrazstudio/neodim-server/blob/93e4819d3633841ca4f42246f51e28f355ed6cf5/src/bnb_override.py#L9-L37
```python
# This is a modified version of replace_8bit_linear in transformers/utils/bitsandbytes.py
# The following changes were made:
# 1. modules_to_not_convert can contain full module paths instead of just immediate names
# 2. the default value for modules_to_not_convert is effectively a list instead of a string
# 3. "model" is renamed to "parent_module" to not confuse it with the actual model
# 4. removed redundant check for len(modules)
def replace_8bit_linear(parent_module, threshold=6.0, modules_to_not_convert=None, parent_layer_path=""):
modules_to_not_convert = ["lm_head"] if modules_to_not_convert is None else modules_to_not_convert
parent_layer_prefix = "" if parent_layer_path == "" else parent_layer_path + "."
for name, module in parent_module.named_children():
layer_path = parent_layer_prefix + name
if layer_path in modules_to_not_convert:
continue
replace_8bit_linear(module, threshold, modules_to_not_convert, layer_path)
if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
with bitsandbytes.init_empty_weights():
parent_module._modules[name] = bnb.nn.Linear8bitLt(
module.in_features,
module.out_features,
module.bias is not None,
has_fp16_weights=False,
threshold=threshold,
)
return parent_module
```
I implemented it a little bit differently than @younesbelkada did, though, and also applied some other small modifications.
Then here's the code that gets the layers that need to be ignored:
https://github.com/alkatrazstudio/neodim-server/blob/93e4819d3633841ca4f42246f51e28f355ed6cf5/src/dev_map.py#L142-L147
```python
def get_modules_to_skip_for_int8(device_map: DeviceMap) -> Optional[list[str]]:
layer_paths = [path for path, device in device_map.items() if device == DEVICE_CPU]
# adding lm_head based on comment from get_keys_to_not_convert in transformers/utils/bitsandbytes.py
# which says "for CausalLM modules we may want to keep the lm_head in full precision"
return layer_paths + ["lm_head"]
```
In my case I only offload to CPU, not disk.
These layers will then be passed as `load_in_8bit_skip_modules` to the `from_pretrained` method.
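For reference, a minimal sketch of how these pieces fit together (assuming the monkey patches above are applied and `device_map` is the map you built; the checkpoint name is just an example):

```python
from transformers import AutoModelForCausalLM

skip_modules = get_modules_to_skip_for_int8(device_map)  # helper shown in the snippet above

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",  # example checkpoint
    device_map=device_map,
    load_in_8bit=True,
    load_in_8bit_skip_modules=skip_modules,
)
```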
I tested everything and it works well. The only thing I'm not sure about is whether the new `replace_8bit_linear` is actually backwards-compatible with the old version. It's compatible when `modules_to_not_convert=["lm_head"]`, but I'm not sure about the generic use-case.<|||||>@younesbelkada good job!!! I used your PR + @z80maniac's tips and code samples and I managed to load a big model and run 8-bit inference using some GPUs and a big amount of CPU RAM. I think your PR must be merged or at least maintained in a separate branch of HF transformers, because I don't believe `bitsandbytes` will ever implement CPU offloading in their project. I read such opinions among the issues list in their project....
Everything works great although the inference was kinda slow which is expected when using both GPUs and CPU RAM?
I have 2 ideas on how to speed things up a little:
1. It looks like the fp16 weights (offloaded to the CPU RAM) get copied back and forth on every pass into the VRAM of the first? GPU to do the calculations? If true, then we may as well store those weights in 8-bit on the CPU RAM from the start in order to avoid converting from fp16 into int8 and to also decrease the CPU RAM requirements by half?
2. Another approach would be to perform the calculations on those fp16 weights directly using the available CPU cores and thus avoid copying back and forth all of the weights into GPU VRAM on every pass?
Does any of the above make any sense?<|||||>I tried @z80maniac 's suggestions and while I didn't run into any runtime errors, weights that were supposed to be in fp32 ended up in fp16 (Flan T5 automatically keeps `wo` layers in fp32, which didn't happen after applying the monkey patches). Is this expected?<|||||>> I tried @z80maniac 's suggestions and while I didn't run into any runtime errors, weights that were supposed to be in fp32 ended up in fp16 (Flan T5 automatically keeps `wo` layers in fp32, which didn't happen after applying the monkey patches). Is this expected?
How do you know that they're in fp16 and not in fp32? What dtype did you specify if any?
BTW fp16 should be OK too. There will barely be any performance degradation, especially given that most of the weights are stored in int8 anyway<|||||>I manually checked the dtypes with
```model.encoder.block[1].layer[1].DenseReluDense.wo.weight.dtype, model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype```
It stays in fp16 whether I pass in `torch_dtype=torch.float32` or leave the kwarg untouched.
Also unfortunately I don't think Flan T5 XXL can handle fp16 or 8-bit precision (see https://github.com/huggingface/transformers/pull/20683). `T5ForConditionalGeneration` has a `_keep_in_fp32_modules` attribute that's supposed to help the `wo` layers stay in fp32, but I noticed that the suggested monkey patches might be interfering.
I'll follow up a bit later because I noticed there was a bug in the code I was testing, though I don't think it should have affected whether or not the weights stayed in fp32 (in fact the bug, if tripped, would have just caused an OOM error instead).<|||||>Ah, I wasn't aware of that issue with Flan T5, though I am sure that I have loaded it with torch_dtype=torch.float16 in the past and have not noticed any serious performance degradation, though the difference may have been too subtle to notice....
What is your plan when loading it? `wo` layers in fp32 into CPU RAM and any other weights in int8 into VRAM? What does your device map look like?
BTW I am only using the code changes by @younesbelkada from this PR
@z80maniac's are a bit different though still helpful<|||||>Yeah iirc the issue is only with the XXL variant, I think the other variants should run with 16-bit/8-bit quantization just fine.
My plan is very close to what you mentioned: I was going to offload `wo` layers onto CPU RAM/disk (though I've only been experimenting with disk so far) and keep the others in int8 on GPU. I'll share my device map soon but I put myself into a bit of a situation rn. After some code changes I'm currently unable to reproduce the whole "`wo`-layers-are-offloaded-onto-disk-in-fp32 " thing, so I need to fix that first. I'll follow up once I figure that out.
I should point out though that 1) even when I did get them offloaded in fp32, I got `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, meta and cuda:0!` anyway and 2) there's a chance the problem is on my end and not with any of the monkey patches, so I really should fix a few things first.<|||||>from my experience some of those errors can be turned into just a warning (by monkey-patching the python source code) and everything will still work properly<|||||>BTW the Flan T5 XXL HF page also has examples of using fp16 and int8. It's possible that the Google team hasn't really tested its performance using quantization though...
If you give me some example prompts, I will try to reproduce the results locally?<|||||>I just tried `device_map="auto", torch_dtype=torch.float16` on multiple gpus:
`translate English to German: How old are you?`
`<pad> Wie alt sind Sie?</s>`
<|||||>> I just tried `device_map="auto", torch_dtype=torch.float16` on multiple gpus: `translate English to German: How old are you?`
>
> `<pad> Wie alt sind Sie?</s>`
Oh that's great! Personally haven't tried fp16 myself; I can attest to poor results on int8, but I was just going off of other issues/discussions regarding the performance in fp16 (e.g. https://github.com/huggingface/transformers/issues/20287#issuecomment-1317691635) (EDIT: this issue is from before the relevant patches were merged/when all the weights were in fp16).
Just curious though, could you check what dtype the fp16 Flan T5 XXL has its `wo` layers in? If I'm not mistaken, unless you manually disable it (i.e. `T5ForConditionalGeneration.__keep_fp32_modules = None`) it should set the `wo` layers to fp32. But if they're really all in fp16 now that's great!
On my end I'll still going to take a look into my issues with offloading into fp32 for completeness' sake.<|||||>I now loaded T5 XXL using int8.
`print(model.encoder.block[1].layer[1].DenseReluDense.wo.weight.dtype)` prints `torch.float32`, and
`print(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)` prints `torch.float32`.
And again got a satisfying response
`<pad> Wie alt sind Sie?</s>`
<|||||>Yup, this is actually expected. There's no problem loading the other weights in 8-bit, as long as the `wo` layers are in fp32 as shown above.
My personal problem is that `device_map="auto"` is acting strangely for me (perhaps it's calculating on the assumption that the `wo` layers are in 8-bit when in fact they'll be loaded in 32-bit, which causes the OOM error) so I've been making custom device maps in the meantime. The farthest I've gotten is the runtime error I mentioned previously regarding the different devices (one of them being a meta device), but I've yet to recreate that because I changed my code at some point and need to figure out how to get it back to how it used to be. I thought the monkey patches here might help, but I'm starting to think the breaking changes I made were done before I tried the monkey patches, resulting the persistent fp16 offloaded weights that come up even without the monkey patches.<|||||>After running a few simple prompts, I can't see any difference in int8 output when compared to fp32 or fp16. If you have a more sophisticated prompt you wish to try, lmk<|||||>> Yup, this is ...
Are you sure you're using the latest versions of `transformers, accelerate and bnb`? Perhaps, first uninstall everything you currently have, then install the above python packages and reapply the monkey patches on top of the latest versions.
Also, what is your hardware setup like? How much GPU VRAM in total, CPU RAM? You may want to try also setting the max_memory map per GPU device (but that requires some tweaking and is card / model dependent). Also, even if you get it running, keep in mind that offloading to SSD makes things reaaaaly slow. Offloading to just CPU RAM is a bit better
<|||||>I've been installing `transformers` and `accelerate` from source, but yeah I haven't tried installing `bnb` from source, I'll try that.
I'm working with around 12.7 GB CPU RAM and 15 GB VRAM (standard Colab GPU runtime, Tesla T4). I'll play around with the kwargs to `infer_auto_device_map` but I still have a hunch that `infer_auto_device_map` has no way of knowing that the `wo` modules will end up being larger than what it currently is accounting for. And yeah you're right I definitely should offload to CPU, I've just been spending the past few days trying to get the weights to be stored in fp32 first (even before the monkey patches, I previously kept on having the `wo` weights in fp16. I fixed it earlier but then ended up breaking it again).<|||||>Ah sorry after rereading my comments I think I've been unclear with what I meant by performance degradation in int8.
When we say that Flan T5 XXL can't handle 8-bit, we mean that we can't quantize every single parameter to 8-bit precision the way we traditionally would with a standard T5 (a little misleading to say that since I think `lm_head` layers also can't be in 8-bit); doing so leads to poor performance. The solution `transformers` implemented was to do standard 8-bit quantization everywhere except the `wo` layers, since those were the only layers that needed to be in 32-bit. If you do that, you get the expected full performance, as you've demonstrated with your examples.
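For what it's worth, a quick way to check which modules actually ended up in which dtype after loading with `load_in_8bit=True` (plain PyTorch, assuming `model` is already loaded):

```python
from collections import Counter

# How many parameters ended up in each dtype (e.g. torch.int8 vs. float16 vs. float32).
print(Counter(p.dtype for p in model.parameters()))

for name, param in model.named_parameters():
    if name.endswith("DenseReluDense.wo.weight"):
        print(name, param.dtype)  # these should report torch.float32 for Flan-T5-XXL
```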
My personal problem is that my VRAM isn't large enough to host the 8-bit quantized non-`wo` modules alongside the 32-bit `wo` modules (if the entire model was 8-bit quantized, I could though). Because of that I need to offload some weights. As you probably already know, the main branches of `bitsandbytes` and `transformers` are currently a little weird when it comes to using 8-bit quantization alongside offloading. You can offload but the offloaded weights won't be in 8-bit. That's why I decided to just offload the `wo` layers only, since they shouldn't be in 8-bit anyway.
Two further issues arise from this:
1. `auto_device_map=True` doesn't work well because when `infer_auto_device_map` receives `dtype=torch.int8`, it calculates device allocation as if everything will be in int8, consequently underestimating how much space a `wo` module will take up, which ultimately leads to memory errors.
2. We can avoid the above problem if we define our own device map. The furthest I've gotten with this however is the runtime error about the different devices.
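As a sketch of the manual-device-map route from point 2 (this assumes the monkey patches above, or equivalent changes, are in place; the module names are T5-style and the checkpoint is just an example, so the keys may need adjusting):

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xxl"  # example checkpoint
config = AutoConfig.from_pretrained(model_name)
with init_empty_weights():
    empty_model = AutoModelForSeq2SeqLM.from_config(config)

# Pin the fp32 `wo` projections to the CPU and put everything else on GPU 0.
device_map = {}
for param_name, _ in empty_model.named_parameters():
    module_name = param_name.rsplit(".", 1)[0]  # strip the ".weight" suffix
    device_map[module_name] = "cpu" if module_name.endswith(".wo") else 0

cpu_modules = [name for name, device in device_map.items() if device == "cpu"]

model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    device_map=device_map,
    load_in_8bit=True,
    load_in_8bit_skip_modules=cpu_modules + ["lm_head"],
)
```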
On a side note, while writing this I tried loading the model with a device map that put the `wo` layers on CPU instead of the disk. Surprisingly it fit, and unsurprisingly the session crashed (out of RAM) when I tried to use the model. On a somewhat more positive note though, before it crashed I noticed that it was back in fp32, which was nice. Still unsure what's causing the jump back-and-forth between fp32 and fp16, but I'm still looking into it.<|||||>got it. thanks.
In this PR there is a piece of code which specifies which modules to skip. Just specify `lm_head` and the `wo` layers there (+ any others as needed) or use a custom device_map. Then set `max_memory` for your single(?) GPU to the GPU VRAM minus ~1.3-2.2 GB (it will take a few attempts to get this amount to an optimal point).
From what I gather, you should be able to host around 13.4 GB of int8 weights on the Tesla GPU and the rest (in fp32) onto CPU RAM + SSD.
Flan XXL may turn out to be too big for your setup though - meaning that at least 4-5 GB will likely end up on the SSD<|||||>Thanks for this! I'll take a look into it. I appreciate your help with all of this.<|||||>Actually I was trying to use mT0 XXL for a different research project a while back, but I had difficulties just trying to load it into memory. But thanks for prompting me to take another look; I was reviewing my notebook to try to refresh my memory on what I tried and I'm only now seeing I never set `load_in_8bit` to `True`, so I'll try again soon. Thanks again!<|||||>FYI here is what the model card of `mt0-xxl` states:
```
Prompt Engineering: The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it.
For example, the prompt "Translate to English: Je t'aime" without the full stop (.) at the end, may result in the model trying to continue the French sentence.
Better prompts are e.g. "Translate to English: Je t'aime.", "Translate to English: Je t'aime. Translation:" "What is "Je t'aime." in English?", where it is clear for the model when it should answer.
Further, we recommend providing the model as much context as possible.
For example, if you want it to answer in Telugu, then tell the model, e.g. "Explain in a sentence in Telugu what is backpropagation in neural networks.".
```<|||||>cc @Muennighoff if you have any idea what might be wrong here 🙏 <|||||>@younesbelkada BTW did you read my comment above with some questions / suggestions? What do you think?
https://github.com/huggingface/transformers/pull/20281#issuecomment-1409894311<|||||>@alexconstant9108 Do you also get the same pad output for mT0 in FP32?<|||||>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>
> checkpoint = "bigscience/mt0-xxl"
>
> tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
>
> inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs)
That's very odd - Do you get the same with mt0-small? Here's what I get:
```python
!pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto", offload_folder="./")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
<pad> I love you.</s>
```<|||||>@alexconstant9108 Just wanted to let you know I'm still having trouble loading mT0 XXL into memory. Maybe the shard sizes are too big? Not sure; sorry I can't verify your results<|||||>> Yeah, mt0-small seems to work fine: `<pad> I love you.</s>` I will check the hashes of the downloaded weights of mt0-xxl when I have some time
I'm getting the same with `mt0-xxl`:
```
Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> checkpoint = "bigscience/mt0-xxl"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto", offload_folder="./")
Downloading (…)model.bin.index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50.7k/50.7k [00:00<00:00, 916kB/s]
Downloading (…)00001-of-00006.bin";: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.94G/9.94G [00:52<00:00, 191MB/s]
Downloading (…)00002-of-00006.bin";: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.87G/9.87G [04:33<00:00, 36.1MB/s]
Downloading (…)00003-of-00006.bin";: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.87G/9.87G [00:50<00:00, 194MB/s]
Downloading (…)00004-of-00006.bin";: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.0G/10.0G [00:51<00:00, 195MB/s]
Downloading (…)00005-of-00006.bin";: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.0G/10.0G [00:52<00:00, 191MB/s]
Downloading (…)00006-of-00006.bin";: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.11G/6.11G [00:30<00:00, 198MB/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:35<00:00, 5.96s/it]
>>> inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
>>> outputs = model.generate(inputs)
>>> print(tokenizer.decode(outputs[0]))
<pad> I love you.</s>
```
Can you double-check your environment?
Mine looks like:
```
accelerate-0.16.0
transformers-4.26.0
tokenizers-0.13.2
pytorch-1.13.1
CUDA Version: 11.6 (A100 80GB)
```
<|||||>It looks like your `pytorch_model-00003-of-00006.bin` has a different sha256 than the uploaded one: https://huggingface.co/bigscience/mt0-xxl/blob/main/pytorch_model-00003-of-00006.bin
Mine & the uploaded ones are:
```
295a276775d79359cfd243bd93c9e2c408a8e33718e5bee1d05625f026af6175
c21533a6182886bec48cd0190952b3c5e71224873234135c2754f7c81d02ac82
62cc874eb7f5cfa6fcbde4a19bab7de1f7bf8b47f0f01c45713927115c85a153
36b2a5945f7c037b99eaf5ed891fc158b23a791b92042861a5298b0c8ec224be
3f769732a1c4ba3a9cbd9ea1c2701ade3cdf2a35f73e75ac77d0c26788a5d88f
92679f99746d0e1082d7407091cb7f2a588d49b9bf13724f706e8912f86c5786
```<|||||>> After running a few simple prompts, I can't see any difference in int8 output when compared to fp32 or fp16. If you have a more sophisticated prompt you wish to try, lmk
Sorry to bump this thread @alexconstant9108 but would you mind running this prompt?:
```
Answer the following question by reasoning step-by-step.\nThe cafeteria had 23 apples. If they used 20 for lunch and bought 6 more, how many apples do they have?
```
Splitting the model strictly between GPU and CPU (no disk involved) seemed to fix my problems in terms of getting the `wo` layers in fp32. While I was able to get the expected output for `translate English to German: How old are you?`, my output for the above was unfortunately
```
<pad> The cafeteria has 23 - 20 = 3 apples left. They have 3 + 6 = 7 apples. Therefore, the answer is 7.</s>
```<|||||>Hi @alexconstant9108 ,
Thanks for your interest in this feature. I propose to slightly refactor the API in https://github.com/huggingface/transformers/pull/21579 and enable the feature you have asked for! Feel free to share any thoughts there<|||||>@ryan-caesar-ramos after loading all weights as fp32, I also get the same output as you:
`print(tokenizer.decode(outputs[0]))` gives
`<pad> The cafeteria has 23 - 20 = 3 apples left. They have 3 + 6 = 7 apples. Therefore, the answer is 7.</s>`
LLMs generally suck even at basic math (arithmetic included), so the above error is not surprising, especially for a small model like flan-t5-xxl. I think the only remedy for this issue is using a different architecture (not transformer based) or adding the ability for the model to call external tools, e.g. a calculator app. I haven't tried yet but I suspect that even ChatGPT will mess up a puzzle like the above<|||||>@ryan-caesar-ramos you may want to give FlexGen a try when loading big models: https://github.com/FMInference/FlexGen<|||||>Thanks! Will check it out
transformers | 20,280 | closed | [Proposal] Breaking change `zero-shot-object-detection` for improved consistency. | # What does this PR do?
This is a proposal to modify the output of `zero-shot-object-detection`
to provide better alignment with other pipelines.
The output is now strictly the same as `object-detection` whereas before
it would output lists of lists.
The name `candidate_labels` is used throughout for consistency with
other `zero-shot` pipelines.
The pipeline is changed to `ChunkPipeline` to support batching cleanly.
This removes all the lists and list-of-lists shenanigans; it's now a
matter of the base pipeline handling all this, not this specific one.
### **Breaking change**
It did remove potentially complex calls like `pipe(images = [image1, image2],
text_queries=[candidates1, candidates2])` to support only
`pipe([{"image": image1, "candidate_labels": candidates1}, {"image": image2, "candidate_labels": candidates2}])`
when dealing with lists and/or datasets.
We could keep them, but it will add a lot of complexity to the code
base; since the pipeline is rather young, I'd rather break to keep the
code simpler, but we can revert this.
### **Breaking change**
The name of the argument is now `image` instead of
`images` since it expects by default only 1 image. This is revertable
like the previous one.
### **Breaking change**
The types is now simplified and flattened:
`pipe(inputs) == [{**object1}, {**object2}]`
instead of the previous
`pipe(inputs) == [[{**object1}, {**object1}], [{**object2}]]`
Where the different instances would be grouped by candidate labels
within lists.
IMHO this is not really desirable, since it would output empty lists and
is only adding superfluous indirection compared to
`object-detection`.
It is relatively change free, meaning the results are the same for large models.
It does change computation however since now the batching is handled by the pipeline
itself. It **did** change the results for the small models so there
seems to be a real difference in how the model handles this. Since it didn't affect any of the large tests I think it's acceptable.
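To make the proposed interface concrete, usage would look roughly like this (a sketch based on the description above; the checkpoint and image paths are examples):

```python
from transformers import pipeline

detector = pipeline(task="zero-shot-object-detection", model="google/owlvit-base-patch32")

# Single image: a flat list of detections, same shape as `object-detection` output.
out = detector(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["cat", "remote control"],
)
# out == [{"score": ..., "label": "cat", "box": {...}}, ...]

# Several images: one dict per input instead of parallel lists.
outs = detector(
    [
        {"image": "image1.jpg", "candidate_labels": ["cat"]},
        {"image": "image2.jpg", "candidate_labels": ["dog", "bicycle"]},
    ]
)
```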
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 15:30:14 | 11-16-2022 15:30:14 | @sahamrit Happy to hear your thoughts on this since you are the original implementor.
I'm proposing many breaking changes here, which are mostly due to me not enforcing them enough during the original PR (well, I think it was better to get the pipeline out sooner with some flaws rather than too late).
Since I started rewriting some docs I found the inconsistencies quite harmful in how to describe pipelines to users imo.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I will wait a few days to give @sahamrit a chance to give his opinion before merging. |
transformers | 20,279 | closed | [SegFormer] Add support for segmentation masks with one label | # What does this PR do?
This PR makes it possible to fine-tune SegFormer in case you have a mask containing only a single value, i.e. your mask could look like [[255, 0], [0, 255]]. In this case, config.num_labels = 1 and the ignore index is 255.
If this works fine, then we can add it to any other model supported by `AutoModelForSemanticSegmentation`.
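Roughly, the single-label case boils down to a binary cross-entropy loss with the ignore index masked out. A simplified sketch of the idea (not the exact library code; shapes and label conventions are illustrative):

```python
from torch import nn

def binary_segmentation_loss(upsampled_logits, labels, ignore_index=255):
    # upsampled_logits: (batch, 1, height, width) raw logits
    # labels: (batch, height, width) binary labels, with `ignore_index` marking pixels to skip
    valid_mask = ((labels >= 0) & (labels != ignore_index)).float()
    loss_fct = nn.BCEWithLogitsLoss(reduction="none")
    loss = loss_fct(upsampled_logits.squeeze(1), labels.float())
    return (loss * valid_mask).mean()
```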
| 11-16-2022 15:21:14 | 11-16-2022 15:21:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>How soon will this PR be done? And why using BCEWithLogitsLoss not Dice loss?<|||||>when is this going to be closed. Our team would like to use it<|||||>when this becomes active, how would one use it.
thank you nielsrogge for sending for review.<|||||>hello @NielsRogge
hope I didn't disturb
I tried to peel off the classifier in pytorch and change the output channels to one, then manually compute the loss instead of getting it from the segformer huggingface object, so basically I just got the output and then did a dice loss myself.
So I tried to write binary segmentation myself, kinda, and I started to get a bunch of negative values.
Any idea why that happened, and how I could fix it? I mean, I replaced the last conv2d layer.<|||||>Is this now automatically using dice loss when we set `num_labels = 1`? Maybe I missed it but it seems the documentation doesn't explain it.<|||||>HHi @aegonwolf,
When config.num_labels = 1, the binary cross-entropy loss is used, as can be seen here: https://github.com/huggingface/transformers/blob/1689aea73346816b936b84932e12b774974e61a6/src/transformers/models/segformer/modeling_segformer.py#L813-L817. |
transformers | 20,278 | closed | Image transforms functionality used instead | # What does this PR do?
* Removes reimplementations of `center_to_corners` format
* Removes `# Copied from` statements and imports directly instead
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 11-16-2022 15:02:49 | 11-16-2022 15:02:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20278). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20278). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,277 | closed | Generate: general TF XLA constrastive search are now slow tests | # What does this PR do?
TF's XLA contrastive search tests were time-consuming because of the conversion to XLA, so this PR moves them to more powerful slow tests.
Making these tests faster would imply creating smaller model configs for each model, which seems like overkill. | 11-16-2022 14:45:20 | 11-16-2022 14:45:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,276 | closed | Fix result saving errors of pytorch examples | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20079. This PR fixed potential `KeyError`s on saving results in most PyTorch examples, by prefixing metric keys instead of directly accessing metrics by key.
In addition, this PR replaced the argument `--max_length` with `--max_seq_length` in `run_swag_no_trainer.py`, to make the no-trainer version consistent with the trainer version and the README instructions.
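For illustration, the pattern change looks roughly like this (hypothetical names; not the exact diff):

```python
import json

eval_metric = {"accuracy": 0.87, "f1": 0.91}  # whatever the metric object actually returns

# Before: indexing a specific key raises KeyError if that metric is absent.
# results = {"eval_accuracy": eval_metric["accuracy"]}

# After: prefix whichever keys are actually present.
results = {f"eval_{k}": v for k, v in eval_metric.items()}

with open("all_results.json", "w") as f:
    json.dump(results, f)
```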
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/20079
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-16-2022 14:30:24 | 11-16-2022 14:30:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20276). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,275 | closed | Convert LongT5 to ONNX | ### System Info
transformers-cli env
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- onnxruntime-gpu: 1.13.1
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
ONNX model conversion: @morgan
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This command line:
```
python -m transformers.onnx --model pszemraj/long-t5-tglobal-base-16384-book-summary --feature seq2seq-lm-with-past --preprocessor tokenizer --framework pt .
```
Gives me the following error during export validation:
```
Validating ONNX model...
Floating point exception (core dumped)
```
### Expected behavior
Having a usable and validated ONNX model. | 11-16-2022 14:07:41 | 11-16-2022 14:07:41 | cc @lewtun <|||||>Same silent issue on longformer when `global_attention_mask` is zero everywhere and running with ONNX Runtime. When at least one value is non-zero, it's fine.<|||||>Anything I can do on my side to help fixing this issue? :)<|||||>@jplu Do you have any warning during the onnx conversion? Things like `TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!`? In my case there were control flows in the model, leading to the issue.<|||||>@fxmarty Yes I do have several of these `TracerWarning`:
```
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:180: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
_global_block_ids_lower_bound = torch.tensor(-1.0, dtype=global_block_ids.dtype, device=global_block_ids.device)
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:188: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
num_globals = seq_len // global_block_size
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:190: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_globals > 0:
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:217: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
block_ids >= 0, torch.tensor(global_seq_len, dtype=block_ids.dtype, device=block_ids.device)
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:217: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
block_ids >= 0, torch.tensor(global_seq_len, dtype=block_ids.dtype, device=block_ids.device)
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:84: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.shape[dim] % block_len != 0:
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:67: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not all(x.shape):
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:86: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
num_blocks = x.shape[dim] // block_len
/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:89: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if 0 in output_shape:
/data/conda/beir/lib/python3.8/site-packages/transformers/modeling_utils.py:769: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
/data/conda/beir/lib/python3.8/site-packages/transformers/modeling_utils.py:781: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
Warning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.
```<|||||>Could be a duplicate https://github.com/huggingface/transformers/issues/19297
In my opinion, you are falling into the issue that the example input provided during the conversion for `torch.onnx.convert` takes a path different than the one you wish to use during inference with the exported ONNX.
A good reference is https://pytorch.org/docs/stable/generated/torch.jit.trace.html .
We should export along the `model.onnx` probably an `onnx_config.json` detailing which cases are supported by the exported ONNX.<|||||>OK I see. Should-I test to change the input in the export code in order to see if it goes better with a more appropriate input? How did you manage to convert and validate this model as it appears in the list of available models to be exported? You have tested on the official Google one?<|||||>Pinging @echarlaix , is longt5 the model you were dealing with? To me given the trace warnings, especially `num_globals = seq_len // global_block_size` that is a constant,if you want a quick and dirty fix you can indeed change the input length in the export code to match your input length for your use case using the .onnx.
The validation for onnx conversion is currently lacking, as it tests only on a very similar sequence length (typically 9 while the export is done with a sequence length = 8, see https://github.com/huggingface/transformers/blob/v4.24-release/src/transformers/onnx/convert.py#L382-L397 ). So if later on there is controlflow dependent on the sequence length, a single path is recorded during the export and you are screwed. We are looking for a clean solution for this.
edit: although here `global_block_size = 16`, and `seq_len = 8` then `seq_len = 9` so I would expect the path to be the same in the two warnings concerning `num_globals`. The issue could come from an other of the following warnings.<|||||>> Pinging @echarlaix , is longt5 the model you were dealing with?
Yes we had the same [issue](https://github.com/huggingface/optimum/issues/285#issuecomment-1191409005) for LongT5 model with transient-global attention (models with local attention were not causing problem). Now transferred to huggingface/transformers#18243.
<|||||>I have tried multiple `seq_len` for the input. As long as the value is not too lower than the "original" value it seems working, but if it goes too lower, I start to get a big "max absolute difference" and the validation doesn't pass. So indeed, it is not really usable and seems too unstable as you said @fxmarty. Thanks a lot anyway for your lights on this, I let the issue open, don't hesitate to ping me here if I can do something to help fixing on my side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>not stale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>not stale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>still not<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,274 | closed | Adding `zero-shot-object-detection` pipeline doctest. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 13:34:29 | 11-16-2022 13:34:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,273 | closed | [Doctest] Add configuration_deformable_detr.py | # What does this PR do?
Adds configuration_deformable_detr.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-16-2022 13:26:40 | 11-16-2022 13:26:40 | |
transformers | 20,272 | closed | Adding doctest for `zero-shot-image-classification` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 13:15:33 | 11-16-2022 13:15:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20272). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,271 | closed | Add TF protein notebook to notebooks doc | Add a link to the new TF protein LM notebook | 11-16-2022 12:55:54 | 11-16-2022 12:55:54 | |
transformers | 20,270 | closed | Add StdScaler for time series Transformer model | # What does this PR do?
- [x] Add `loc` and `scale` outputs from current scalers and use both as static real-valued covariates
- [ ] double check training/inference works as usual
- [ ] add the StdScaler
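As a rough illustration of the "add the StdScaler" item above (a sketch of the idea only, not necessarily the final implementation):

```python
import torch
from torch import nn

class StdScaler(nn.Module):
    """Standardizes data by the weighted mean and std along `dim`, returning (scaled_data, loc, scale)."""

    def __init__(self, dim: int = 1, keepdim: bool = True, minimum_scale: float = 1e-5):
        super().__init__()
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale

    def forward(self, data: torch.Tensor, observed_indicator: torch.Tensor):
        # Count of observed values per series, clamped to avoid division by zero.
        denominator = observed_indicator.sum(self.dim, keepdim=self.keepdim).clamp_min(1.0)
        loc = (data * observed_indicator).sum(self.dim, keepdim=self.keepdim) / denominator
        variance = (((data - loc) * observed_indicator) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        scale = torch.sqrt(variance + self.minimum_scale)
        return (data - loc) / scale, loc, scale
```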
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-16-2022 11:50:11 | 11-16-2022 11:50:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>thanks! will add... moving this to draft for now as i have to check this before adding<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20270). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20270). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,269 | closed | cannot load bart encoder | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Failed to load BartEncoder, BartDecoder
```python
from transformers.models.bart.modeling_bart import BartEncoder, BartDecoder
```
Error
```
ImportError: cannot import name 'SAFE_WEIGHTS_INDEX_NAME' from 'transformers.utils' (/home/usr/miniconda3/envs/mkg/lib/python3.10/site-packages/transformers/utils/__init__.py)
```
### Expected behavior
Able to load BartEncoder and BartDecoder | 11-16-2022 11:31:07 | 11-16-2022 11:31:07 | @icedpanda I am not able to reproduce this error, Can you try to reproduce this in a colab notebook and share the notebook here ? <|||||>was working on the terminal but not in the notebook. Works now after reinstalling the notebook. |