repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 16,853 | closed | t5: add conversion script for T5X to FLAX | Hi,
this PR adds the (long awaited) conversion script from T5X to HF FLAX, previously available in this [GIST](https://gist.github.com/stefan-it/30e4998ef159f33696e377a46f699d9f).
This conversion script allows converting models that were trained with [T5X](https://github.com/google-research/t5x) to a FLAX model, so they can be used with Transformers.
Script was road-tested and performance was compared against official T5 (v1.1) checkpoint (because T5 checkpoints can be converted into T5X checkpoints). More information can be found in [this issue](https://github.com/google-research/t5x/issues/198) in T5X upstream repo. | 04-20-2022 11:51:24 | 04-20-2022 11:51:24 | /cc @patrickvonplaten @patil-suraj :hugs: <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I have just tested it with v1.0 and there's a problem with the lm head:
```
1.0: dict_keys(['decoder_norm', 'layers_0', 'layers_1', 'layers_2', 'layers_3', 'layers_4', 'layers_5', 'relpos_bias'])
1.1: dict_keys(['decoder_norm', 'layers_0', 'layers_1', 'layers_2', 'layers_3', 'layers_4', 'layers_5', 'layers_6', 'layers_7', 'logits_dense', 'relpos_bias'])
```
So `logits_dense` is missing in the 1.0 checkpoints and the conversion script can't handle it. I will try to find a solution here and post a short conversion-pipeline guideline soon.<|||||>v1.0 checkpoints can also be converted now. Here are some checks (with final evaluation on downstream task):
Requirements:
```bash
pip3 install git+https://github.com/google-research/t5x.git
pip3 install --upgrade tensorstore==0.1.13
```
Pinned `tensorstore` version fixes a strange `zarr` error when loading the checkpoints.
Then clone 1.0 and 1.1 T5X checkpoints:
```bash
gsutil -o GSUtil:parallel_composite_upload_threshold=150M -m cp -r -n gs://t5-data/pretrained_models/t5x/t5_small .
gsutil -o GSUtil:parallel_composite_upload_threshold=150M -m cp -r -n gs://t5-data/pretrained_models/t5x/t5_1_1_small .
```
Transformer configs can be downloaded from model hub:
```bash
curl --silent https://huggingface.co/t5-small/resolve/main/config.json > config_1_0.json
curl --silent https://huggingface.co/google/t5-v1_1-small/resolve/main/config.json > config_1_1.json
```
Models can be converted via:
```bash
python3 convert_t5x_checkpoint_to_flax.py --t5x_checkpoint_path ./t5_small --config_name ./config_1_0.json --flax_dump_folder_path ./t5x_1_0_exported
python3 convert_t5x_checkpoint_to_flax.py --t5x_checkpoint_path ./t5_1_1_small --config_name ./config_1_1.json --flax_dump_folder_path ./t5x_1_1_exported
```
Then I ran downstream evaluation with the original T5 models (from model hub) and the converted ones on the summarization task, e.g.:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ./t5_1_0_original \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--num_train_epochs 1
```
and:
```bash
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path /mnt/transformers/src/transformers/models/t5/t5x_1_0_exported \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ./t5_1_0_converted \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--num_train_epochs 1
```
And compared the training losses. Training losses are identical (original vs. converted model).<|||||>Hi @stancld , thanks for adding the longt5 variant :hugs: Could you also add the patch for v1.0 checkpoints from this commit:
https://github.com/huggingface/transformers/pull/16853/commits/4f36d429fa03fd85011a0b6bd5f5b04c9077b852
Would be awesome :)<|||||>Does this script support the transformation of XL or XXL models?<|||||>> Does this script support the transformation of XL or XXL models?
I generated the following files in /content/flan_t5x_xl_exported, and then used the code below to load them, which raised an error. How do I solve it?
```python
model = T5ForConditionalGeneration.from_pretrained("/content/flan_t5x_xl_exported", from_flax=True)
# Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in
# directory /content/flan_t5x_xl_exported.
```
/content/flan_t5x_xl_exported:
"
*model-00001-of-00002.msgpack
*model-00002-of-00002.msgpack
*model.msgpack.index.json
config.json
"<|||||>@joytianya Currently, you cannot use cross-platform loading when the large model is split into multiple files. But this feature is planned soon -- please see #19965 <|||||>@stefan-it
@stancld
Does the script support converting T5X checkpoints into PyTorch?
If not, is there any other solution?
|
transformers | 16,852 | closed | facebook/detr-resnet output tensor contains all nan value | ### System Info
```shell
Ubuntu 20.04 - aarch64
Pytorch Version: 1.11.0 (from source)
Transformers: 4.18.0
Torchvision: 0.12.9 (from source)
Model: facebook/detr-resnet-50
```
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I ran the following code :
```
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
from this page: https://huggingface.co/facebook/detr-resnet-50
But the output tensor for the bounding boxes all contains nan values:
```
tensor([[[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan],
[nan, nan, nan, nan]]], grad_fn=<SigmoidBackward0>)
```
### Expected behavior
I looked around the web and this is the closest answer I found:
https://forums.developer.nvidia.com/t/pytorch-1-7-nan-results/160462/7
It claims that the machine I'm running on (Jetson Nano) doesn't support concurrent access to a memory buffer, and if the CPU and GPU try to access the same buffer at the same time (e.g. adjacent layers but different processors), the returned data will be undefined.
Just want to know what I'm missing and how I could debug this further. I've tested the same code example on an x86 machine with Cuda and its output is as expected. Thank you!
| 04-20-2022 11:02:56 | 04-20-2022 11:02:56 | Hi,
Sorry I can't really help here, I don't have any experience with machines like Jetson Nano :( <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,851 | closed | New features for CodeParrot training script | This PR adds some features to CodeParrot training script.
- Add TFLOPS to logging (a rough sketch of one possible estimate is included after this list)
- Use Accelerate checkpointing and tracking for Wandb and TensorBoard
- Fix gradient accumulation for DDP (https://github.com/huggingface/accelerate/pull/106)
- Scale loss appropriately for the last batch
- Fix typo in the README
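For readers curious about the TFLOPS logging item above, here is a rough sketch of one way such an estimate can be computed. It uses the common ~6 · N FLOPs-per-token approximation for decoder-only models and is not necessarily the exact formula used in this PR.
```python
def estimated_tflops(num_parameters: int, tokens_per_second: float) -> float:
    """Very rough training throughput estimate.

    Uses the common ~6 * N FLOPs-per-token approximation for decoder-only
    models (forward + backward), ignoring attention-specific terms.
    """
    return 6 * num_parameters * tokens_per_second / 1e12


# Example: a 1.5B-parameter model processing 50k tokens/s across all GPUs
# corresponds to roughly 450 TFLOPS of sustained compute.
print(estimated_tflops(1_500_000_000, 50_000))
```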
cc @lvwerra @LysandreJik | 04-20-2022 10:49:10 | 04-20-2022 10:49:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,850 | closed | documentation: some minor clean up | # What does this PR do?
This cleans up some minor documentation changes unrelated to DeBERTa-v2 in PR #15529, so I'm opening this PR just for repo history. Please let me know if this is alright.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/pull/15529
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-20-2022 10:37:25 | 04-20-2022 10:37:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you! |
transformers | 16,849 | closed | Could not load model deepset/minilm-uncased-squad2 | ### System Info
```shell
I'm trying to load the model "deepset/minilm-uncased-squad2".
On my laptop (Ubuntu 20.04 LTS), there's no problem.
This happens when I run the exact same code on a server running Linux (see version below).
Here's the output of transformer-cli env command:
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-1021-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No (I'm manually filling this in)
- Using distributed or parallel set-up in script?: No (I'm manually filling this in)
Here's the error message:
ValueError: Could not load model deepset/minilm-uncased-squad2 with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForQuestionAnswering'>, <class 'transformers.models.bert.modeling_tf_bert.TFBertForQuestionAnswering'>
On this [github issue](https://github.com/huggingface/transformers/issues/353) they point to a memory failure.
However, to solve this, I had to download the pytorch version for CPU!
```
### Who can help?
@Rocketknight1, @LysandreJik, @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model_checkpoint = "deepset/minilm-uncased-squad2"
device = -1
model_checkpoint = pipeline('question-answering', model=model_checkpoint,
tokenizer=model_checkpoint,
device=device)
### Expected behavior
```shell
No error output, and correct loading of the model.
```
| 04-20-2022 10:10:38 | 04-20-2022 10:10:38 | I suspect the cause of this is that the `deepset/roberta-base-squad2` model only exists as a PyTorch model. When you call `pipeline()`, it will select the framework (TF or PyTorch) based on what is installed on your machine. If your laptop has both TF and PyTorch installed, then it will probably select PyTorch and load the model correctly, but if the server only has TensorFlow then it will fail to load the model. To resolve this, you can either load the model in TF with `from_pt=True` and save as personal copy as a TF model with `save_pretrained` and `push_to_hub`, or you can switch to using PyTorch for the pipeline.<|||||>Exactly that.
And looking at the error
```
with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForQuestionAnswering'>, <class 'transformers.models.bert.modeling_tf_bert.TFBertForQuestionAnswering'>
```
I can tell you that for some reason your environment could not see `AutoModelForQuestionAnswering` (the PyTorch version of the model). So it's probably not linked to GPU vs CPU but just that the GPU install was not functional somehow.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still having this issue with model `tiiuae/falcon-40b-instruct`
I copied the sample code from the example.
ERROR:
`ValueError: Could not load model tiiuae/falcon-40b-instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).`<|||||>What hardware do you have ? Loading `tiiuae/falcon-40b-instruct` will not work on most GPUs.
You need to do some sharding, either using `accelerate` `pipeline(...., device_map="auto")` which should work very easily.
Or doing something a bit more fancy like TP sharding to get performance out of it.<|||||>I am on MacBook Air, Apple M2, 16GB RAM, 500GB+ disk available.
Do you have the code samples for accelerate or TP sharding?<|||||>```
pipeline(...., device_map="auto")
```
This should be enough for accelerate.
On M2 I think it's a bit tight for falcon-40b, you will most likely get a lot of offloading so quite slow inference (and TP cannot help with that)
<|||||>Thanks @Narsil. Still getting an error
```
Downloading (…)l-00001-of-00002.bin:  65%|██████▌    | 6.43G/9.95G [33:12<18:12, 3.23MB/s]
Traceback (most recent call last):
File "/Users/martinesdaniel/dev/yt-transcript/falcon.py", line 10, in <module>
pipeline = transformers.pipeline(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 788, in pipeline
framework, model = infer_framework_load_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/pipelines/base.py", line 278, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model tiiuae/falcon-7b-instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).
```
Here is my code:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
tokenizer.save_pretrained("./model/")
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.float32,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Could this be internet bandwidth?<|||||>> ValueError: Could not load model tiiuae/falcon-7b-instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).
That's the issue, but I'm not sure what's happening
Can you try :
```python
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
tokenizer.save_pretrained("./model/")
model = AutoModelForCausalLM.from_pretrained(model, trust_remote_code=True, device_map="auto", torch_dtype=torch.float32)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
```
Try removing the `torch_dtype=torch.float32` too, these models are meant to be used in half precision. |
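Putting the suggestions in this thread together, a minimal sketch (untested here; the offload folder name is illustrative) that passes the checkpoint name positionally and lets `accelerate` place and offload the weights:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" shards/offloads via accelerate; offload_folder gives it
# somewhere to spill weights that don't fit in RAM. torch_dtype="auto" keeps
# the checkpoint's native (half) precision instead of forcing float32.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype="auto",
    offload_folder="offload",  # illustrative path
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello, Girafatron!", max_new_tokens=20)[0]["generated_text"])
```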
transformers | 16,848 | closed | Add YOLOS | # What does this PR do?
This PR adds [YOLOS](https://github.com/hustvl/YOLOS), an awesome and simple object detector.
YOLOS is just a single Transformer encoder (ViT), trained using DETR's objective.
For now, I've used "vit" as `base_model_prefix`, in order to easily load weights from ViT and ViTMAE checkpoints on the hub. | 04-20-2022 09:48:29 | 04-20-2022 09:48:29 | Addressed most comments. The remaining comments are about badly formatted docstrings, however these are all copied from DETR (so I can't change them due to `#Copied from` statements). Is it ok if I address these docstrings in a separate PR for both models?
Also pinging @Narsil as the pipeline test for YOLOS is failing. This is because YOLOS doesn't take `pixel_mask` as input, whereas DETR does. This makes YOLOS fail for the object detection pipeline.<|||||>I'd advocate to make the changes in docstrings in DETR to be propagated to YOLOS in this PR, just to make sure we don't forget.<|||||>> Also pinging @Narsil as the pipeline test for YOLOS is failing. This is because YOLOS doesn't take pixel_mask as input, whereas DETR does. This makes YOLOS fail for the object detection pipeline.
Then the feature_extractor should not output them. The image pipelines are pretty simple and roughly just do
`model(**feature_extractor(image))` so if the feature extractor only outputs what's needed then it should work.
That or `pixel_mask` should be handled (doesn't seem to be making sense for this model reading your comment).<|||||>> Then the feature_extractor should not output them.
Yeah the problem is, YOLOS uses the same feature extractor as DETR, which outputs both `pixel_values` and `pixel_mask`. Hence, I've just added `("yolos", "DetrFeatureExtractor")` to the Auto Feature Extractor API.
I think the easiest here is to add `pixel_mask=None` to the forward of YOLOS, in order to make `model(**feature_extractor(image))` work.<|||||>> I think the easiest here is to add pixel_mask=None to the forward of YOLOS, in order to make model(**feature_extractor(image)) work.
We're not adding an argument that will be ignored all the time, that's just confusing to users. Especially if they end up passing one and don't get why it's not used.
If the feature extractor should not return `pixel_mask` then either use a class attribute on `DetrFeatureExtractor` to make it not return that in certain cases, or create a `YolosFeatureExtractor` that removes that field from the output of the feature extractor.<|||||>Ok so I created a new YolosFeatureExtractor, however the pipeline test is still failing:
```
def _call_impl(self, *input, **kwargs):
forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
# If we don't have any hooks, we want to skip the rest of the logic in
# this function, and just call forward.
if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
or _global_forward_hooks or _global_forward_pre_hooks):
> return forward_call(*input, **kwargs)
E TypeError: forward() got an unexpected keyword argument 'pixel_mask'
```
@Narsil could you help me debug this? It's weird cause `YolosFeatureExtractor` doesn't create a pixel mask. Also, I added doc tests which are passing.<|||||>> @Narsil could you help me debug this? It's weird cause YolosFeatureExtractor doesn't create a pixel mask. Also, I added doc tests which are passing.
I have checked and the reason is that the tested Feature extractor is actually a detr one, not a Yolo one:
https://github.com/huggingface/transformers/pull/16848/files#diff-fcbe32a3a065f97b00f1c242ecd45858b8d5680a2437b65af183eb0c439e2be9R347
The pipeline tests rely on ModelTester to create the base objects
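For context, a minimal sketch of the kind of subclass discussed above (an illustration only, not necessarily the implementation that ended up in the PR):
```python
from transformers import DetrFeatureExtractor


class YolosFeatureExtractor(DetrFeatureExtractor):
    """Illustrative: reuse DETR's preprocessing but drop `pixel_mask`,
    since the YOLOS forward pass does not accept it."""

    def __call__(self, *args, **kwargs):
        encoding = super().__call__(*args, **kwargs)
        encoding.pop("pixel_mask", None)
        return encoding
```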
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated, merging. |
transformers | 16,847 | closed | get the error "_forward_unimplemented() got an unexpected keyword argument 'labels'" | ### System Info
```shell
- `transformers` version: 2.11.0
- Platform: Linux-5.4.0-81-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.7.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
class Seq2SeqModel(nn.Module):
def __init__(self, device):
super(Seq2SeqModel, self).__init__()
self.device = device
self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
self.model = BertModel.from_pretrained("bert-base-uncased")
self.decoder = BertLMPredictionHead(self.model.embeddings.word_embeddings.weight)
def farward(self, indexed_tokens, indexed_segment, labels, seq_len = 100):
ones = torch.ones((1, 1, seq_len, seq_len), dtype=torch.float32, device=self.device)
a_mask = ones.tril()
s_ex12 = indexed_segment.unsqueeze(1).unsqueeze(2).float()
s_ex13 = indexed_segment.unsqueeze(1).unsqueeze(3).float()
a_mask = (1.0 - s_ex12) * (1.0 - s_ex13) + s_ex13 * a_mask
enc_layers, _ = self.model(input_ids = indexed_tokens, token_type_ids=indexed_segment,attention_mask=a_mask,
output_all_encoded_layers=True)
squence_out = enc_layers[-1]
```
When I train the model defined above, the training code looks like this:
```
self.bert_model = Seq2SeqModel(self.device)
_, _ = self.bert_model(token_ids,
token_type_ids,
labels=target_ids,
device=self.device)
```
it returns the error:
` File "/home/fanni/anaconda3/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: _forward_unimplemented() got an unexpected keyword argument 'labels'`
I clearly defined labels and redefined the farward function, so why does it return this error? Looking forward to your reply, thank you!
### Expected behavior
```shell
I should be able to override the 'farward' function and not be affected by the original parameters
```
| 04-20-2022 09:42:35 | 04-20-2022 09:42:35 | It's my error, I write "forward" to "farward" |
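For anyone landing on this thread from a search: the error comes from the method being named `farward`, so `nn.Module` falls back to its unimplemented default `forward`. A minimal sketch of the fix:
```python
import torch.nn as nn


class Seq2SeqModel(nn.Module):
    def __init__(self, device):
        super().__init__()
        self.device = device
        # ... build tokenizer / encoder / decoder as in the original snippet ...

    # Naming this method `forward` (not `farward`) is what lets
    # `nn.Module.__call__` dispatch `model(tokens, segments, labels=...)` here.
    def forward(self, indexed_tokens, indexed_segment, labels=None, seq_len=100):
        ...  # original masking / encoding logic goes here
```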
transformers | 16,846 | closed | Range Error for BERT Masked Language Modeling on IMDB | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1ZpYRkJVMF5r3MukUheEFtgDvqax4YCxM?usp=sharing
### Expected behavior
```shell
Evaluation to complete and give me a perplexity score, as it does [here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section3_tf.ipynb)
```
| 04-20-2022 07:34:24 | 04-20-2022 07:34:24 | Hi @Jadiker π In your notebook, after the `tokenize_and_chunk` cell (the 2nd one, there are 2) we can see a warning that explains the error: `Token indices sequence length is longer than the specified maximum sequence length for this model (521 > 512). Running this sequence through the model will result in indexing errors`.
If you add `truncation=True` in the tokenizer call in that cell you should be able to solve the problem. Let me know if it worked :)<|||||>Nope, same error: `indices[15,32] = -9223372036854775808 is not in [0, 28996)`. (I've edited the notebook with the change.)<|||||>@Jadiker Thank you for the update :) The problem seems to raise from your custom tokenization function, which is likely not returning the correct data format. See [this](https://colab.research.google.com/drive/1oZ3CMFXOaBZYRVaNlkqEMnQrcwqr7hS_?usp=sharing) notebook, which successfully runs your code if we skip `tokenize_and_chunk`. Inside `tokenize_and_chunk`, you append `chunks.append(all_input_ids[idx: idx + context_length])`, which would explain the indexing errors.
We also reserve these GitHub issues for bugs in the repository and/or feature requests. For any other requests, like issues in your custom code, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€ I'm closing this issue, but feel free to reopen with queries that fit the criteria I described.
<|||||>@gante Thanks for your time and for the information! I really appreciate it.
Two comments:
1. **All the code (including the `tokenize_and_chunk` function) in the notebook is directly from Hugging Face.** It comes from [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/videos/mlm_processing.ipynb) which is linked in [this tutorial](https://www.youtube.com/watch?v=8PmhEIXhBvI) on data processing. The _only_ thing I have done is added code _after_ the data processing in order to actually train a model on the processed data. (And the code for training the model comes from [this Hugging Face tutorial](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section3_tf.ipynb).)
Given that, should I still have posted on the forum first? If the tutorials for data processing and model training can't be combined, how is one supposed to train a model on the processed data? It seemed like something that should be fixed in the code, rather than just discussed on the forum.
2. I don't believe the notebook you linked to is shared with me.
Thanks again for engaging with this!<|||||>> I don't believe the notebook you linked to is shared with me.
Oops, forgot to change the permissions. Should be okay now<|||||>After looking at the notebook you linked, it seems like the issue is that the tutorial notebook gives two different options for tokenizing text - by using both of them, rather than just using the first one, I introduced a bug into the code.
Does that sounds accurate?<|||||>@Jadiker Yeah, the problem seems to be at the dataset preparation stage. To be candid, I also can't find the issue from a quick glance -- I've double checked the `input_ids`, they are all within `vocabulary_size`, so `gather` shouldn't complain π€ Can you have a look at the script example [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_mlm.py), which was working as of a few weeks ago (and should be working), and see if you can find the issue? My number 1 suspect is the lack of a labels column, but the thrown error does not point at that.
As I mentioned above, we don't have the resources to do proper support in situations like this, but I'd be curious to find the root cause. Perhaps we could improve documentation with the findings :) If you get stuck, I might have capacity to pick it up in a few weeks. |
transformers | 16,845 | closed | AutoFeatureExtractor not respecting override parameters (for LayoutLMv2FeatureExtractor) | ### System Info
```shell
- `transformers` version: 4.17.0 - (but checked issue is still present on `main` branch)
- Platform: Linux-4.14.262-200.489.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. In any environment without `pytesseract` installed
2. Create an `AutoFeatureExtractor` for LayoutLMv2, specifying an override that `apply_ocr = False`:
```
from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained(
"microsoft/layoutlmv2-base-uncased",
apply_ocr=False,
)
```
Because [FeatureExtractionMixin.from_dict()](https://github.com/huggingface/transformers/blob/e1c153cbaa2f4dc6fa10aec8e3afb38c1b437947/src/transformers/feature_extraction_utils.py#L490) first creates the instance with *only* the configuration dict (in which `apply_ocr` defaults to `True`) and then **updates** the override parameters on the object, this still hits the `pytesseract` install check and fails with:
```
ImportError:
LayoutLMv2FeatureExtractor requires the PyTesseract library but it was not found in your environment. You can install it with pip:
`pip install pytesseract`
```
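To make the order of operations described above concrete, here is a simplified illustration (not the actual library code) of what `from_dict` effectively does:
```python
# Simplified illustration of FeatureExtractionMixin.from_dict (not the real code):
# the constructor only sees the saved config, so apply_ocr=True is in effect
# when the Tesseract availability check runs...
feature_extractor = cls(**feature_extractor_dict)

# ...and the user-supplied overrides (e.g. apply_ocr=False) are only applied
# afterwards, too late to skip the check.
for key, value in kwargs.items():
    if hasattr(feature_extractor, key):
        setattr(feature_extractor, key, value)
```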
### Expected behavior
I would like/expect the override parameter to be directly applied when calling the class constructor, so the instance knows its target state during `__init__` and doesn't fail the Tesseract install check.
I'm looking at updating [this script bundle](https://github.com/aws-samples/amazon-textract-transformer-pipeline/tree/main/notebooks/src) in the [aws-samples/amazon-textract-transformer-pipeline](https://github.com/aws-samples/amazon-textract-transformer-pipeline) sample (which currently uses LLMv1) - and would like to continue using Auto classes if possible to make switching between similar models (LLMv1, v2, XLM) relatively straightforward. However, in this context we're using external OCR from Amazon Textract and hoping not to bloat the container image with an unnecessary Tesseract install.
| 04-20-2022 06:19:11 | 04-20-2022 06:19:11 | Hi,
Thanks for your interest in LayoutLMv2! I remember we had a similar request at https://github.com/huggingface/transformers/issues/15269. Shouldn't this solve the issue?
I can't reproduce it, I don't get an import error, even though I don't have Tesseract installed.<|||||>Ahh sorry yes you're right - looks like this is a duplicate and I see my v4.17.0 (3rd Mar) came out before the fix PR was merged (5th Mar)! I got confused because the fix deferred the check rather than pulling forward the parameter override
Thanks for speedy response! |
transformers | 16,844 | closed | [modeling_utils] use less cpu memory with sharded checkpoint loading | This PR lowers the peak cpu memory usage for sharded checkpoint loading
The following demonstration tells the full story. I'm using `/usr/bin/time -f %M` to report max rss = total cpu memory used by the process including peak memory.
This demo uses T0 which is 42GB big in fp32 https://huggingface.co/bigscience/T0/tree/main
So with the normal loading the program needs 87GB of CPU RAM (42x2 plus a few GBs for temps)
```
# full checkpoint
/usr/bin/time -f %M python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0')"
87286376
# shard it to 10GB / shard
python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0'); \
model.save_pretrained('t0-sharded')"
# before this PR
/usr/bin/time -f %M python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('t0-sharded')"
68358000
# after this PR
/usr/bin/time -f %M python -c "from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained('t0-sharded')"
53529416
```
So after this PR the CPU memory usage is 1x model size (42GB here) + largest shard (10GB) + some temps = 53GB
Before this PR we were getting an additional 15GB (1.5x shard) of peak cpu memory.
@sgugger | 04-20-2022 02:52:32 | 04-20-2022 02:52:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,842 | closed | CodeT5 tokenizer.model_max_length is 1000000000000000019884624838656 | ### System Info
```shell
- `transformers` version: 4.12.2
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("formermagic/codet5-large")
print(tokenizer.model_max_length)
```
This prints `1000000000000000019884624838656`.
### Expected behavior
```shell
To print 512.
```
| 04-19-2022 19:38:41 | 04-19-2022 19:38:41 | Hey @khoda81,
The bug seems to be related directly to the model of @mozharovsky and not really to a canonical model. However, also note that T5 does **not** have a maximum length. You can use T5 with lengths as long as you want until your memory errors out. T5 uses relative positional embeddings, so there is no static weight that will give you an "out-of-index" error.
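To illustrate the point, a small sketch using the canonical `t5-small` checkpoint (not the model reported above): the tokenized input is well past 512 tokens, and the forward/generate pass still runs, since only memory limits apply.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Build an input that is clearly longer than 512 tokens.
text = "translate English to German: " + "The cat sat on the mat. " * 120
inputs = tokenizer(text, return_tensors="pt")
print(inputs.input_ids.shape[-1])  # > 512

# No indexing error: T5's relative position bias has no fixed maximum length.
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```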
Also see: https://github.com/huggingface/transformers/issues/5204<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,841 | closed | Converting PyTorch to ONNX model doubles file size for Deberta v3. Not case of renaming. | ### System Info
```shell
- OS Platform and Distribution: Amazon Linux 2
- ONNX version 1.11.0
- Python version: 3.8
- Transformers 4.17
```
### Who can help?
@LysandreJik, @patil-suraj
### Describe the bug
When I try to convert `microsoft/deberta-v3-large` to ONNX format, the file size doubles to 1.8GB, while the PyTorch file size is 800MB.
There was an issue [here](https://github.com/onnx/onnx/issues/3278#issuecomment-781948998) that was resolved by removing the duplication of weights. However, trying this code snippet in the deberta case only reduced the file size from 1.8Gb to 1.6Gb.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from collections import OrderedDict
from typing import Mapping
from pathlib import Path
from transformers.onnx import export
from transformers.onnx import OnnxConfig
from transformers import AutoTokenizer, AutoModel, AutoConfig
onnx_path = Path("deberta.onnx")
class DebertaConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
return OrderedDict(
[
("input_ids", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
("token_type_ids", {0: "batch", 1: "sequence"}),
]
)
config = AutoConfig.from_pretrained("microsoft/deberta-v3-large")
base_model = AutoModel.from_pretrained("microsoft/deberta-v3-large")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
onnx_config = DebertaConfig(config)
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, 15, onnx_path)
```
### Expected behavior
```shell
Expect the model.onnx file to be ~800Mb like the pytorch file.
```
| 04-19-2022 18:47:46 | 04-19-2022 18:47:46 | Maybe @lewtun and @michaelbenayoun have insights<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Reopen<|||||>Just to mention that @michaelbenayoun has built a tool in [Optimum](https://github.com/huggingface/optimum/blob/main/optimum/onnx/graph_transformations.py) to remove duplicate weights. In this case, it reduces the .onnx size from 1.9GB to 1.7GB.
```python
from optimum.onnx.graph_transformations import remove_duplicate_weights
new_model = remove_duplicate_weights(model)
```
And `optimum.onnx` might be the place where further enhancement shall take place. <|||||>Hi @jordiclive,
It might be due to some tensor creation ops (either numpy or pytorch) that end up being stored in the ONNX graph.
The state dict does not have to contain those so that would explain the difference, but that's just a guess at this point.<|||||>I'm not sure if this is a Transformers issue anymore. The same happens with Roberta.
I think the code-snippet I posted does the optimum tool reduction.<|||||>Just in case anyone comes across this, a large part of the size increase may have nothing to do with ONNX. The deberta-v3-* appears to be trained with mixed precision, but from_pretrained automatically loads with float32 tensors. You may be able to shave 800mb off by loading the model as:
```
model = AutoModel.from_pretrained('microsoft/deberta-v3-large', torch_dtype=torch.float16)
``` |
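As a quick sanity check of the point above, a small sketch (folder names are illustrative) that saves the checkpoint in both precisions and compares the on-disk sizes:
```python
import os

import torch
from transformers import AutoModel

for dtype, folder in [(torch.float32, "deberta-fp32"), (torch.float16, "deberta-fp16")]:
    model = AutoModel.from_pretrained("microsoft/deberta-v3-large", torch_dtype=dtype)
    model.save_pretrained(folder)
    size_gb = os.path.getsize(os.path.join(folder, "pytorch_model.bin")) / 1e9
    print(folder, f"{size_gb:.2f} GB")  # the fp16 copy should be roughly half the size
```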
transformers | 16,840 | closed | [Typo] Fix typo in modeling utils | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes typo
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-19-2022 17:32:14 | 04-19-2022 17:32:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,839 | closed | Fix typo modeling utils | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-19-2022 17:30:24 | 04-19-2022 17:30:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16839). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,838 | closed | TF: XLA model output differs when certain outputs are passed | Depending on the passed inputs, the output of an XLA-compiled model may significantly differ from its non-XLA counterpart. This suggests we should add tests for XLA-output equivalence, just like we do with e.g. PT-TF, as it is not guaranteed.
At the moment, this blocks further developments in `generate()` (can't reliably reproduce non-XLA results with XLA). I will assess this problem for T5 (first model where I've noticed this), then check whether it is present for other key models, and finally add equivalence tests.
cc @patrickvonplaten @Rocketknight1 (feel free to pitch in with ideas and suggestions)
____________________________________________________
Example for reproducibility (updated: assert diff < x-> print diff):
```python
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = TFT5ForConditionalGeneration.from_pretrained("t5-base")
model_xla = tf.function(model, jit_compile=True)
pad_token_id = model.config.pad_token_id
sentence_1 = "Translate English to German: I have a cat, two dogs, three horses, and four birds."
sentence_2 = "Translate English to German: I have a cat, two dogs, and three horses."
ids_single = tokenizer([sentence_1], return_tensors="tf", padding=True).input_ids
decoder_ids_single = tf.zeros((1, 1), dtype=tf.int32)
attention_single = tf.cast(tf.math.not_equal(ids_single, pad_token_id), dtype=tf.int32) # as computed in generate
ids_pair = tokenizer([sentence_1, sentence_2], return_tensors="tf", padding=True).input_ids
decoder_ids_pair = tf.zeros((2, 1), dtype=tf.int32)
attention_pair = tf.cast(tf.math.not_equal(ids_pair, pad_token_id), dtype=tf.int32) # as computed in generate
# case 1: with batch size = 1 and NO attention mask, XLA and non-XLA match
outputs = model(input_ids=ids_single, decoder_input_ids=decoder_ids_single)
outputs_xla = model_xla(input_ids=ids_single, decoder_input_ids=decoder_ids_single)
print(tf.math.reduce_max(tf.math.abs(outputs.logits - outputs_xla.logits)).numpy())
# case 2: with batch size > 1 and NO attention mask, XLA and non-XLA match
outputs = model(input_ids=ids_pair, decoder_input_ids=decoder_ids_pair)
outputs_xla = model_xla(input_ids=ids_pair, decoder_input_ids=decoder_ids_pair)
print(tf.math.reduce_max(tf.math.abs(outputs.logits - outputs_xla.logits)).numpy())
# case 3 FAILING: with batch size = 1 and attention mask, XLA and non-XLA match
outputs = model(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single)
outputs_xla = model_xla(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single)
print(tf.math.reduce_max(tf.math.abs(outputs.logits - outputs_xla.logits)).numpy())
# case 4 FAILING: with batch size < 1 and attention mask, XLA and non-XLA match
outputs = model(input_ids=ids_pair, decoder_input_ids=decoder_ids_pair, attention_mask=attention_pair)
outputs_xla = model_xla(input_ids=ids_pair, decoder_input_ids=decoder_ids_pair, attention_mask=attention_pair)
print(tf.math.reduce_max(tf.math.abs(outputs.logits - outputs_xla.logits)).numpy())
```
| 04-19-2022 15:40:30 | 04-19-2022 15:40:30 | How significant are the differences? Would it pass with 1e-1?<|||||>> How significant are the differences? Would it pass with 1e-1?
Tried twice, for `# case 3 FAILING: with batch size = 1 and attention mask`, the diffs are as large as `12.3...`<|||||>Just tried it here.
On CPU:
| Test number | Max error |
| ----------- | ----------- |
| 1 | 1.5258789e-05 |
| 2 | 1.335144e-05|
| 3 | 12.374499 |
| 4 | 12.5263195 |
On GPU (3090, using TensorFloat32):
| Test number | Max error |
| ----------- | ----------- |
| 1 | 0.0053577423 |
| 2 | 0.0062656403 |
| 3 | 0.0053577423 |
| 4 | 0.004333496 |
<|||||>My best guess is that there are two separate issues:
1) XLA on CPU is buggy (I believe this isn't an intended use-case for XLA anyway, because kernel fusion doesn't make much difference there)
2) GPUs, especially when using tensor cores/TensorFloat32, have somewhat worse precision than CPU, but it's fine if we use a larger tolerance.<|||||>Wait so XLA works on GPU, but not on CPU? That's very weird<|||||>@gante Probably the following code and outputs could make you spot the places more easily.
~~There is one thing: XLA model doesn't return `hidden_states`, `attentions`, etc., even if I specify to output them.
Therefore I couldn't compare them.~~
(able to get the full outputs with an ugly hack)
## Code
```python
import numpy as np
import tensorflow as tf
from transformers import TFT5Model, T5Tokenizer
from transformers.utils.generic import ModelOutput
checkpoint = "t5-base"
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model = TFT5Model.from_pretrained(checkpoint)
# Ugly hack to return all outputs
model.config.output_hidden_states = True
model.config.output_attentions = True
model = TFT5Model.from_pretrained(checkpoint, config=model.config)
model_xla = tf.function(model, jit_compile=True)
# tokenizer.pad_token_id = tokenizer.eos_token_id
pad_token_id = tokenizer.pad_token_id
sentence_1 = "Translate English to German: I have a cat, two dogs, three horses, and four birds."
sentence_2 = "Translate English to German: I have a cat, two dogs, and three horses."
ids_single = tokenizer([sentence_1], return_tensors="tf", padding=True).input_ids
decoder_ids_single = tf.zeros((1, 1), dtype=tf.int32)
# attention_single = tf.cast(tf.math.not_equal(ids_single, pad_token_id), dtype=tf.int32) # as computed in generate
attention_single = tf.cast(tf.ones_like(ids_single), dtype=tf.int32) # as computed in generate
ids_pair = tokenizer([sentence_1, sentence_2], return_tensors="tf", padding=True).input_ids
decoder_ids_pair = tf.zeros((2, 1), dtype=tf.int32)
# attention_pair = tf.cast(tf.math.not_equal(ids_pair, pad_token_id), dtype=tf.int32) # as computed in generate
attention_pair = tf.cast(tf.ones_like(ids_pair), dtype=tf.int32)
# case 3 FAILING: with batch size = 1 and attention mask, XLA and non-XLA match
outputs = model(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single, output_hidden_states=True, output_attentions=True)
outputs_xla = model_xla(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single, output_hidden_states=True, output_attentions=True)
# Please ignore the bad naming - this is just a quick copy from the test script
def check_pt_tf_outputs(tf_outputs, pt_outputs, model_class, tol=1e-5, name="outputs", attributes=None):
# Allow `ModelOutput` (e.g. `CLIPOutput` has `text_model_output` and `vision_model_output`).
if isinstance(tf_outputs, ModelOutput):
tf_keys = tuple([k for k, v in tf_outputs.items() if v is not None])
pt_keys = tuple([k for k, v in pt_outputs.items() if v is not None])
# (if without the hack) XLA models don't return full outputs at this moment ... need to ignore them at this moment
# keys = tuple(set(tf_keys).intersection(pt_keys))
# tf_outputs = tuple([tf_outputs[k] for k in keys])
# pt_outputs = tuple([pt_outputs[k] for k in keys])
# convert to the case of `tuple`
# appending each key to the current (string) `names`
attributes = tuple([f"{name}.{k}" for k in tf_keys])
check_pt_tf_outputs(tf_outputs.to_tuple(), pt_outputs.to_tuple(), model_class, tol=tol, name=name, attributes=attributes)
# Allow `list` (e.g. `TransfoXLModelOutput.mems` is a list of tensors.)
elif type(tf_outputs) in [tuple, list]:
if attributes is not None:
# case 1: each output has assigned name (e.g. a tuple form of a `ModelOutput`)
pass
else:
# case 2: each output has no assigned name (e.g. hidden states of each layer) -> add an index to `names`
attributes = tuple([f"{name}_{idx}" for idx in range(len(tf_outputs))])
for tf_output, pt_output, attr in zip(tf_outputs, pt_outputs, attributes):
check_pt_tf_outputs(tf_output, pt_output, model_class, tol=tol, name=attr)
elif isinstance(tf_outputs, tf.Tensor):
tf_outputs = tf_outputs.numpy()
pt_outputs = pt_outputs.numpy()
# deal with NumPy's scalars to make replacing nan values by 0 work.
if np.isscalar(tf_outputs):
tf_outputs = np.array([tf_outputs])
pt_outputs = np.array([pt_outputs])
tf_nans = np.isnan(tf_outputs)
pt_nans = np.isnan(pt_outputs)
pt_outputs[tf_nans] = 0
tf_outputs[tf_nans] = 0
pt_outputs[pt_nans] = 0
tf_outputs[pt_nans] = 0
max_diff = np.amax(np.abs(tf_outputs - pt_outputs))
print(f"{name}: {max_diff}")
else:
raise ValueError(
f"`tf_outputs` should be an instance of `tf.Tensor`, a `tuple`, or an instance of `tf.Tensor`. Got {type(tf_outputs)} instead.")
check_pt_tf_outputs(outputs, outputs_xla, model_class=TFT5Model)
```
## Outputs
```python
outputs.last_hidden_state: 2.800762176513672
outputs.past_key_values_0_0: 4.291534423828125e-06
outputs.past_key_values_0_1: 1.0728836059570312e-06
outputs.past_key_values_0_2: 3.4570693969726562e-06
outputs.past_key_values_0_3: 3.337860107421875e-06
outputs.past_key_values_1_0: 0.4949379563331604
outputs.past_key_values_1_1: 0.8448842763900757
outputs.past_key_values_1_2: 4.291534423828125e-06
outputs.past_key_values_1_3: 4.887580871582031e-06
outputs.past_key_values_2_0: 0.4911351203918457
outputs.past_key_values_2_1: 0.5065852403640747
outputs.past_key_values_2_2: 4.76837158203125e-06
outputs.past_key_values_2_3: 5.7220458984375e-06
outputs.past_key_values_3_0: 0.47093653678894043
outputs.past_key_values_3_1: 0.5624567270278931
outputs.past_key_values_3_2: 4.410743713378906e-06
outputs.past_key_values_3_3: 5.9604644775390625e-06
outputs.past_key_values_4_0: 0.775518536567688
outputs.past_key_values_4_1: 0.934751570224762
outputs.past_key_values_4_2: 5.7220458984375e-06
outputs.past_key_values_4_3: 7.152557373046875e-06
outputs.past_key_values_5_0: 1.0620229244232178
outputs.past_key_values_5_1: 1.1955945491790771
outputs.past_key_values_5_2: 5.7220458984375e-06
outputs.past_key_values_5_3: 9.059906005859375e-06
outputs.past_key_values_6_0: 1.5020784139633179
outputs.past_key_values_6_1: 1.768876552581787
outputs.past_key_values_6_2: 6.4373016357421875e-06
outputs.past_key_values_6_3: 8.344650268554688e-06
outputs.past_key_values_7_0: 1.9831377267837524
outputs.past_key_values_7_1: 1.7343039512634277
outputs.past_key_values_7_2: 6.67572021484375e-06
outputs.past_key_values_7_3: 1.0251998901367188e-05
outputs.past_key_values_8_0: 2.3230268955230713
outputs.past_key_values_8_1: 2.937762498855591
outputs.past_key_values_8_2: 5.7220458984375e-06
outputs.past_key_values_8_3: 9.775161743164062e-06
outputs.past_key_values_9_0: 2.8203392028808594
outputs.past_key_values_9_1: 5.384043216705322
outputs.past_key_values_9_2: 5.9604644775390625e-06
outputs.past_key_values_9_3: 1.33514404296875e-05
outputs.past_key_values_10_0: 4.303163528442383
outputs.past_key_values_10_1: 10.02894401550293
outputs.past_key_values_10_2: 6.198883056640625e-06
outputs.past_key_values_10_3: 1.430511474609375e-05
outputs.past_key_values_11_0: 4.163003921508789
outputs.past_key_values_11_1: 7.657519817352295
outputs.past_key_values_11_2: 4.76837158203125e-06
outputs.past_key_values_11_3: 1.9073486328125e-05
outputs.decoder_hidden_states_0: 0.0
outputs.decoder_hidden_states_1: 2151.3359375
outputs.decoder_hidden_states_2: 2724.79736328125
outputs.decoder_hidden_states_3: 4147.70751953125
outputs.decoder_hidden_states_4: 6162.63720703125
outputs.decoder_hidden_states_5: 7066.3046875
outputs.decoder_hidden_states_6: 7329.43603515625
outputs.decoder_hidden_states_7: 7471.92333984375
outputs.decoder_hidden_states_8: 7749.91162109375
outputs.decoder_hidden_states_9: 8324.51953125
outputs.decoder_hidden_states_10: 8609.3359375
outputs.decoder_hidden_states_11: 7732.30224609375
outputs.decoder_hidden_states_12: 2.800762176513672
outputs.decoder_attentions_0: 0.0
outputs.decoder_attentions_1: 0.0
outputs.decoder_attentions_2: 0.0
outputs.decoder_attentions_3: 0.0
outputs.decoder_attentions_4: 0.0
outputs.decoder_attentions_5: 0.0
outputs.decoder_attentions_6: 0.0
outputs.decoder_attentions_7: 0.0
outputs.decoder_attentions_8: 0.0
outputs.decoder_attentions_9: 0.0
outputs.decoder_attentions_10: 0.0
outputs.decoder_attentions_11: 0.0
outputs.cross_attentions_0: 0.9293187856674194
outputs.cross_attentions_1: 0.8967262506484985
outputs.cross_attentions_2: 0.7246492505073547
outputs.cross_attentions_3: 0.9164008498191833
outputs.cross_attentions_4: 0.8164070248603821
outputs.cross_attentions_5: 0.7364302277565002
outputs.cross_attentions_6: 0.6568543314933777
outputs.cross_attentions_7: 0.6275004744529724
outputs.cross_attentions_8: 0.6810514330863953
outputs.cross_attentions_9: 0.631909966468811
outputs.cross_attentions_10: 0.4159456491470337
outputs.cross_attentions_11: 0.39396628737449646
outputs.encoder_last_hidden_state: 5.960464477539062e-07
outputs.encoder_hidden_states_0: 0.0
outputs.encoder_hidden_states_1: 0.000244140625
outputs.encoder_hidden_states_2: 0.0003662109375
outputs.encoder_hidden_states_3: 0.00048828125
outputs.encoder_hidden_states_4: 0.00048828125
outputs.encoder_hidden_states_5: 0.00048828125
outputs.encoder_hidden_states_6: 0.0009765625
outputs.encoder_hidden_states_7: 0.00048828125
outputs.encoder_hidden_states_8: 0.001953125
outputs.encoder_hidden_states_9: 0.001953125
outputs.encoder_hidden_states_10: 0.0078125
outputs.encoder_hidden_states_11: 0.0078125
outputs.encoder_hidden_states_12: 5.960464477539062e-07
outputs.encoder_attentions_0: 5.066394805908203e-07
outputs.encoder_attentions_1: 5.364418029785156e-07
outputs.encoder_attentions_2: 7.152557373046875e-07
outputs.encoder_attentions_3: 5.960464477539062e-07
outputs.encoder_attentions_4: 5.662441253662109e-07
outputs.encoder_attentions_5: 5.960464477539062e-07
outputs.encoder_attentions_6: 5.364418029785156e-07
outputs.encoder_attentions_7: 6.258487701416016e-07
outputs.encoder_attentions_8: 8.642673492431641e-07
outputs.encoder_attentions_9: 5.960464477539062e-07
outputs.encoder_attentions_10: 7.152557373046875e-07
outputs.encoder_attentions_11: 5.960464477539062e-07
```
<|||||>Thank you for your suggestions, you have solved the puzzle! The winning suggestion award goes to @Rocketknight1 -- XLA on CPU is indeed buggy.
I've spun up an Nvidia T4 ( = no `tf32` format) and got an error < `1e-5` for all cases. `tf32` does make the difference slightly bigger, but having a GPU is the main difference. It has also passed the `generate` cases that were failing on XLA with CPU (see below).
As a result of this thread, I was thinking of:
1. Raising an exception in TF generate when `use_xla` is `True` and there are no GPU devices -- @patrickvonplaten WDYT?
2. Pushing all XLA tests to GPU;
3. Opening an issue in the TensorFlow repo -- @Rocketknight1 do you think they will care?
_________________________________
Greedy search translating correctly with GPU:

Greedy search failing with CPU:

Sample behaving okay with GPU (sampling 10 outputs for the first sentence input):

<|||||>@gante I think they certainly would be interested, but we'd have to localize the bug a little more! If you could fix an input and make a minimal single module that showed the buggy behaviour, you should definitely report that upstream. I totally understand if that's not a priority with everything else on your plate, though!<|||||>Cool, great job guys in locating the error!
I don't think it's a good idea to raise an error / exception if XLA is enabled on CPU. XLA should work on CPU - why wouldn't it? To me this clearly looks like a TF bug and quite a big one actually.
IMO, lots of people **debug** their code on CPU in XLA so I think it is pretty important that it works on CPU.
E.g. some generate processors work differently in XLA and it's important to verify as easy as possible (on CPU) that new XLA code works as expected.
Also note that XLA works inherently differently to non-XLA (static shapes, different computation operations). This should also be easy to test/debug on CPU. E.g. to me it's a non-negligible use case to check if your code leads to constant recompilation or not on CPU.
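A minimal sketch of how one could check that on CPU (illustrative only, not code from the library): Python-side code inside a `tf.function` only runs while the function is being (re)traced, so repeated prints signal constant recompilation.
```python
import tensorflow as tf

@tf.function(jit_compile=True)
def step(x):
    print("tracing for input shape:", x.shape)  # executes only during tracing
    return tf.nn.softmax(x)

step(tf.random.normal((2, 5)))  # prints once (first trace + compilation)
step(tf.random.normal((2, 5)))  # silent: the compiled program is reused
step(tf.random.normal((3, 5)))  # prints again: a new input shape forces a retrace
```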
Also we need to test XLA on CPU as well so that it runs on circle ci IMO
cc @sanchit-gandhi, who is working quite a bit with XLA at the moment. <|||||>It shouldn't be too difficult to locate where the difference is coming from since we know that it works without the attention_mask, no?<|||||>Change this line
https://github.com/huggingface/transformers/blob/e1c153cbaa2f4dc6fa10aec8e3afb38c1b437947/src/transformers/models/t5/modeling_tf_t5.py#L401
to
```
weights = tf.math.softmax(scores + 1.0, axis=-1)
```
will solve the problem. This gives the same weights (on CPU + XLA) as the ones computed on a GPU machine (both non-XLA & XLA).
I tested this trick with @gante's code samples.
I also looked at the expected values for `weights` using the code below.
The expected values look like
```python
[8.03906238e-04 4.91665269e-04 6.60848498e-01 7.20867813e-02, ...]
```
without this, on CPU + XLA, we get
```python
[0.04347826, 0.04347826, 0.04347826, 0.04347826, ...]
```
I guess some trick (about [numerical stability of Softmax](https://ogunlao.github.io/2020/04/26/you_dont_really_know_softmax.html)) is not done for XLA + CPU.
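For reference, the usual trick referred to here is to subtract the per-row maximum before exponentiating; a small NumPy sketch of the textbook version (this is not necessarily what XLA does internally):
```python
import numpy as np

def naive_softmax(x):
    e = np.exp(x)  # overflows for large-magnitude positive logits
    return e / e.sum(axis=-1, keepdims=True)

def stable_softmax(x):
    z = x - x.max(axis=-1, keepdims=True)  # the largest logit becomes 0
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = np.array([[1e4, 0.0, -1e9]])  # last position "masked" with a large negative value
print(naive_softmax(x))   # first entry becomes nan because exp(1e4) overflows
print(stable_softmax(x))  # [[1., 0., 0.]]
```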
### The code I use
```python
import numpy as np
import tensorflow as tf
from transformers import TFT5Model, T5Tokenizer
from transformers.utils.generic import ModelOutput
checkpoint = "t5-base"
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model = TFT5Model.from_pretrained(checkpoint)
# Ugly hack to return all outputs
model.config.output_hidden_states = True
model.config.output_attentions = True
model = TFT5Model.from_pretrained(checkpoint, config=model.config)
model_xla = tf.function(model, jit_compile=True)
# tokenizer.pad_token_id = tokenizer.eos_token_id
pad_token_id = tokenizer.pad_token_id
sentence_1 = "I have a cat, two dogs"
sentence_2 = "I have a cat"
sentence_1 = "Translate English to German: I have a cat, two dogs, three horses, and four birds."
sentence_2 = "Translate English to German: I have a cat, two dogs, and three horses."
ids_single = tokenizer([sentence_1], return_tensors="tf", padding=True).input_ids
decoder_ids_single = tf.zeros((1, 1), dtype=tf.int32)
# attention_single = tf.cast(tf.math.not_equal(ids_single, pad_token_id), dtype=tf.int32) # as computed in generate
attention_single = tf.cast(tf.ones_like(ids_single), dtype=tf.int32) # as computed in generate
decoder_attention_single = tf.cast(tf.ones_like(decoder_ids_single), dtype=tf.int32) # as computed in generate
ids_pair = tokenizer([sentence_1, sentence_2], return_tensors="tf", padding=True).input_ids
decoder_ids_pair = tf.zeros((2, 1), dtype=tf.int32)
# attention_pair = tf.cast(tf.math.not_equal(ids_pair, pad_token_id), dtype=tf.int32) # as computed in generate
attention_pair = tf.cast(tf.ones_like(ids_pair), dtype=tf.int32)
decoder_attention_pair = tf.cast(tf.ones_like(decoder_ids_pair), dtype=tf.int32) # as computed in generate
# case 3 FAILING: with batch size = 1 and an attention mask, XLA and non-XLA do not match
outputs = model(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single, decoder_attention_mask=decoder_attention_single, output_hidden_states=True, output_attentions=True)
outputs_xla = model_xla(input_ids=ids_single, decoder_input_ids=decoder_ids_single, attention_mask=attention_single, decoder_attention_mask=decoder_attention_single, output_hidden_states=True, output_attentions=True)
```<|||||>As @patrickvonplaten mentioned, it's pretty imperative to have XLA working on CPU for any kind of debugging - there are all sorts of debugging methods that pull values back to the host and perform checks on an op-by-op basis (see https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#nans). These are pretty crucial for understanding the inner-workings of a compiled function that you wouldn't otherwise see if running XLA purely on an accelerator.
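As a concrete sketch of the kind of CPU-side checks meant here (assuming a recent JAX version; double-check the flag and context-manager names against your release):
```python
import jax
import jax.numpy as jnp

jax.config.update("jax_debug_nans", True)  # fail loudly as soon as a NaN appears

@jax.jit
def masked_softmax(scores, mask):
    scores = jnp.where(mask, scores, -1e9)
    return jax.nn.softmax(scores, axis=-1)

with jax.default_matmul_precision("float32"):  # keep matmuls at highest precision
    out = masked_softmax(jnp.ones((1, 4)), jnp.array([[True, True, True, False]]))
print(out)
```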
Also, when running JAX/Flax on CPU, the floating-point precision of the internal computations used in matrix multiplications and convolutions is always the highest. When you move to a TPU, the floating-point precision is lowered by default. We need to be able to test our code on CPU to run at the highest precision, especially for any sort of PT-Flax equivalence tests (see https://github.com/huggingface/transformers/issues/15754). I'm not familiar with how TF treats matmul precisions, but it's these sorts of considerations that mean running XLA on CPU is pretty essential!<|||||>Great find @ydshieh! We should talk to the TF guys about this, no? <|||||>(sorry, accidentally edited @patrickvonplaten's comment above)
Yes. Let's extract (or create) some inputs, and reproduce the issue with only the softmax part. <|||||>This is great @ydshieh! I'm going to build a toy example and open an issue in TF, linking to this thread.<|||||>Pinned the problem: it is due to the softmax with numerically masked (= large negative) inputs, on XLA+CPU. I've opened an issue on TensorFlow (as backlinked above), which contains a simple reproducible example.
Meanwhile, avoid XLA+CPU :D <|||||>If this would take a long time for the TF team to fix, we might use a wrapped version of `tf.nn.softmax`.
I don't like this approach much though, it's just an option.<|||||>I'm sure somewhere hidden there is a tf softmax that is stable on XLA. We could then create a custom `def softmax(...)` in https://github.com/huggingface/transformers/blob/main/src/transformers/tf_utils.py that wraps `tf.nn.softmax(...)` in non-XLA and a stable version for XLA <|||||>It should work! The toy example below adds said wrapper (with `+1`), and both CPU and GPU XLA have a difference of ~`1e-8` to its non-XLA version. It also confirms that the stable softmax outputs the same as the original softmax.
```python
import tensorflow as tf
LARGE_PENALTY = -1e9
def stable_softmax(x):
    return tf.nn.softmax(x + 1)

def masked_softmax(x, boolean_mask):
    numerical_mask = (1. - tf.cast(boolean_mask, dtype=tf.float32)) * LARGE_PENALTY
    masked_x = x + numerical_mask
    return stable_softmax(masked_x)
xla_masked_softmax = tf.function(masked_softmax, jit_compile=True)
xla_stable_softmax = tf.function(stable_softmax, jit_compile=True)
x = tf.random.normal((1, 10))
# same outcome regardless of the boolean mask here
boolean_mask = tf.convert_to_tensor([[1] * 9 + [0] * 1], dtype=tf.int32)
# passes
numerical_mask = (1. - tf.cast(boolean_mask, dtype=tf.float32)) * LARGE_PENALTY
masked_x = x + numerical_mask
xla_out = xla_stable_softmax(masked_x)
out = stable_softmax(masked_x)
print(tf.math.reduce_max(tf.math.abs(xla_out - out)).numpy())
assert tf.experimental.numpy.allclose(xla_out, out)
# The stable softmax has the same output as the original fn
unstable_out = tf.nn.softmax(masked_x)
print(tf.math.reduce_max(tf.math.abs(unstable_out - out)).numpy())
assert tf.experimental.numpy.allclose(unstable_out, out)
# passes (with the + 1 in the softmax)
xla_out = xla_masked_softmax(x, boolean_mask)
out = masked_softmax(x, boolean_mask)
print(tf.math.reduce_max(tf.math.abs(xla_out - out)).numpy())
assert tf.experimental.numpy.allclose(xla_out, out)
```
Opening a PR soon with this temporary fix, and will replace ALL softmax calls with this wrapped version.<|||||>~~Could we use the following instead?~~
Didn't work! Very strange :(
```
tf.nn.softmax(x - tf.math.reduce_max(x, axis=-1, keepdims=True), axis=-1)
```<|||||>> ```python
> LARGE_PENALTY
> ```
The problem with `+1` is that it won't work in general (although it works in our 2 cases, I don't know why)<|||||>@ydshieh I agree that it should be more stable numerically, but I'd rather add a fixed constant. `reduce_max` would add extra computational requirements (reduce operations are not lightweight) and, if it does fix numerical stability issues, it could be introducing a drift between the model at train time and at inference time.
Perhaps not `1`, but a very small constant like `1e-9` (which also works in this toy example)<|||||>OK, good point @gante . And my suggestion didn't work well even with your code above! So good for me to use a constant.<|||||>From further experimentation, I think the reason the small constant works has nothing to do with numerical stability - I think inserting an addition just changes the particular compiled program that XLA generates, and so avoids this issue. |
transformers | 16,837 | closed | no attribute ViTFeatureExtractor | Getting following error:
AttributeError: module transformers.models.vit has no attribute ViTFeatureExtractor
on calling:
`processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-printed")`
System:
Jetson AGX
Tried on an RTX & it works fine.
Installation:
`pip install git+https://github.com/huggingface/transformers` | 04-19-2022 15:31:24 | 04-19-2022 15:31:24 | Installing `Pillow` to env fixed it.<|||||>Thanks for this! |
transformers | 16,836 | closed | Fx with meta | # What does this PR do?
This PR simplifies and improves the way tracing works with torch.fx.
Instead of recording concrete values via a forward pass on the original model, metadata is attached to the proxies, either tensors on the `meta` device (which saves us from making actual computations, only shape inference is performed) or any other type such as `torch.Size` and builtin types. On top of allowing to trace very big models, this gives much more flexibility and should allow to support many new architectures.
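As a small illustration of the idea (not code from this PR): tensors on the `meta` device carry only shape and dtype, so operations on them perform shape inference without any real computation.
```python
import torch

a = torch.empty(8, 512, device="meta")
w = torch.empty(512, 1024, device="meta")
out = a @ w  # no actual matmul is performed
print(out.shape, out.device)  # torch.Size([8, 1024]) meta
```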
A big thanks to @jamesr66a as he was the one who provided the basis for tracing with meta tensors; I simply extended what was already done for our purposes.
@jamesr66a @pbelevich I would love your review and feedback!
| 04-19-2022 15:11:33 | 04-19-2022 15:11:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger is it ready now?<|||||>Since it does not change the API, and does not provide any new feature, the current set of tests is enough.
I also ran the test with `torch=1.10.2` (we have `TORCH_FX_REQUIRED_VERSION = version.parse("1.10")`), and they pass. The torch version could have been a limitation since we use the meta device, but everything is okay. |
transformers | 16,835 | closed | replace `Speech2TextTokenizer` by `Speech2TextFeatureExtractor` in some docstrings | # What does this PR do?
It seems to me that fbank features can be returned by a `FeatureExtractor` and not by a `Tokenizer`. This PR proposes to make some changes in the docstrings that mention that `Speech2TextTokenizer` should be used to return fbank features.
It seems to me that among the 3 models - `Speech2Text`, `Speech2Text2` and `SpeechEncoderDecoder` - only `Speech2Text` implements a feature extractor, and I therefore deduce that the same feature extractor class is used for the 3 models.
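For illustration, a hedged sketch of the intended usage (the checkpoint name is only an example, and the feature-extraction dependencies need to be installed): the fbank features come out of the feature extractor, not the tokenizer.
```python
import numpy as np
from transformers import Speech2TextFeatureExtractor

feature_extractor = Speech2TextFeatureExtractor.from_pretrained("facebook/s2t-small-librispeech-asr")
speech = np.random.randn(16000).astype(np.float32)  # 1 s of audio at 16 kHz
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
print(inputs.input_features.shape)  # (batch, frames, num_mel_bins)
```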
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. As it concerns audio related model, I would love to have your feedback @patrickvonplaten , @anton-l or @patil-suraj | 04-19-2022 14:52:55 | 04-19-2022 14:52:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,834 | closed | Add semantic script, trainer | # What does this PR do?
This PR adds `run_semantic_segmentation.py`, which leverages the Trainer API for fine-tuning any model supported by `AutoModelForSemanticSegmentation`.
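A rough sketch of what that amounts to (illustrative only — the checkpoint and label count below are placeholders, and the actual script also handles preprocessing, metrics and the Trainer setup):
```python
import torch
from transformers import AutoModelForSemanticSegmentation

model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)
pixel_values = torch.randn(1, 3, 512, 512)        # stand-in for a preprocessed image
logits = model(pixel_values=pixel_values).logits  # (1, num_labels, H/4, W/4) for SegFormer-style models
print(logits.shape)
```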
To do:
- [x] fix WandB logging, which doesn't seem to like the fact that metrics are turned into lists. Might need some help here from @sgugger.
- [x] also wondering why the [model card](https://huggingface.co/nielsr/segformer-finetuned-sidewalk-trainer) doesn't include the dataset name (even though it has a default value in the script):
<img width="535" alt="Screenshot 2022-04-19 at 16 19 22" src="https://user-images.githubusercontent.com/48327001/164025421-2c74a31e-26b9-4652-a99a-ddedbd54428b.png"> | 04-19-2022 13:47:24 | 04-19-2022 13:47:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,833 | closed | [ASR Pipeline] Correct init docs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently the docs of the ASR pipelines are not correctly displayed: https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline
This PR improves the doc string so that they look as follows:
https://moon-ci-docs.huggingface.co/docs/transformers/pr_16833/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-19-2022 12:47:16 | 04-19-2022 12:47:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,832 | closed | Dropping support for Python 3.6 | Python 3.6 [has been retired](https://peps.python.org/pep-0494/#lifespan) on December 21st 2021 and won't get new security releases anymore. As a result, Transformers will stop supporting Python 3.6 and have a minimum requirement of Python 3.7 in the next release (v4.19.0), around the beginning of May. | 04-19-2022 12:27:04 | 04-19-2022 12:27:04 | What is the purpose of dropping support while not using latest features of newer python versions (or newer versions of other libraries, which are dropping support)?<|||||>Hey @LSinev, thank you for asking. There are some features that would have been implemented in a simpler manner had Python 3.6 been dropped earlier (https://github.com/huggingface/transformers/issues/15739, @sgugger can chime in on the init system), and we already use features that are only available in 3.7 (`dataclasses`).
Furthermore:
- Python 3.6 is now EOL and will not be receiving new bug and security patches
- All backends of `transformers` newer versions (torch 1.11, tensorflow 2.8, jax 0.2.18) now require python 3.7<|||||>We were not using the features of newest Python versions without dropping the old ones ;-)
For instance, our inits will now be able to use [PEP 562](https://docs.python.org/3/whatsnew/3.7.html#whatsnew37-pep562) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,831 | closed | GPT2.generate() with custom input_embeds argument returning tensor (1*max_length) instead of (batch_size*max_length) | ### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Nope
```
### Who can help?
@patrickvonplaten, @patil-suraj
Hi everybody, I have a problem/bug to report regarding the `.generate()` function when using GPT2 with custom embeddings instead of token IDs.
When using it as shown below with custom input_embeds, the output shape of the returned LongTensor is [1, max_length] instead of [batch_size, max_length]. I am confused as to why this happens.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
import torch.nn as nn
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BartForConditionalGeneration
MOD = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2").to('cuda:0')
Out = MOD.generate(input_embeds=torch.rand(3,314,1280).to('cuda:0'), max_length=40, temperature=1.0, repetition_penalty=1.2, top_k=950, num_return_sequences=1, do_sample=True, top_p=1.0)
Out.shape
### Expected behavior
```shell
Expected shape of Out = [batch_size, maxlength]
```
| 04-19-2022 12:00:22 | 04-19-2022 12:00:22 | Hey @JustABiologist,
I sadly can't run the above code as I don't have access to `"GPT_2 filepath"` - could you make sure that the code snippet is reproducible?<|||||>There are two issues here.
1. The argument is called `inputs_embeds` and not `input_embeds`
2. `generate` doesn't support passing `inputs_embeds` for auto-regressive models, cf
https://github.com/huggingface/transformers/blob/74814574aeab5256ab3c6e428c247739aa0c869d/src/transformers/generation_utils.py#L436-L442<|||||>> There are two issues here.
>
> 1. The argument is called `inputs_embeds` and not `input_embeds`
>
> 2. `generate` doesn't support passing `inputs_embeds` for auto-regressive models, cf
> https://github.com/huggingface/transformers/blob/74814574aeab5256ab3c6e428c247739aa0c869d/src/transformers/generation_utils.py#L436-L442
Hey @patil-suraj , sorry for not mentioning, when I use input_embeds the output is the bugish matrix, the decoded output makes sense though. It just is outputting one sample instead of batch size amount of samples...
When using inputs_embeds it does throw the aforementioned error.
@patrickvonplaten the filepath is "nferruz/ProtGPT2" I corrected it in the issue! :)
Thanks Guys!<|||||>> Sorry for not mentioning, when I use input_embeds the output is the bugish matrix, the decoded output makes sense though. It just is outputting one sample instead of batch size amount of samples.
This is because `input_embeds` is not a valid argument and is ignored. And since there are no `input_ids` provided, `generate` creates one `input_ids` with `bos_token_id` hence the batch size of 1.<|||||>@patil-suraj
Okay so to circumvent this I should do it like this ?
`config = AutoConfig.from_pretrained("nferruz/ProtGPT2")
model = GPT2LMHeadModel(config)
TRAIN = model.from_pretrained("nferruz/ProtGPT2")
OUT = TRAIN(inputs_embeds=torch.rand(2, 314, 1280))
OUT['logits'].shape`
And write my own sampling/CLM script ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Okay whatever solved the problem on my own ciao cacao.<|||||>@JustABiologist could you please share how you made it work with using input_embeds to a decoder-based model like ProtGPT2? I'm trying to implement the same, but I also read that decoder-only models cannot generate with "input_embeds", so I'm wondering how you made it work.
Thanks :)
cc: @patrickvonplaten @patil-suraj <|||||>@JustABiologist could you please share how you made it work with using input_embeds to a decoder-based model like ProtGPT2? I'm trying to implement the same, but I also read that decoder-only models cannot generate with "input_embeds", so I'm wondering how you made it work.
Thanks :)
cc: @patrickvonplaten @patil-suraj<|||||>@hunarbatra sure I basicially just wrote my own generation class on top of the model. That is the most straightforward approach in my opinion. Sorry I can't share the exact code it's for a startup idea.. <|||||>If you want pseudocode I can write that out if needed :)<|||||>No worries, I can understand that! Sure, it'll be really helpful if you could share pseudocode for it! Thank you soo much! :) @JustABiologist <|||||>def generate(model, start_embs, generation_parameters, tokenizer):
    for i in range(whatever_iterations):
        ### raw logit output of the model
        out = model(inputs_embeds=start_embs)
        ### modify the logit output with whatever method you want (beam search, whatever)
        out = modify_function(out, generation_parameters)
        ### sample a token from the modified logits
        tok = sample_function(out, generation_parameters)
        ### turn tok into a long tensor
        ### turn tok into an embedded tok with the embedding layer of the model (or whatever upstream model you might use)
        ### concatenate start_embs and tok_embeds
        ### feed the concatenated start_embs + tok_embeds back into the model with inputs_embeds
        ### break when end-of-sequence or max toks
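For readers who want something closer to runnable code, here is a minimal sketch of the same loop (it assumes a GPT-2-style causal LM such as ProtGPT2 and plain temperature sampling; the repetition penalty / top-k logic from the thread is omitted, and the helper name is made up):
```python
import torch

@torch.no_grad()
def generate_from_embeds(model, start_embeds, max_new_tokens=40, temperature=1.0):
    # start_embeds: (batch, seq_len, hidden_size) float tensor
    embeds = start_embeds
    generated = []
    for _ in range(max_new_tokens):
        logits = model(inputs_embeds=embeds).logits[:, -1, :]  # logits for the last position
        probs = torch.softmax(logits / temperature, dim=-1)
        next_tok = torch.multinomial(probs, num_samples=1)     # (batch, 1)
        generated.append(next_tok)
        tok_embeds = model.get_input_embeddings()(next_tok)    # embed the sampled token
        embeds = torch.cat([embeds, tok_embeds], dim=1)        # feed everything back in
    return torch.cat(generated, dim=1)                         # (batch, max_new_tokens)
```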
You can streamline that a lot though :) Do you mind connecting on LinkedIn? I saw you did some work with proteins as well, so we could discuss ideas!
<|||||>Thank you so much for sharing this! This is helpful :)
Sure, you can add me on [LinkedIn](https://linkedin.com/in/hunarbatra) |
transformers | 16,830 | closed | Repeated Generation | When I am giving input for the summarization, every time it is the generating same summary. (Sometimes even if I consider the first few lines of input as new input, in those cases also it is generating the same output summary.)
Example:
Input: With an immunocompromised son, how could we not be terrified of COVID making its way into our home? For the past two years, weβve been so careful to limit exposure; we were able to homeschool our kids last year and weβve been conscious of avoiding large crowds even with the vaccines. Our kids being able to receive the vaccine was a huge step for us in feeling less of the overwhelming worry of what would happen if any of us were to get COVID, but the concerns were still there. Our son Conner has Duchenne muscular dystrophy, so any virus could have a detrimental effect on his health. With DMD, the whole body is already fighting everyday so itβs possible that the body wonβt be able to fight off any illnesses. The heart and lungs are also already working overtime, so the potential of getting a virus that is known to harm the heart and lungs is a scary thing. We made it almost two years without any of us getting COVID but that came to end over the holidays. I got sick first and despite our precautions, Conner ended up testing positive right before he was supposed to go back to school, with his initial symptom being a stuffy nose. We called our doctor right away and she told us that we should just keep a close eye on him to make sure he didnβt get any worse. Luckily, his symptoms never got too bad (just a low grade fever, sore throat, and slight cold) and he was able to get the booster shot as soon as possible after testing negative again. All things considered, we truly believe that he was as lucky as he was because he had both doses of the vaccine and got Omicron instead of one of the earlier variants. Through our whole experience with the pandemic and getting COVID, one of the most difficult aspects was understanding and coming to terms with the fact that many people donβt share our caution. It can be hard to understand why weβre as careful as we are when you donβt know what it feels like to have someone you love be immunocompromised during a deadly pandemic. We found ourselves having to explain to those around us why it is so important for us all to wear our masks and avoid large crowds and people donβt always understand, especially with the βstereotypesβ surrounding what happens when you get COVID as a child. The main information about children and COVID has been that children are resilient and wonβt be too negatively impacted by it, but that just isnβt true for immunocompromised children. The majority of people will never truly understand our situation until they find themselves in a similar one so it can be hard to hear people complaining about still having to wear masks and saying that COVID isnβt a big deal anymore. At the beginning, it was also hard to find balance in the conversations we had to have with our children because of our own levels of worry. We wanted to be clear about the importance of staying safe but at the same time didn't want to scare them or project our own fears onto them.β Karen Morales, a mom of two, lives with a very rare form of a rare disease called Limb-girdle muscular dystrophy. This is her COVID experience from her perspective: βIt's inevitable. Or at least that was always my feeling: someday we were bound to get infected with the virus. The question we really struggled with was when? In the early days of lockdown and homeschool, we felt scared. It was easiest to stay in our homes and order groceries from isolation, then to risk the fear in the world. 
But like many things, over time, especially with vaccines and home testing, the COVID-19 pandemic shifted from a new fear to just the normal way of life. My ten year old daughter got it first, spiking a fever after a day on the ski slopes. It was then that our first family decision was made: do we isolate or go through this together. For us, the decision to leave a young child who had lived in fear of this moment for almost 24 months was harder than just getting it together. In rare disease families, we are used to making hard choices. We are accustomed to inconvenience and we understand, out of practice, how isolating some experiences can be. At the same time, we have a special bond. There is a flavor of love that is only captured when you must depend on each other for basic needs and survival. That depth of connection and trust is what our families are built on. For us, we weathered COVID as a team. We trusted in the science of vaccinations and boosters and we leaned into natural immunity of vitamins and healing food. The hardest part for us was not the illness itself, but the toll that laying around for multiple days takes on a body with fragile mobility. I find that recovering from being immobile is always the harder part than the fall or disease before it. The body adapts to what it experiences and for me, the more it sits around the more it prefers that state. I challenge the conventional wisdom that you lose function so quickly and I always tell myself that my body must again learn to move with fear. Having a mobility challenge is like walking on a rocking cruise ship every day. You must remind yourself that you can do it and keep alert and aware of any changes in your body or environment. We survived COVID as I knew we would. And we carry those lessons with us into everyday life. In a world that sometimes feels isolated, we have a depth of bond that is hard to emulate. And when the forces impact our health it takes strength and courage to find a way to regain our footing in an ever changing environment.
Summary: Weβve been living in fear of the COVID-19 pandemic for the past two years and itβs been a difficult time for many people to understand what it feels like to have someone you love be immunocompromised during a deadly pandemic Like many people, weβre accustomed to inconvenience and we understand, out of practice, how isolating some experiences can be, but it can be hard to understand why many people donβt share our caution and why some people are able to avoid large crowds even with the vaccines we received when we were in the state of the virus in 2015..β- Karen Morales, a mom of two, lives with a very rare form of a rare disease called Limb-girdle muscular dystrophy and has been living with the virus for the last two years, including the first COVID pandemic in the United States, which killed more than 1,000 people and left more than 2,000 of them in a coma and hundreds of others with life-threatening conditions, including a severe brain injury and a brain haemorrhage, a stroke, and a heart attack and a broken heart, among other things, and many others with rare diseases, including my son Conner, who has Duchenne.
------------------------------------------------
Input: With an immunocompromised son, how could we not be terrified of COVID making its way into our home? For the past two years, weβve been so careful to limit exposure; we were able to homeschool our kids last year and weβve been conscious of avoiding large crowds even with the vaccines. Our kids being able to receive the vaccine was a huge step for us in feeling less of the overwhelming worry of what would happen if any of us were to get COVID, but the concerns were still there. Our son Conner has Duchenne muscular dystrophy, so any virus could have a detrimental effect on his health. With DMD, the whole body is already fighting everyday so itβs possible that the body wonβt be able to fight off any illnesses. The heart and lungs are also already working overtime, so the potential of getting a virus that is known to harm the heart and lungs is a scary thing. We made it almost two years without any of us getting COVID but that came to end over the holidays. I got sick first and despite our precautions, Conner ended up testing positive right before he was supposed to go back to school, with his initial symptom being a stuffy nose. We called our doctor right away and she told us that we should just keep a close eye on him to make sure he didnβt get any worse. Luckily, his symptoms never got too bad (just a low grade fever, sore throat, and slight cold) and he was able to get the booster shot as soon as possible after testing negative again. All things considered, we truly believe that he was as lucky as he was because he had both doses of the vaccine and got Omicron instead of one of the earlier variants. Through our whole experience with the pandemic and getting COVID, one of the most difficult aspects was understanding and coming to terms with the fact that many people donβt share our caution. It can be hard to understand why weβre as careful as we are when you donβt know what it feels like to have someone you love be immunocompromised during a deadly pandemic. We found ourselves having to explain to those around us why it is so important for us all to wear our masks and avoid large crowds and people donβt always understand, especially with the βstereotypesβ surrounding what happens when you get COVID as a child. The main information about children and COVID has been that children are resilient and wonβt be too negatively impacted by it, but that just isnβt true for immunocompromised children. The majority of people will never truly understand our situation until they find themselves in a similar one so it can be hard to hear people complaining about still having to wear masks and saying that COVID isnβt a big deal anymore. At the beginning, it was also hard to find balance in the conversations we had to have with our children because of our own levels of worry.
Summary: Weβve been living in fear of the COVID-19 pandemic for the past two years and itβs been a difficult time for many people to understand what it feels like to have someone you love be immunocompromised during a deadly pandemic Like many people, weβre accustomed to inconvenience and we understand, out of practice, how isolating some experiences can be, but it can be hard to understand why many people donβt share our caution and why some people are able to avoid large crowds even with the vaccines we received when we were in the state of the virus in 2015..β- Karen Morales, a mom of two, lives with a very rare form of a rare disease called Limb-girdle muscular dystrophy and has been living with the virus for the last two years, including the first COVID pandemic in the United States, which killed more than 1,000 people and left more than 2,000 of them in a coma and hundreds of others with life-threatening conditions, including a severe brain injury and a brain haemorrhage, a stroke, and a heart attack and a broken heart, among other things, and many others with rare diseases, including my son Conner, who has Duchenne. | 04-19-2022 11:50:52 | 04-19-2022 11:50:52 | Hi @Sandip-Hapani134 π Assuming you are using our `generate()` function, try adding the `do_sample=True` argument to it.
Let us know if it helped :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,829 | closed | Add doc about `attention_mask` on gpt2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16811.
>If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for `past_key_values`. In other words, the attention_mask always has to have the length: `len(past_key_values) + len(input_ids)`
I added the sentence above describing how `attention_mask` needs to be constructed when `past_key_values` is used. The sentence is added in both the `PyTorch` and `TensorFlow` versions of the code.
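A small usage sketch of the documented behaviour (the checkpoint name is just an example):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog", return_tensors="pt")
out = model(**inputs, use_cache=True)
past = out.past_key_values

# Next step: only the new token is fed in, but the attention mask must cover
# past + new tokens, i.e. have length len(past_key_values) + len(input_ids).
next_token = out.logits[:, -1:].argmax(-1)
attention_mask = torch.cat([inputs["attention_mask"], torch.ones_like(next_token)], dim=-1)
out = model(input_ids=next_token, past_key_values=past, attention_mask=attention_mask, use_cache=True)
```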
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-19-2022 11:10:35 | 04-19-2022 11:10:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,828 | closed | Fixing return type tensor with `num_return_sequences>1`. | # What does this PR do?
Fixes #16796
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-19-2022 10:02:02 | 04-19-2022 10:02:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,827 | closed | Adding support for `array` key in raw dictionaries in ASR pipeline. | # What does this PR do?
Adding support for the `array` key in raw dictionaries in the ASR pipeline.
This means we can simplify the Quicktour example to look a bit
more like the old `ffmpeg` example again.
Even simpler code might be enabled by
https://github.com/huggingface/datasets/issues/4180
in the future. We could then remove the `[:4]`, which is currently necessary to prevent
loading a huge audio file into RAM upfront.
Performance-wise, the iterator also enables using options like `batch_size` and `num_workers` in the
pipeline object to get optimal performance (this will only show real differences on GPU, which is not
included in this quicktour).
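A hedged sketch of the call pattern this enables (model and dataset names are only examples):
```python
from datasets import load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

sample = ds[0]["audio"]  # {"path": ..., "array": np.ndarray, "sampling_rate": 16000}
print(asr({"array": sample["array"], "sampling_rate": sample["sampling_rate"]}))
```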
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 04-19-2022 09:24:27 | 04-19-2022 09:24:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,826 | closed | [Semantic script] Improve README | # What does this PR do?
This PR improves the README of the semantic segmentation example script.
To do:
- [ ] include higher quality gif | 04-19-2022 08:21:35 | 04-19-2022 08:21:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closing this one in favor of #16834 |
transformers | 16,825 | closed | Correct Logging of Eval metric to Tensorboard | # What does this PR do?
An empty dictionary ``eval_metrics`` was being logged; it is replaced by ``eval_metric``, which is the output dictionary of ``metric.compute()``. This was probably a typo in the code.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger, @patil-suraj | 04-19-2022 04:47:12 | 04-19-2022 04:47:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Actually `eval_metrics` is never used, could you delete this line and update the PR ? This will fix the failing test.
https://github.com/huggingface/transformers/blob/d3bd9ac72802c0a3d04c3c63739bcd8f0731b593/examples/flax/text-classification/run_flax_glue.py#L595 |
transformers | 16,824 | closed | Transformers documentation translation to Portuguese | Hi!
Let's bring the documentation to all the Portuguese-speaking community :)
Who would want to translate? **Please follow our [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md).** Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list.
Some notes:
- Please translate using `Você` and not `Tu`.
- Please translate in a gender-neutral way.
- Add your translations to the folder [source/pt/](https://github.com/huggingface/transformers/blob/main/docs/source/pt/)
- Register your translation in [pt/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/pt/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
- Once you're finished, open a pull request and tag this issue by including `#issue-number` in the description, where `issue-number` is the number of this issue.
- If you'd like others to help you with the translation, you can also post in our [forums](https://discuss.huggingface.co/) or tag [@espejelomar](https://twitter.com/espejelomar) on Twitter to gain some visibility.
## Get Started section
- [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). @vitorfrois
- [x] [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx). @rzimmerdev
## Tutorials
- [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/pipeline_tutorial.mdx) @rzimmerdev
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/preprocessing.mdx) WIP @rzimmerdev
- [x] [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx) @rzimmerdev
- [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx) @rzimmerdev
- [x] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.mdx) @rzimmerdev
- [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/multilingual.mdx) @rzimmerdev
## How-to guides
- [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx "fast_tokenizers.mdx") WIP @Fellip15
- [x] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx "create_a_model.mdx") WIP @[Fellip15](https://github.com/Fellip15)
- [ ] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx "custom_models.mdx")
- [ ] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx "run_scripts.mdx")
- [ ] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx "sagemaker.mdx")
- [ ] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx "converting_tensorflow_models.mdx")
- [ ] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx "serialization.mdx")
- [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx "performance.mdx")
- [ ] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx "parallelism.mdx")
- [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx "benchmarks.mdx")
- [ ] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx "migration.mdx")
- [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx "troubleshooting.mdx")
- [ ] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx "debugging.mdx")
- [ ] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx "community.mdx")
- [ ] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx "docs/source/en/add_new_model.mdx")
- [ ] [add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx "add_new_pipeline.mdx") @Felipehonorato1
- [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx "testing.mdx")
- [ ] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx "pr_checks.mdx")
## FINE-TUNE FOR DOWNSTREAM TASKS
- [x] [sequence_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/sequence_classification.mdx "sequence_classification.mdx") @jonatasgrosman
- [x] [token_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/token_classification.mdx "token_classification.mdx") @jonatasgrosman
- [ ] [question_answering.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/question_answering.mdx "question_answering.mdx") WIP @jonatasgrosman
- [ ] [language_modeling.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/language_modeling.mdx "language_modeling.mdx") WIP @jonatasgrosman
- [ ] [translation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/translation.mdx "translation.mdx") WIP @jonatasgrosman
- [ ] [summarization.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/summarization.mdx "summarization.mdx") WIP @jonatasgrosman
- [ ] [audio_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/audio_classification.mdx "audio_classification.mdx") WIP @jonatasgrosman
- [ ] [asr.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/asr.mdx "asr.mdx") WIP @jonatasgrosman
- [ ] [image_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/image_classification.mdx "image_classification.mdx") WIP @jonatasgrosman
- [ ] [multiple_choice.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/multiple_choice.mdx "multiple_choice.mdx") WIP @jonatasgrosman
## CONCEPTUAL GUIDES
- [ ] [philosophy.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/philosophy.mdx "philosophy.mdx") @victorescosta
- [ ] [glossary.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/glossary.mdx "glossary.mdx") @victorescosta
- [ ] [pad_truncation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/pad_truncation.mdx "docs/source/en/pad_truncation.mdx") @victorescosta
- [ ] [bertology.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/bertology.mdx "bertology.mdx") @victorescosta
- [ ] [perplexity.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/perplexity.mdx "perplexity.mdx") @victorescosta
FYI @osanseviero @stevhliu @sgugger @mishig25 | 04-18-2022 22:49:19 | 04-18-2022 22:49:19 | Hey, I'll translate quicktour.mdx. <|||||>USP 🇧🇷 present!
Thank you @vitorfrois and @rzimmerdev! I marked the docs you are working on as WIP (work in progress) 🤗. @rzimmerdev thank you very much for your PR! If you can, please list here which other documents you would like to translate. We can add more if we get further along 🚀<|||||>Hi Omar, I have finished translating the following docs:
pipeline_tutorial.mdx
training.mdx
accelerate.mdx
multilingual.mdx
installation.mdx
I still plan to finish the following:
preprocessing.mdx
model_sharing.mdx
Once I finish these last two, I will give an update on the next ones I will do.
🤗<|||||>@rzimmerdev thank you very much! I just added your name above to the files you already translated and the ones you plan to translate 🤗<|||||>Hello Omar,
I am translating:
* fast_tokenizers.mdx
* create_a_model.mdx
As I pick up more to translate, I will keep adding them to this comment.<|||||>Thank you very much @Fellip15! I added your name to the first comment in this issue for `fast_tokenizers`.<|||||>Hi @omarespejel I'd like to contribute here! 🙋 🇧🇷
I'll translate the "FINE-TUNE FOR DOWNSTREAM TASKS" section.
PS: It seems that some "How-to guides" pages are in the "Downstream tasks" section of your checklist<|||||>Hi @jonatasgrosman! Amazing 🇧🇷! I added your name to the "FINE-TUNE FOR DOWNSTREAM TASKS" section and also reviewed your #17352 PR!
Thank you for your feedback on the checklist order! I applied it 🤗.<|||||>Hi Omar, I will send the translations for the files you asked me to do :)<|||||>Thank you very much @rzimmerdev. They are key to having the documentation ready for the next Transformers release 🤗.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Oi Omar, vou voltar a traduzir alguns documentos<|||||>OlΓ‘ @rzimmerdev! Thank you, that would be great! Any one you prefer to translate? Sorry for my late reply.<|||||>@omarespejel I can translate the entire part of conceptual guides!<|||||>Thank you, @victorescosta! I added your name to the guide above! π Do you use Brazilian or Portuguese PT?<|||||>> Thank you, @victorescosta! I added your name to the guide above! π Do you use Brazilian or Portuguese PT?
Brazilian portuguese. Is it ok? I will try to minimize these differences in my translation<|||||>@victorescosta, that sounds perfect, thank you!<|||||>Hey, @omarespejel I'd like to contribute! π§π·
I would like to translate the add_new_pipeline How to guide!<|||||>Thank you @Felipehonorato1! I added you to the list 🇧🇷. Please tag me in your PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey, I'll translate:
- custom_models.mdx
- run_scripts.mdx
- converting_tensorflow_models.mdx
- serialization.mdx<|||||>Awesome! Looking forward the PR! :fire: |
transformers | 16,823 | closed | (TF) model.generate to tf.function for tf serving | ### Feature request
It would be nice if you wrapped the generate method of autoregressive models into a `tf.function`. That way we could export and serve it with the whole TensorFlow production stack.
It's kind of a revival of #5443.
It would enable us to do something like:
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM
import tensorflow as tf
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
model.save(
"some_place",
signatures={
"serving_default": model.generate.get_concrete_function(tf.TensorSpec([None, None], tf.int32))
}
)
```
And then serve it on TF production stack.
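For context, here is a rough sketch of how such an exported artifact would then be consumed, assuming `generate` can be traced at save time (this is aspirational in this issue, names, paths and token ids below are placeholders):
```python
import tensorflow as tf

# Load the SavedModel produced by the hypothetical export above and call its signature.
loaded = tf.saved_model.load("some_place")
serving_fn = loaded.signatures["serving_default"]

# Placeholder token ids; the keyword name depends on how the signature was traced.
output = serving_fn(input_ids=tf.constant([[15496, 995]], dtype=tf.int32))
print(output)
```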
### Motivation
It would be nice if you wrapped the generate method of autoregressive models into a `tf.function`. That way we could export and serve it with the whole TensorFlow production stack.
It is frustrating to have to write generate by hand or move to PyTorch to serve generative language models.
### Your contribution
I could write a PR, though it would be nice if HF could share what they have done when trying it, as @Rocketknight1 and @patrickvonplaten said in: https://github.com/huggingface/transformers/issues/5443#issuecomment-1020067525_ , so I would have somewhere to start from. | 04-18-2022 22:34:10 | 04-18-2022 22:34:10 | Hey @piEsposito,
The function should now be usable with `tf.function`, I think. We don't want to wrap generate in `tf.function` automatically ourselves, but you should be able to do the following now:
```py
#!/usr/bin/env python3
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:
tf.config.experimental.set_memory_growth(device, True)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("hello there can you continue", return_tensors="tf").input_ids
xla_generate = tf.function(model.generate, jit_compile=True)
outputs = xla_generate(input_ids)
print("Output", tokenizer.batch_decode(outputs))
```<|||||>cc @gante <|||||>Hey @piEsposito π As @patrickvonplaten mentioned, we have some generation functionality that can be wrapped by `tf.function` to be highly accelerated -- our tests point at a >30x speedup if an nVidia T4 is used.
The example provided should be functional and XLA-accelerated. However, some advanced features are not yet XLA-compatible, including:
- accelerated serving of different lengths (changing input length triggers recompilation at the moment)
- Beam Search (`num_samples` option in `generate`)
- `generate` options like `bad_words_ids` or `no_repeat_ngram_size`
All these should be solved in the next 1-2 months. Keep an eye on our releases, and let us know if you run into problems :)<|||||>Hey @gante , thanks for the quick reply.
Actually, my problem is specifically creating a serving signature that receives an input with variable length so I can use it with TF Serving in production. Do you have anything on that? <|||||>`tf.function` has a `experimental_relax_shapes` argument, which may help there. I can't confirm, as I haven't tested :) An alternative would be to pad all inputs to the maximum length accepted by the model, but that might spend needless memory/computing.<|||||>@gante thanks. Do you know how can I use the generate method with the fully padded sequences? It always throws an error here :( .<|||||>Pardon me, I wrote a half-truth above :) For encoder-decoder (aka sequence to sequence) models like T5, you can do as I wrote above. For decoder-only models like gpt-2 you can left-pad to a constant length -- see [this test](https://github.com/huggingface/transformers/blob/main/tests/gpt2/test_modeling_tf_gpt2.py#L451) as an example.<|||||>Sorry, but still when I do pad it to `max_length` (if we set padding to `True` it won't pad the max accepted length) it throws me an error:
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
import tensorflow as tf
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
encoded_input = tokenizer([text],
return_tensors='tf',
padding="max_length")
model.generate(
encoded_input.input_ids,
max_length=1024
)
```
Throws me a:
```
ValueError: The context has 1024 number of tokens, but `max_length` is only 1024.
```
And of course I can't set `max_length` to anything more than 1024.
Am I doing something wrong? <|||||>The constant length in decoder-only models has to be smaller than `max_length` (as opposed to encoder-decoder models, where it can be padded to `max_length`), and the difference between your constant and `generate`'s `max_length` corresponds to the maximum tokens `generate` can generate.<|||||>When I pad and leave a few tokens for new generation, it still won't generate my text, but rather some random stuff after about 1000 eos tokens:
```python
text = "Replace me by any text you'd like."
encoded_input = tokenizer([text],
return_tensors='tf',
padding="max_length")
preds = model.generate(
encoded_input.input_ids[:, 50:],
max_length=1024,
pad_token_id=tokenizer.pad_token_id
)
tokenizer.batch_decode(preds)
```
And I get something like
```
[
"<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>...
Replace me by any text you'd like.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
]
```
This result stays the same even when I explicitly mask the padded tokens:
```python
preds = model.generate(
encoded_input.input_ids[:, 50:],
max_length=1024,
attention_mask=encoded_input.attention_mask[:,50:]
)
```
When we try with the same input and do greedy decoding it makes sense. <|||||>It seems to be related to https://github.com/huggingface/transformers/blob/3104036e7f1a3cd6e07a69d648c3597de32f72fe/src/transformers/models/gpt2/modeling_tf_gpt2.py#L816-L842
Where when we are not passing `use_xla=True` it will set the attention masks as `None`.
But it could be something else, as just passing use_xla as True changes the result but won't fix it.<|||||>@piEsposito it seems like we still have a couple of bugs to fix :D
I'm afraid I can't be of much further help -- I'm actively developing XLA + `generate`, but I don't expect to be able to sort your particular issue within the next month. The roadmap is approximately XLA logits processors -> XLA beam search -> efficient XLA batching (your issue) -> XLA on more models beyond GPT-2 and T5. When all this is sorted, we will make a big announcement and publish some tutorials. Until then, feel free to ping me to query the state of the XLA changes :)<|||||>@gante if you have an open-sourced branch I would love to help with that generate stuff. If not, thank you for your time and for trying to help me out with this. <|||||>@piEsposito that would be lovely :)
The step I will work next, as I mentioned above, is to make the [logit processors](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_tf_logits_process.py) XLA-compatible. In other words, rewrite them such that the tests [here](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_tf_logits_process.py) pass if you compile the function with `tf.function(jit_compile=True)`. Some of them may already work -- feel free to claim one (or more) for you to work on, excluding the `repetition_penalty` (which I've already rewritten for XLA in a branch)<|||||>@gante hacking Tensorflow away to make stuff serializable is kind of a hobby and has also been paying my bills for a long time, so I can work on that.
I just need a bit more context:
- How do I "claim" those logit-processors to work on?
- Should I re-write those tests but using the compiled tf functions?
- Can you point me to your branch to check how you are adding the new tests (to keep the same style)?
Thanks, let's do it. <|||||>Awesome @piEsposito! I will open a PR today, so you can have an example, and post here a more detailed guide 💪 <|||||>Thanks! <|||||>@piEsposito
[This is the PR for an XLA-compatible repetition penalty logits processor](https://github.com/huggingface/transformers/pull/16879). I've just opened it, so I'd suggest waiting until the review process is complete before starting on a new logit processor.
After the PR above gets approved, the process would be:
- write here which logit processor you would like to work on, so we don't work on the same one (this is what I meant by "claim" :) );
- write the XLA test, as in the PR linked above (feel free to make the tests stricter, as I did in the PR; a minimal toy example is sketched right after this list);
- make modifications until it passes -- I suspect that a few of them are already XLA-compatible.
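For illustration only, here is a minimal sketch of the kind of XLA-compiled check meant above, using a made-up toy processor rather than an actual `transformers` class:
```python
import tensorflow as tf

def forbid_token_zero(input_ids, scores):
    # Toy "logits processor": push the score of token 0 to -inf using only XLA-friendly ops.
    # (input_ids is unused in this toy example.)
    vocab_size = scores.shape[-1]  # static at trace time
    mask = tf.one_hot(0, depth=vocab_size, on_value=float("-inf"), off_value=0.0)
    return scores + mask

# The actual goal: the real processors should behave identically when compiled like this.
xla_processor = tf.function(forbid_token_zero, jit_compile=True)

scores = tf.random.uniform((2, 10))           # (batch, vocab)
input_ids = tf.constant([[1, 2], [3, 4]])
print(xla_processor(input_ids, scores))
```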
If you run into issues along the way, let me know. I will let you know here when the PR gets approved, so we can start on the next processors.<|||||>(The PR got approved and merged. Working on the `TFLogitsWarper` subclasses now.)<|||||>> (The PR got approved and merged. Working on the `TFLogitsWarper` subclasses now.)
Let's do it man. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>(beam search being worked on atm, last missing piece) |
transformers | 16,822 | closed | Inference/prediction ValueError using BART | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.19.0-19-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BartForConditionalGeneration, BartTokenizerFast
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
from datasets import load_dataset
raw_dataset = load_dataset(path='parquet', data_files={
'train': ['2021_1.parquet', '2021_2.parquet', '2021_3.parquet'],
'test': ['2021_4.parquet'
]})
model = BartForConditionalGeneration.from_pretrained("bart_nl_sum_17-04_15-50-23/checkpoints/checkpoint-10000")
tokenizer = BartTokenizerFast.from_pretrained("bart_nl_tiny_tz")
model_inputs = tokenizer(raw_dataset['train'][2]['description'], max_length=1024, return_tensors='pt', truncation=True) # padding=max_length
model_outputs = model.generate(
inputs=model_inputs["input_ids"],
max_length=150,
min_length=40,
length_penalty=2.0,
num_beams=4,
early_stopping=True
)
print(tokenizer.decode(model_outputs[0]))
```
```python
Traceback (most recent call last):
File "legalsum_inf.py", line 22, in <module>
early_stopping=True
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1325, in generate
**model_kwargs,
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 2162, in beam_search
output_hidden_states=output_hidden_states,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1363, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1224, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 793, in forward
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
```
### Expected behavior
```shell
I tried to summarize a text from my dataset using a custom fine-tuned BART model. To this end I followed the PyTorch example listed at https://huggingface.co/docs/transformers/task_summary#summarization
This is the code that was used and the error that was shown. Any ideas what could be the problem here?
```
| 04-18-2022 21:43:09 | 04-18-2022 21:43:09 | What is the error here ?
<|||||>> What is the error here ?
I actually forgot to include the error. I just added it :)<|||||>This just looks like a typo, the argument to `generate` is `input_ids` and `inputs`.<|||||>Hey @patil-suraj, thank you for your response.
I'm not entirely sure what you mean; I followed the exact implementation as shown here https://huggingface.co/docs/transformers/task_summary#summarization .
However, I just found out that I made a mistake during pretraining... Instead of pretraining for conditional generation, I trained the CausalLM model. This led to a model that was not an encoder-decoder. I just re-pretrained a small model to see whether everything works fine and indeed it does!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,821 | closed | fix _setup_devices in case where there is no torch.distributed package in build | # What does this PR do?
At least in some instances (e.g. conda on my m1), torch is built without distributed support enabled. This takes the form of torch.distributed.is_available() returning false and torch.distributed.is_initialized() raising an exception. In this particular method, it's enough to skip the check if it's not available.
This behavior causing this crash was added in https://github.com/huggingface/transformers/pull/16487
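The guard presumably ends up looking something like this (a sketch, not the exact diff):
```python
import torch

# Only query the distributed state when the build actually supports it and it was initialized;
# otherwise torch.distributed.is_initialized() can raise on builds without distributed support.
if torch.distributed.is_available() and torch.distributed.is_initialized():
    ...  # safe to inspect the process group / local rank here
else:
    ...  # skip the distributed checks entirely
```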
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I could mock this out but it seems painful and unnecessary. I can try to do it if you want.
## Who can review?
Seems like @sgugger is the best reviewer here? | 04-18-2022 21:20:26 | 04-18-2022 21:20:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>done!<|||||>Thanks! |
transformers | 16,820 | closed | [Data2Vec Text] Correct Data2Vec Text | # What does this PR do?
Data2Vec Text has no `ForMaskedLM` model. The point of the paper is to not have this pretraining objective, but the Data2Vec pretraining method as shown here: https://huggingface.co/facebook/data2vec-text-base .
This pretraining method is too complicated though to add it right away. So for now we'll just add the fine-tuned checkpoints.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-18-2022 15:57:11 | 04-18-2022 15:57:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> We can't just remove a class that is in the main init and was in the release, and I also don't see why we should remove it. GPT-2 is not meant for token classification if you read its paper, yet we still have the ` GPT2ForTokenClassification`.
>
> Likewise, even if this model was not pretrained using masked language modeling, I don't see why we should delete the architecture.
Ah yeah this one was already in the release :-/
We haven't promoted it yet, so not sure anybody really uses the models yet, but yeah not great if the class already existed in a release.
It's a borderline error to me as `ForMaskedModeling` is rarely used for fine-tuning IMO, but more or less solely for pretraining. We might in the future add the correct `ForPreTraining` class, so this could be a bit confusing, but overall probably not worth the hussle here so fine with just leaving it.
cc @edugp @mrm8488 |
transformers | 16,819 | closed | TF: Add sigmoid activation function | # What does this PR do?
Fixes #16810 -- adds the sigmoid activation function to the TF activation functions.
Also sorts the activation functions in the enum-like dict and in the tests, to quickly identify missing functions. | 04-18-2022 15:46:00 | 04-18-2022 15:46:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>As an aside, GELU is a core Keras activation now, although we might have to wait until we can move our minimum version before we can switch to using it instead of our own implementations. Other than that, this looks great! |
transformers | 16,818 | closed | Update build_pr_documentation.yml | null | 04-18-2022 15:45:42 | 04-18-2022 15:45:42 | Ignore =) |
transformers | 16,817 | closed | Wav2 vec2 phoneme ctc tokenizer optimisation | # What does this PR do?
This is my FIRST PR!
The Wav2Vec2PhonemCTCTokenizer is slow when its argument `do_phonemize` is set to True. It re-initialises the backend at each forward pass. This is adressed using a class argument.
There was also an H4 title in the documentation which had a link which did not render( `<h4></h4>` used to replace `####`)
Tests were passed, no additional ones were created. Runtime experiments to phonemize the entire 'tr' (turkish) subset of the common voice dataset gives a x10 boost in performances.
Models:
- Wav2Vec2PhonemeCTCTokenizer: @patrickvonplaten, @LysandreJik
Documentation: @sgugger
| 04-18-2022 13:23:29 | 04-18-2022 13:23:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,816 | closed | CI: Create empty venv on cache miss | # What does this PR do?
Fixes a CI error introduced by https://github.com/huggingface/transformers/pull/16789 -- when there was a cache miss, it attempted to initialize a virtual environment that didn't exist, in two CI workflows.
This PR fixes that by creating empty venvs on a cache miss.
(I wonder why the original CI run inside the PR didn't fail, the new cache names should have triggered the same error π€ ) | 04-18-2022 10:45:51 | 04-18-2022 10:45:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,815 | closed | Make MegatronBert converter compatible with latest megatron code | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The attention parameters in latest megatron code are renamed to `self_attention.*`, (see https://github.com/NVIDIA/Megatron-LM/blob/e156d2fea7fc5c98e645f7742eb86b643956d840/megatron/model/transformer.py#L429), we should also change them in the megatron converter code.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-18-2022 09:54:54 | 04-18-2022 09:54:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, thank you for your PR! It was superseded by the following PR: https://github.com/huggingface/transformers/pull/15820
Thanks again!<|||||>> Hey, thank you for your PR! It was superseded by the following PR: #15820
>
> Thanks again!
Very good, sorry for missing it :) |
transformers | 16,814 | closed | Allow passing encoder_outputs as tuple to EncoderDecoder Models | # What does this PR do?
Fixes #15536
For now I have added a test of the functionality to an existing test, since I'm not sure if there should be a separate test (or a test at all) for such a minor issue. Let me know if there is a better way.
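The fix presumably boils down to something along these lines inside `EncoderDecoderModel.forward` (a sketch, not the exact diff):
```python
from transformers.modeling_outputs import BaseModelOutput

# Accept a plain tuple for encoder_outputs by wrapping it in the expected ModelOutput class.
if encoder_outputs is not None and isinstance(encoder_outputs, tuple):
    encoder_outputs = BaseModelOutput(*encoder_outputs)
```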
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/15536
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | 04-18-2022 07:46:21 | 04-18-2022 07:46:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>
@jsnfly , thank you for this PR... Is it possible to do this fix for the T5 model as well? It is also a sequence-to-sequence model and sometimes we may want to pass a tuple to the decoder. |
transformers | 16,813 | closed | huggingface-cli login in dockerfile | Hello. I want to use private model in my docker environment.
For that, I need to log in inside the Dockerfile (to deploy the model using an entrypoint).
Is there any way to do this? | 04-17-2022 23:00:25 | 04-17-2022 23:00:25 | I copied the token file to my /root/.huggingface but it doesn't work.<|||||>In your Dockerfile, you can manually save the token with the following:
```
python -c 'from huggingface_hub import HfFolder; HfFolder.save_token("<TOKEN>")'
```
It will save it in `~/.huggingface/token`. You can check it works with
```
python -c 'from huggingface_hub import whoami; print(whoami())'
```<|||||>Thank you !
I used `from huggingface_hub.commands.user import _login; _login(token=TOKEN)`, and this worked.
<|||||>Perfect!<|||||>Hello @hyunwoongko
The `_login` function requires `hf_api` argument, which one did you give?<|||||>Hello @taki0112. I am using the following script.
```python
from huggingface_hub import HfApi
from huggingface_hub.commands.user import _login
_login(HfApi(), token="YOUR_KEY")
```<|||||>@hyunwoongko Thanks a lot !<|||||>I go through the code but It was not work for me, I think it is regarding new updates, but you can use directly _login from _login.py
```python
from huggingface_hub._login import _login
_login(token='your token as string', add_to_git_credential=False)
```
|
transformers | 16,812 | closed | Add Wav2Vec2Conformer | # What does this PR do?
Fixes #16640
This PR adds Fairseq's Wav2Vec2Conformer checkpoints.
Wav2Vec2Conformer outperforms Wav2Vec2 on Librispeech by some margin (1.8% WER instead of 2.2% WER).
Compared to Wav2Vec2, the "attention" block is enhanced by a "Conformer" block as described in https://arxiv.org/abs/2005.08100. Thus, the "large" Wav2Vec2Conformer checkpoints have roughly twice as many parameters as the "large" Wav2Vec2 checkpoints.
All checkpoints can be found on the Hub (see the links in the ToDos below).
### **Final ToDos before merging**
- [x] Add all checkpoints - see here: https://huggingface.co/models?other=wav2vec2-conformer
- [x] Evaluate checkpoints on Librispeech; this yields the following results:
- Without LM:
[rel-pos-960h](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large-960h-ft): **1.85 WER** (clean) | **3.82 WER** (other)
[rope-960h](https://huggingface.co/facebook/wav2vec2-conformer-rope-large-960h-ft): **1.96 WER** (clean) | **3.98 WER** (other)
- With 4-gram LM:
[rel-pos-960h](https://huggingface.co/patrickvonplaten/wav2vec2-conformer-rel-pos-large-960h-ft-4-gram): **1.94 WER** (clean) | **3.58 WER** (other)
[rope-960h](https://huggingface.co/patrickvonplaten/wav2vec2-conformer-rope-large-960h-ft-4-gram): **1.88 WER** (clean) | **3.57 WER** (other)
- [ ] Discuss with @sravyapopuri388 about how to release model in Transformers. Will a paper be published ? A short article?
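For reference, a minimal usage sketch for one of the fine-tuned checkpoints listed above, assuming it ships the usual processor files (standard Wav2Vec2-style CTC inference; the audio below is just a silent placeholder):
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

ckpt = "facebook/wav2vec2-conformer-rel-pos-large-960h-ft"
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = Wav2Vec2ConformerForCTC.from_pretrained(ckpt)

audio_array = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 second of 16 kHz silence
inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```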
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-17-2022 21:52:51 | 04-17-2022 21:52:51 | Currently blocked by https://github.com/pytorch/fairseq/issues/4356<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger would be great if you could do a quick 2nd round of review, notably:
1.) UniSpeechSatBaseModelOutput and others don't exist anymore and are replaced by Wav2Vec2ModelOutput. Logically this makes sense since those models are all trained using the Wav2Vec2 loss and thus need to return both `extract_features` and `last_hidden_states`. Also those model outputs were never in the public init so I think that's fine from a bcp point of view
2.) There is no official paper for this model (yet) - I'm in contact with the authors and we're discussing how to best promote it. Would like to merge this model in a somewhat silent way until the authors come back to me. => thus it's not mentioned in the README.md .
Let me know if that's fine for you :-) <|||||>Test failure is unrelated |
transformers | 16,811 | closed | Confusion about past_key_values and attention_mask in GPT2Attention | # Environment info
- `transformers` version: 4.12.5
Models:
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
# Information
When I read through the code in `modeling_gpt2`, I got confused about how attention_mask is used. Here, the code concatenates the past key and value into the current hidden_state's key and value. Here's the code in `modeling_gpt2.GPT2Attention`'s `forward` method:
```python
query = self._split_heads(query, self.num_heads, self.head_dim)
key = self._split_heads(key, self.num_heads, self.head_dim)
value = self._split_heads(value, self.num_heads, self.head_dim)
if layer_past is not None:
past_key, past_value = layer_past
key = torch.cat((past_key, key), dim=-2)
value = torch.cat((past_value, value), dim=-2)
if use_cache is True:
present = (key, value)
else:
present = None
if self.reorder_and_upcast_attn:
attn_output, attn_weights = self._upcast_and_reordered_attn(query, key, value, attention_mask, head_mask)
else:
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
```
However, later in the `self._attn` function, when using an `attention_mask`, the code directly adds the `attention_mask` to the attention weight. Here's the code in the `self._attn` method:
```python
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
attn_weights = torch.matmul(query, key.transpose(-1, -2))
if self.scale_attn_weights:
attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)
# Layer-wise attention scaling
if self.scale_attn_by_inverse_layer_idx:
attn_weights = attn_weights / float(self.layer_idx + 1)
if not self.is_cross_attention:
# if only "normal" attention layer implements causal mask
query_length, key_length = query.size(-2), key.size(-2)
causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].bool()
attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype))
if attention_mask is not None:
# Apply the attention mask
attn_weights = attn_weights + attention_mask
attn_weights = nn.Softmax(dim=-1)(attn_weights)
```
The `attn_weights ` has the shape of `[batch, n_head, query_length, key_length]`, and `attention_mask` here has the shape of `[batch, 1, 1, seq_length]`. Does this action imply that the input attention mask's `seq_length` must match the full context length `key_length` instead of `query_length`? In other words, when we use `past_key_and_values`, the `attention_mask` must contain sequences from `past_key_and_values` and `input_ids` instead of only the sequences from `input_ids`?
| 04-17-2022 17:59:48 | 04-17-2022 17:59:48 | Great question @wiio12!
You're exactly right `attention_mask` needs to contain the masking strategy that was used for `past_key_values`. In other words, the `attention_mask` always has to have the length: `len(past_key_values) + len(input_ids)`<|||||>Thank you for your response @patrickvonplaten, very clear! Now I am sure how `past_key_value` and `attention_mask` work.
I wonder if this constraint is mentioned in any documentation, otherwise, the user may get an error with dimension mismatch but not know why this happens.
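For anyone landing here later, a minimal sketch of the rule above (the prompt strings are placeholders):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

first = tokenizer("Hello, my dog", return_tensors="pt")
out = model(**first, use_cache=True)                       # caches keys/values for the prompt

new_ids = tokenizer(" is", return_tensors="pt").input_ids  # the next chunk of input_ids
# attention_mask must cover past + new tokens, i.e. len(past_key_values) + len(new_ids)
full_mask = torch.cat([first.attention_mask, torch.ones_like(new_ids)], dim=-1)

model(input_ids=new_ids, past_key_values=out.past_key_values, attention_mask=full_mask)
```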
<|||||>Think it'd be a good idea to document this somewhere! Would you like to add a sentence to the documentation of the `attention_mask` parameter in GPT2?<|||||>Not sure I did it correctly, but I changed the `doc_string` in `modeling_gpt2` and `modeling_tf_gpt2` and made a PR #16829.
Correct me if I did it wrong :)<|||||>Looks great! |
transformers | 16,810 | closed | Missing activation Function | I think the sigmoid / softmax activation function is missing here
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/roberta/modeling_tf_roberta.py#L1299 | 04-17-2022 13:04:48 | 04-17-2022 13:04:48 | Hey @RodSernaPerez π The `sigmoid` activation function was added to our list of TF activation functions, it should be operational now (you will have to install `transformers>=4.19.0.dev0` or pull from `main`).
Feel free to reopen the issue if you run into problems :) |
transformers | 16,809 | closed | [Benchmark] | # π₯ Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| 04-17-2022 03:16:29 | 04-17-2022 03:16:29 | |
transformers | 16,808 | closed | Pin Jax to last working release | # What does this PR do?
Jax maintainers has apparently no problem breaking the `optax` library and every library depending on it. So this PR pins Jax and jaxlib for now until this gets sorted out.
cc @patil-suraj @patrickvonplaten for information, will merge as soon as CI is green (or only 500 errors coming from the Hub). | 04-17-2022 00:38:05 | 04-17-2022 00:38:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for fixing this @sgugger ! |
transformers | 16,807 | closed | use transformers on apple mac m1 (TF backend) | is it possible?
I saw that there is an option to use tensorflow-metal..but does it gonna work seamlessly with transformers?
https://developer.apple.com/metal/tensorflow-plugin/
for pytorch it seems not possible yet.
| 04-16-2022 18:52:19 | 04-16-2022 18:52:19 | I believe it should work with both on M1, @NielsRogge you have experience using `transformers` on M1 macs right?<|||||>I'm using PyTorch only on my Mac, but if you use TF for Transformers you get a nice speedup, see here for a nice thread: https://twitter.com/lvwerra/status/1470818833619468290?t=1bDbo3Kl-MHBF2MbDh5GiA&s=19<|||||>thanks a lot!
I'll try it.
so do you mean that you use pytorch without the m1 gpu?<|||||>> so do you mean that you use pytorch without the m1 gpu?
Yes, for now. However the PyTorch team is planning to add support for it. |
transformers | 16,806 | closed | use base_version to check torch version in torch_less_than_1_11 | The current check `version.parse(torch.__version__) < version.parse("1.11")` doesn't work if the version has additional stuff after it like `1.11.0a0+17540c5`.
New way: `version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11")`
This change, as suggested by @whoknowsB [here](https://github.com/huggingface/transformers/pull/16043#issuecomment-1079644824), fixes it.
Fixes https://github.com/huggingface/transformers/issues/16587 and https://github.com/huggingface/transformers/issues/14375
## Who can review?
@LysandreJik
@sgugger
| 04-16-2022 17:19:48 | 04-16-2022 17:19:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> It also properly deals with the `dev0` and the likes, so all good to me, thanks for the fix! Could you just apply the same to the line before (`is_torch_less_than_1_8`) so that next time someone adds a new check like this, they don't hesitate and follow the same model?
Done. |
transformers | 16,805 | closed | run_translation.py: resize decoder based on tokenizer | # What does this PR do?
Changes the `run_translation.py` script to resize the decoder based on the tokenizer.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 04-16-2022 16:49:57 | 04-16-2022 16:49:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16805). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,804 | closed | Blenderbot export issue in ONNX and C++ | # π Migration
Hi everyone, my issue is regarding blenderbot: facebook/blenderbot-400M-distill. I did the exportation with:
`python -m transformers.onnx --model=facebook/blenderbot-400M-distill onnx/`
And I get:
```
Some weights of the model checkpoint at facebook/blenderbot-400M-distill were not used when initializing BlenderbotModel: ['final_logits_bias', 'lm_head.weight']
- This IS expected if you are initializing BlenderbotModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BlenderbotModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using framework PyTorch: 1.10.0+cu102
Overriding 1 configuration item(s)
- use_cache -> False
...\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:219: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
...\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:225: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
...\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:256: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
...\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:845: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
Validating ONNX model...
-[β] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[β] (2, 8, 1280) matches (2, 8, 1280)
-[β] all values close (atol: 1e-05)
All good, model saved at: onnx/model.onnx
```
**The warning "TracerWarning: Converting a tensor to a Python boolean..." is, I think, the classic control flow problem**. I have tried to export blenderbot with torch.jit.script(model). However, I get:
```
return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
~ <--- HERE
```
**I understand that TorchScript doesn't support the `any` function. Therefore, there is no way to export blenderbot as a script and overcome the control flow problems**.
Anyway, I wrote the following ONNX program in C++:
```
#include <iostream>
#include <onnxruntime_cxx_api.h>
using namespace std;
void max(std::vector<float>& input, int64_t size_input_ids, int64_t size_embedding, std::vector<int>& output) {
double bigger = 0.0;
int index = 0;
int k = 0;
output.clear();
for (size_t i = 0; i < size_input_ids; i++) {//128
bigger = 0.0;
index = 0;
for (size_t j = 0; j < size_embedding; j++) {//1280 or 8008
if (input[k] > bigger) {
bigger = input[k];
index = j;
}
k++;
}
output.push_back(index);
}
}
int main()
{
Ort::Env env;
Ort::RunOptions runOptions;
Ort::Session session(nullptr);
auto modelPath = L"D:\\Nexus\\NexusV_1_0_0\\x64\\Release\\Conversational\\Model\\model.onnx";
// create session
session = Ort::Session(env, modelPath, Ort::SessionOptions{ nullptr });
constexpr int64_t size_input = 128;
constexpr int64_t size_output = 128;
constexpr int64_t output_01 = 1280;//8008
constexpr int64_t numOutputElements = size_output * output_01;
constexpr int64_t output_02 = 1280;
constexpr int64_t numOutputElements_02 = size_output * output_02;
// define shape
const array<int64_t, 2> inputShape = { 1, size_input};
const array<int64_t, 3> outputShape = { 1, size_output, output_01 };
const array<int64_t, 3> outputShape_02 = { 1, size_output, output_02 };
std::vector<int64_t> input_ids(size_input);
std::vector<int64_t> attention_mask(size_input);
std::vector<int64_t> decoder_input_ids(size_input);
std::vector<int64_t> decoder_attention_mask(size_input);
std::vector<float> last_hidden_state(numOutputElements);
std::vector<float> last_output_02(numOutputElements_02);
for (size_t i = 0; i < input_ids.size(); i++) {
input_ids[i] = 0;
attention_mask[i] = 0;
decoder_input_ids[i] = 0;
decoder_attention_mask[i] = 0;
}
//IDs: I want to order a Pizza
input_ids[0] = 281;
input_ids[1] = 538;
input_ids[2] = 287;
input_ids[3] = 1831;
input_ids[4] = 265;
input_ids[5] = 440;
input_ids[6] = 4425;
input_ids[7] = 2;
attention_mask[0] = 1;
attention_mask[1] = 1;
attention_mask[2] = 1;
attention_mask[3] = 1;
attention_mask[4] = 1;
attention_mask[5] = 1;
attention_mask[6] = 1;
attention_mask[7] = 1;
for (size_t i = 0; i < last_hidden_state.size(); i++)
last_hidden_state[i] = 0;
for (size_t i = 0; i < last_output_02.size(); i++)
last_output_02[i] = 0;
// define Tensor
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
std::vector<Ort::Value> inputTensors;
inputTensors.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, input_ids.data(), input_ids.size(), inputShape.data(), inputShape.size()));
inputTensors.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, attention_mask.data(), attention_mask.size(), inputShape.data(), inputShape.size()));
inputTensors.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, decoder_input_ids.data(), decoder_input_ids.size(), inputShape.data(), inputShape.size()));
inputTensors.push_back(Ort::Value::CreateTensor<int64_t>(memory_info, decoder_attention_mask.data(), decoder_attention_mask.size(), inputShape.data(), inputShape.size()));
std::vector<Ort::Value> outputTensors;
outputTensors.push_back(Ort::Value::CreateTensor<float>(memory_info, last_hidden_state.data(), last_hidden_state.size(), outputShape.data(), outputShape.size()));
outputTensors.push_back(Ort::Value::CreateTensor<float>(memory_info, last_output_02.data(), last_output_02.size(), outputShape_02.data(), outputShape_02.size()));
// define names
Ort::AllocatorWithDefaultOptions allocator;
std::vector < const char*> inputNames;
inputNames.push_back(session.GetInputName(0, allocator));
inputNames.push_back(session.GetInputName(1, allocator));
inputNames.push_back(session.GetInputName(2, allocator));
inputNames.push_back(session.GetInputName(3, allocator));
std::vector < const char*> outputNames;
outputNames.push_back(session.GetOutputName(0, allocator));
outputNames.push_back(session.GetOutputName(1, allocator));
// run inference
try {
session.Run(Ort::RunOptions{ nullptr }, inputNames.data(), inputTensors.data(), 4, outputNames.data(), outputTensors.data(), 2);
}
catch (Ort::Exception& e) {
cout << e.what() << endl;
return 1;
}
std::vector<int> output_ids;
max(last_hidden_state, size_input, output_01, output_ids);
for (size_t i = 0; i < output_ids.size(); i++) {
cout << output_ids[i] << ", ";
}
cout << endl;
return 0;
}
```
I get the output:
`652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652, 652,`
**I think the reason the ids are all the same is related to the control-flow problem** (or maybe there is something wrong in my program).
I understand that blenderbot is supported: https://huggingface.co/docs/transformers/main/en/serialization#torchscript
**Please, could you give me a suggestion for exporting blenderbot correctly or getting the correct output ids?**
Note: I also used this tutorial (https://www.kaggle.com/code/danieliusv/blenderbot1-0-test-and-export/notebook), but I get the same results.
**Environment**
Windows 10
PyCharm 2021.3 (Community Edition)
Visual Studio 2019
| 04-15-2022 22:18:16 | 04-15-2022 22:18:16 | cc @lewtun, maybe also @michaelbenayoun and @mfuntowicz <|||||>> Hi everyone, my issue is regarding blenderbot: facebook/blenderbot-400M-distill. I did the exportation with:
>
> `python -m transformers.onnx --model=facebook/blenderbot-400M-distill onnx/`
Hi @Zapotecatl , could you try to add the right feature to your export command? I see that `facebook/blenderbot-400M-distill` is for `ConditionalGeneration` in the [model config file](https://huggingface.co/facebook/blenderbot-400M-distill/blob/main/config.json#L8) but you use the default feature while exporting.
Here are all available features for Blenderbot ONNX conversion: `default`, `default-with-past`, `causal-lm`, `causal-lm-with-past`, `seq2seq-lm` and `seq2seq-lm-with-past`.
It seems that the output of `BlenderbotForConditionalGeneration` is `Seq2SeqLMOutput` as you can see on [the source code](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/blenderbot/modeling_blenderbot.py#L1342)
Something like this command could help:
```bash
python -m transformers.onnx --model=facebook/blenderbot-400M-distill --feature=seq2seq-lm onnx/
```<|||||>
Hi @ChainYo, I appreciate your help. I executed the command. However, there is an issue regarding the "Outputs values". The output is this:
```
python -m transformers.onnx --model=facebook/blenderbot-400M-distill --feature=seq2seq-lm onnx/
Using framework PyTorch: 1.10.0+cu102
Overriding 1 configuration item(s)
- use_cache -> False
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:219: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:225: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:256: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:845: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'logits'})
- Validating ONNX Model output "logits":
-[✓] (2, 8, 8008) matches (2, 8, 8008)
-[x] values not close enough (atol: 1e-05)
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\__main__.py", line 99, in <module>
main()
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\__main__.py", line 92, in main
validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\convert.py", line 416, in validate_model_outputs
"Outputs values doesn't match between reference model and ONNX exported model: "
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 2.276897430419922e-05
```
I get the same results with: seq2seq-lm-with-past, default-with-past, causal-lm and causal-lm-with-past. Of course, I get the original result with the option default.
<|||||>By default `atol` is set to 1e-05, you should try to set it to 1e-04 in your case.
Here is the new command:
```bash
$ python -m transformers.onnx --model=facebook/blenderbot-400M-distill --feature=seq2seq-lm --atol=1e-04 onnx/
```
<|||||>Yes, thanks, now the model is exported correctly (I changed the value of my variable constexpr int64_t output_01 = 8008).
However, the issue with the repeated output ids persists. Let me show you:
```
python -m transformers.onnx --model=facebook/blenderbot-400M-distill --feature=seq2seq-lm --atol=1e-04 onnx/
Using framework PyTorch: 1.10.0+cu102
Overriding 1 configuration item(s)
- use_cache -> False
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:219: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:225: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:256: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:845: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'logits'})
- Validating ONNX Model output "logits":
-[✓] (2, 8, 8008) matches (2, 8, 8008)
-[✓] all values close (atol: 0.0001)
All good, model saved at: onnx/model.onnx
```
Now, I get the following output in my C++ program:
`2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 21, 21, 21, 2, 2, 2, 2, 2, 2, 2, 2, 21, 21, 21, 21, 2, 2, 2, 2, 2, 21, 21, 21, 2, 2, 2, 2, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 2, 2, 2, 21, 21, 21, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2`
The output that I expect is something like this:
1, 281, 913, 4955, 8, 714, 287, 413, 1252, 361, 304, 398, 38, 281, 398, 6180, 3870, 19, 383, 1230, 19, 298, 4014, 340, 1043, 21, 2.
(Decode: I love pizza! What toppings do you like? I like vegetables, meats, and condiments.)
<|||||>I never used onnx model with C++, but I think you should check the way you load the model, how you tokenize inputs and what is your InferenceSession code.<|||||>Thanks for your suggestions @ChainYo.
I checked my C++ code carefully and didn't find any errors.
I wrote the equivalent program in Python. Please, let me show you:
```
import torch.onnx
import onnx
import onnxruntime as ort
from transformers import BlenderbotTokenizer
model = onnx.load("onnx/model.onnx")
onnx.checker.check_model(model)
#print(onnx.helper.printable_graph(model.graph))
print('Blender check')
tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
utterance = "I want to order a Pizza"
tokens_ids = tokenizer(utterance, return_tensors="pt")
input_ids = torch.cat((tokens_ids['input_ids'], torch.zeros(1, 120, dtype=torch.int64)), dim=1)
attention_mask = torch.cat((tokens_ids['attention_mask'], torch.zeros(1, 120, dtype=torch.int64)), dim=1)
decoder_input_ids = torch.zeros(1, 128, dtype=torch.int64)
decoder_attention_mask = torch.zeros(1, 128, dtype=torch.int64)
#print(input_ids)
#print(attention_mask)
ort_session = ort.InferenceSession('onnx/model.onnx')
args = {'input_ids': input_ids.cpu().detach().numpy(), 'attention_mask': attention_mask.cpu().detach().numpy(), 'decoder_input_ids': decoder_input_ids.cpu().detach().numpy(), 'decoder_attention_mask': decoder_attention_mask.cpu().detach().numpy()}
outputs = ort_session.run(None, args,)
#outputs: logits and 711; outputs[0]: logits; outputs[0][0]: output 128 X 8008
res = outputs[0][0].argmax(axis=1)
print(res)
# Decoding the model output
deco = tokenizer.decode(res)
print(deco)
print("End BlenderBot")
```
The output that I get is the same as in my C++ program:
```
Blender check
[ 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 21 21 21 2 2 2
2 2 2 2 2 21 21 21 21 2 2 2 2 2 21 21 21 2 2 2 2 21 21 21
21 21 21 21 21 21 21 21 21 21 21 21 21 21 21 2 2 2 21 21 21 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2]
</s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s>...</s></s></s></s></s></s></s></s>....</s></s></s></s></s>...</s></s></s></s>..................</s></s></s>...</s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s>
End BlenderBot
```
I really want to overcome this, any suggestion is welcome!
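For reference, here is a rough greedy-decoding sketch of how the exported seq2seq-lm model would normally be driven: one token at a time, feeding the growing `decoder_input_ids` back in, instead of taking the argmax of a single forward pass. The input/output names follow the export above; the 32-step limit and the use of `bos_token_id` as the decoder start token are assumptions, and the same loop structure would apply to the C++ program.
```python
import numpy as np
import onnxruntime as ort
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
session = ort.InferenceSession("onnx/model.onnx")

enc = tokenizer("I want to order a Pizza", return_tensors="np")
decoder_ids = [tokenizer.bos_token_id]  # assumed decoder start token

for _ in range(32):  # assumed maximum length
    dec = np.array([decoder_ids], dtype=np.int64)
    logits = session.run(
        ["logits"],
        {
            "input_ids": enc["input_ids"].astype(np.int64),
            "attention_mask": enc["attention_mask"].astype(np.int64),
            "decoder_input_ids": dec,
            "decoder_attention_mask": np.ones_like(dec),
        },
    )[0]
    next_token = int(logits[0, -1].argmax())  # only the last position predicts the next token
    decoder_ids.append(next_token)
    if next_token == tokenizer.eos_token_id:
        break

print(tokenizer.decode(decoder_ids, skip_special_tokens=True))
```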
<|||||>Do you get the same result with another Blenderbot model if you have one to test?
Your python code looks good, I must try it to see how it goes<|||||>Hi, I tried to export blenderbot-3B and blenderbot-1B-distill whit the command:
`python -m transformers.onnx --model=facebook/blenderbot-1B-distill --feature=seq2seq-lm --atol=1e-04 onnx/`
However, I can't export those models. I get the following:
```
feature=seq2seq-lm --atol=1e-04 onnx/
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using framework PyTorch: 1.10.0+cu102
Overriding 1 configuration item(s)
- use_cache -> False
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:219: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:225: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:256: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\models\blenderbot\modeling_blenderbot.py:845: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
Validating ONNX model...
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\__main__.py", line 99, in <module>
main()
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\__main__.py", line 92, in main
validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\transformers\onnx\convert.py", line 350, in validate_model_outputs
session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"])
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\Jorge\PycharmProjects\inteBERT\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 381, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Deserialize tensor model.decoder.layers.18.self_attn.v_proj.bias failed.open file model.decoder.layers.18.self_attn.v_proj.bias fail, errcode = 2 - El sistema no puede encontrar el archivo especificado.
```
<|||||>Excuse me @lewtun , I know you did the PR about the Blenderbot exportation, maybe you have some advice? I really appreciate any suggestion.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I also had the same issue with blenderbot ONNX; the problem was actually in the control flow, exactly at this place: `if input_shape[-1] > 1`. You have to export it manually by passing seq_len = 2; I used [1, 1] (two BOS tokens). When there is only one input token for the decoder it expands the attention mask one way, and with more than one token it does so in a second way. I hope it helps |
transformers | 16,803 | closed | ValueError: too many values to unpack (expected 2) | hi
I tried to run the bi-LSTM code for summarization, but I got this error during prediction.
I checked everything but couldn't figure out where the problem is. Here is the code:
--------------------------------------------------------------------------------------
# Encode the input sequence to get the feature vector
encoder_model = Model(inputs=encoder_inputs,outputs=[encoder_inputs] + encoder_states1)
# Decoder setup
# Below tensors will hold the states of the previous time step
dec_h_state_f = tf.keras.layers.Input(shape=(latent_dim))
dec_h_state_r = tf.keras.layers.Input(shape=(latent_dim))
dec_c_state_f = tf.keras.layers.Input(shape=(latent_dim))
dec_c_state_r = tf.keras.layers.Input(shape=(latent_dim))
decoder_hidden_state_input = Input(shape=(max_text_len,latent_dim * 2))
# Get the embeddings of the decoder sequence
dec_emb2= dec_emb_layer(decoder_inputs)
# To predict the next word in the sequence, set the initial states to the states from the previous time step
decoder_outputs2, decoder_fwd_state_h2, decoder_fwd_state_c2, decoder_back_state_h2, decoder_back_state_c2 = decoder_lstm(
dec_emb2, initial_state=[dec_h_state_f, dec_h_state_r, dec_c_state_f, dec_c_state_r])
#attention inference
attn_out_inf, attn_states_inf = attn_layer([decoder_hidden_state_input, decoder_outputs2])
decoder_inf_concat = Concatenate(axis=-1, name='concat')([decoder_outputs2, attn_out_inf])
# A dense softmax layer to generate prob dist. over the target vocabulary
decoder_outputs2 = decoder_dense(decoder_inf_concat)
# Final decoder model
decoder_model = Model(
[decoder_inputs] + [decoder_hidden_state_input]+ [dec_h_state_f, dec_h_state_r, dec_c_state_f, dec_c_state_r],
[decoder_outputs2] + [decoder_fwd_state_h2, decoder_fwd_state_c2, decoder_back_state_h2, decoder_back_state_c2])
---------------------------------------------------------------------
def decode_sequence(input_seq):
# Encode the input as state vectors.
e_out, state_values = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1,1))
# Populate the first word of target sequence with the start word.
target_seq[0, 0] = target_word_index['sostok']
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, decoder_states = decoder_model.predict([target_seq] + [e_out] + state_values)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_token = reverse_target_word_index[sampled_token_index+1]
if(sampled_token!='eostok'):
decoded_sentence += ' '+sampled_token
# Exit condition: either hit max length or find stop word.
if (sampled_token == 'eostok' or len(decoded_sentence.split()) >= (max_summary_len-1)):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1,1))
target_seq[0, 0] = sampled_token_index
# Update internal states
state_values = decoder_states
return decoded_sentence
------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-69-0e69bf8de83c>](https://localhost:8080/#) in <module>()
2 print("Review:",seq2text(x_tr[i]))
3 print("Original summary:",seq2summary(y_tr[i]))
----> 4 print("Predicted summary:",decode_sequence(x_tr[i].reshape(1,max_text_len)))
5 print("\n")
[<ipython-input-67-d9c528c43f6a>](https://localhost:8080/#) in decode_sequence(input_seq)
1 def decode_sequence(input_seq):
2 # Encode the input as state vectors.
----> 3 e_out, state_values = encoder_model.predict(input_seq)
4
5 # Generate empty target sequence of length 1.
**ValueError: too many values to unpack (expected 2)**
| 04-15-2022 20:20:01 | 04-15-2022 20:20:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,802 | closed | Add support for moving Trainer out of GPU memory | # π Feature request
Hello! I couldn't find any way of doing this, so I'm opening this issue. I'm sorry if there is already support for this and I couldn't find it.
It would be useful to have the ability to completely remove a Trainer and all associated objects (model, optimizer, etc..) out of the GPU memory without having to destroy the Trainer object. This should also allow the reverse operation, that is, once the Trainer is on CPU, it should be possible to move it back on GPU -- including all components and accounting for multi-GPU parallelism.
Essentially, it would be nice to have a `trainer.move_to_device()` function that took care of everything.
## Motivation
In my particular use case I have 2 relatively small GPUs and I would like to train multiple models in the same script in an interleaved fashion. Something like:
``` python
# Ugly semi-pseudocode
while not done:
for trainer in trainer_list:
trainer.move_to_device(cuda)
for epoch in range(epochs):
trainer.train()
trainer.move_to_device(cpu)
```
I suspect it could also be useful for other scenarios where VRAM is limited and someone could want to train multiple models.
ps: I'm aware of `_move_model_to_device()`, but that seems to be limited to the model weights.
## Your contribution
I don't know much about how the Trainer handles device placement, but I'd be happy to test any proposed solution.
| 04-15-2022 19:14:50 | 04-15-2022 19:14:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for not seeing this earlier! Pinging @sgugger for advice.<|||||>This might be useful indeed, but is not something I will have time to work on any time soon. Always happy to review a PR however!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,801 | closed | [SegFormer, BEiT] Clean up tests | # What does this PR do?
BEiT and SegFormer still had some code that I had to remove, as semantic segmentation models are now created in the `_prepare_for_class` method in `test_modeling_common.py` (which was added in #15991). | 04-15-2022 13:42:54 | 04-15-2022 13:42:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,800 | closed | πDiverse Beam Search BUG | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
2022-04-15 20:47:39.551372: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2022-04-15 20:47:39.605017: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2494135000 Hz
2022-04-15 20:47:39.605529: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55f0c01a5f30 executing computations on platform Host. Devices:
2022-04-15 20:47:39.605558: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-centos-8.2.2.2004-Core
- Python version: 3.7.6
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.2.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <no>
- Using distributed or parallel set-up in script?: <no>
### Who can help
@patrickvonplaten @narsil
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids,num_return_sequences=5, num_beam_groups=5, diversity_penalty=1.0, num_beams=5)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
```
```
/home/chengxin/anaconda3/envs/simcls/lib/python3.7/site-packages/transformers/generation_beam_search.py:197: UserWarning: Passing `max_length` to BeamSearchScorer is deprecated and has no effect. `max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`, or `group_beam_search(...)`.
"Passing `max_length` to BeamSearchScorer is deprecated and has no effect. "
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_2692803/1384243548.py in <module>
5 input_ids = tokenizer.encode(input, return_tensors="pt")
6 # gen_kwargs = dict(num_return_sequences=16, num_beam_groups=5, diversity_penalty=1.0, num_beams=16)
----> 7 outputs = model.generate(input_ids,num_return_sequences=5, num_beam_groups=5, diversity_penalty=1.0, num_beams=5)
8 decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
9 print(decoded)
~/anaconda3/envs/simcls/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs)
47 def decorate_no_grad(*args, **kwargs):
48 with self:
---> 49 return func(*args, **kwargs)
50 return decorate_no_grad
51
~/anaconda3/envs/simcls/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs)
1432 return_dict_in_generate=return_dict_in_generate,
1433 synced_gpus=synced_gpus,
-> 1434 **model_kwargs,
1435 )
1436
~/anaconda3/envs/simcls/lib/python3.7/site-packages/transformers/generation_utils.py in group_beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2877
2878 next_token_scores_processed = logits_processor(
-> 2879 group_input_ids, next_token_scores, current_tokens=current_tokens, beam_group_idx=beam_group_idx
2880 )
2881 next_token_scores = next_token_scores_processed + beam_scores[batch_group_indices].unsqueeze(-1)
~/anaconda3/envs/simcls/lib/python3.7/site-packages/transformers/generation_logits_process.py in __call__(self, input_ids, scores, **kwargs)
88 f"{processor.__class__} are passed to the logits processor."
89 )
---> 90 scores = processor(input_ids, scores, **kwargs)
91 else:
92 scores = processor(input_ids, scores)
~/anaconda3/envs/simcls/lib/python3.7/site-packages/transformers/generation_logits_process.py in __call__(self, input_ids, scores, current_tokens, beam_group_idx)
586 ]
587 token_frequency = torch.bincount(previous_group_tokens, minlength=vocab_size).to(scores.device)
--> 588 scores[batch_idx * group_size : (batch_idx + 1) * group_size] -= self._diversity_penalty * token_frequency
589
590 return scores
RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Long
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
<!-- A clear and concise description of what you would expect to happen. -->
| 04-15-2022 12:53:02 | 04-15-2022 12:53:02 | Hey @Hannibal046,
I cannot reproduce the bug, but I also don't think that we are still supporting PyTorch 1.2.0. Could you upgrade to PyTorch library to a newer version and try again?<|||||>Hi @patrickvonplaten ,
Thanks your time. I upgrade my PyTorch version, and the bug is gone. |
transformers | 16,799 | closed | [ViT, BEiT, DeiT, DPT] Improve code | # What does this PR do?
This PR cleans up some code of ViT, BEiT, DeiT and DPT.
Most importantly, it corrects the tuple outputs, in case no pooler was added (as shown by #16760).
Next to that, it cleans up the tests, by
- removing `to_2tuple`, and instead leveraging an `expected_seq_len` attribute of the `ModelTester`
- removing `chunk_length` and `is_encoder_decoder` statements, which don't apply for these encoder-only models. | 04-15-2022 09:47:02 | 04-15-2022 09:47:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,798 | closed | DataCollatorForLanguageModeling using the wrong symbol for masking | ## Environment info
- `transformers` version: 4.17.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- Tokenizers: @SaulLu
- Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import torch
import tempfile
import os
from transformers import BertTokenizer, DataCollatorForLanguageModeling
AMINO_ACIDS_WITH_ALL_ADDITIONAL = "RHKDESTNQCUGPAVILMFYW$.?|*"
IDX_AA = {aa: i for i, aa in enumerate(AMINO_ACIDS_WITH_ALL_ADDITIONAL)}
torch.manual_seed(146)
class myDS(torch.utils.data.Dataset):
def __init__(self, seqs, tok):
self.seqs = seqs
self.tok = tok
def __getitem__(self,i: int):
return self.tok.encode(self.seqs[i])
def __len__(self):
return len(self.seqs)
data = ['C A S S L A Q G L N E Q F']
with tempfile.TemporaryDirectory() as tempdir:
path = os.path.join(tempdir, "vocab.txt")
with open(path, "w") as f:
for v in IDX_AA:
f.write(v + "\n")
tok = BertTokenizer(
path,
do_lower_case=False,
do_basic_tokenize=True,
tokenize_chinese_chars=False,
pad_token="$",
mask_token=".",
unk_token="?",
sep_token="|",
cls_token="*",
model_max_len=16,
padding_side="right",
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.15)
train_dataset = myDS(data, tok)
data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, collate_fn=data_collator)
for i in data_loader:
for j in tok.batch_decode(i['input_ids']):
print(j)
```
## Expected behavior
Should mask tokens with "." but sometimes uses other symbols. In the above code, the output is "* C A S S L A Q * L N E Q F |" but should be something like "* C A S S L A Q . L N E Q F |". This issue occurs rarely and randomly (146 is the first seed where it occurs) since the masking process is random.
| 04-15-2022 09:14:29 | 04-15-2022 09:14:29 | Hi @KangarooChief ,
It's indeed the expected behavior :slightly_smiling_face: The data collator for language modeling does not only replace with a mask. You can see the logic of the `DataCollatorForLanguageModeling` here:
https://github.com/huggingface/transformers/blob/9a24b97b7f304fa1ceaaeba031241293921b69d3/src/transformers/data/data_collator.py#L768-L778
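For reference, a condensed sketch of that replacement logic (paraphrased from the linked lines, so treat it as an approximation rather than a verbatim copy):
```python
import torch

def torch_mask_tokens_sketch(inputs, tokenizer, mlm_probability=0.15):
    labels = inputs.clone()
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on the masked positions

    # 80% of the masked positions become the mask token ("." with the vocabulary above)
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # half of the remaining 20% (10% overall) become a random token, which is where "*" can appear
    random_idx = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    inputs[random_idx] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_idx]

    # the final 10% keep the original token unchanged
    return inputs, labels
```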
As you can see, 10% of the replacements made are with a random word (including `*`). :slightly_smiling_face: <|||||>Thanks |
transformers | 16,797 | closed | [Image classification script] Update README.md | # What does this PR do?
Some small README improvements for the image classification script. | 04-15-2022 08:22:06 | 04-15-2022 08:22:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,796 | closed | Small Bug: text2text_generation and text_generation result in duplicate results | - `transformers` version: 4.17.0
- Platform: Darwin-20.1.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Library:
- Text generation: @patrickvonplaten @narsil
- Pipelines: @Narsil
## Information
I would like to report a small bug inside the text2text_generation.py and the text_generation.py. When I am using 'return_tensors=True' and 'do_sample=True' inside the __call__, it will result in duplicated tensors.
The bug locates at the following lines:
Line 229: transformers.pipelines.text_generation.py
Line 172: transformers.pipelines.text2text_generation.py
For example at Line 229
```python
generated_sequence = generated_sequence.numpy().tolist()
records = []
for sequence in generated_sequence:
if return_type == ReturnType.TENSORS:
record = {"generated_token_ids": generated_sequence}
# record = {"generated_token_ids": sequence} # should be this?
elif return_type in {ReturnType.NEW_TEXT, ReturnType.FULL_TEXT}:
...
```
The generated_sequence has a length equal to num_return_sequences. For each sequence, the loop should either add that single sequence's tensor to the records or decode it. However, the whole generated_sequence is added to the record on every iteration. This also happens in Line 172: transformers.pipelines.text2text_generation.py
```python
records = []
for output_ids in model_outputs["output_ids"][0]:
if return_type == ReturnType.TENSORS:
record = {f"{self.return_name}_token_ids": model_outputs}
# record = {f"{self.return_name}_token_ids": output_ids} # should be this?
elif return_type == ReturnType.TEXT:
```
Model I am using (GPT2):
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import pipeline
text_generator = pipeline("text-generation")
result = text_generator(
"As far as I am concerned, I will",
max_length=10,
do_sample=True,
num_return_sequences=3,
return_tensors=True
)
```
```
[{'generated_token_ids':
[[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 691]]},
{'generated_token_ids':
[[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 691]]},
{'generated_token_ids':
[[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307],
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 691]]}]
```
## Expected behavior
```
[{'generated_token_ids':
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307]},
{'generated_token_ids':
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 307]},
{'generated_token_ids':
[1722, 1290, 355, 314, 716, 5213, 11, 314, 481, 691]}]
```
This is the first time I am contributing to the great transformers, very excited. Should I make a pull request?
Regrads
Xiangyang Ni
| 04-15-2022 07:03:20 | 04-15-2022 07:03:20 | Hey @FrankDataAnalystPython,
Great job discovering the bug :slightly_smiling_face: !
It would be great if you could open a PR to fix it<|||||>@FrankDataAnalystPython
Thanks for the report, opened a PR for the fix !
https://github.com/huggingface/transformers/pull/16828<|||||>Dear @Narsil:
I saw you already finished the PR. Should I close the current issue now?
Many Thanks!
Regards
Xiangyang Ni |
transformers | 16,795 | closed | Problem at using CLIPFeatureExtractor from transformers.models.clip.feature_extraction_clip | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Models:
CLIPFeatureExtractor (which is used to provide feature to CLIPModel)
## Information
When I use CLIPFeatureExtractor() (https://huggingface.co/transformers/v4.6.0/_modules/transformers/models/clip/feature_extraction_clip.html#CLIPFeatureExtractor), in the __call__ function the images pass through resize, center_crop, and normalize and are then handed to the `BatchFeature` of `feature_extraction_utils`, and that is where the problem happens.
The problem occurs inside the `convert_to_tensors` function: in the for loop starting at line 166, the code tries to convert the value to a tensor, but sometimes the value is a list or some other type, so it falls into the except statement. I therefore had to write an extra snippet for the case where the value is a list.
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
clip_feature_extractor = CLIPFeatureExtractor()
# images is a list of images as numpy arrays or other types such as PIL images or tensors
clip_feature_extractor(images=images, return_tensors='pt')
```
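A possible workaround sketch (an assumption on my side, not a documented fix) is to make the list homogeneous before the call, for example by converting every element to a PIL image:
```python
from PIL import Image
import numpy as np

pil_images = [Image.fromarray(np.asarray(img).astype(np.uint8)) for img in images]
clip_feature_extractor(images=pil_images, return_tensors='pt')
```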
> images are list of images in numpy array or some other types such as PIL or tensor
clip_feature_extractor(images=images, return_tensors='pt')
If you pass a list of images, make sure they are all of the same type.<|||||>Yes I did passed a list of images whose images are all in same type, but the problem happens<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Can you elaborate, or provide a minimal reproducer?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm also having this issue. It only works if I pass in PIL images. It does not work on tensors _or_ lists of tensors. |
transformers | 16,794 | closed | [Request] Add Wav2Vec support for onnx conversion | Hi, I'd like to request support for Wav2Vec on the transformers-onnx library.
Thank you in advance. | 04-15-2022 00:33:17 | 04-15-2022 00:33:17 | I believe @lewtun is working on that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is it still to do ?
I would be interested too or is it something for the optimum library now ? |
transformers | 16,793 | closed | Quick question about efficiently initializing model parameters | Hi,
First of all, thanks for your great work on building such an amazing pytorch package!
I am trying to modify the [modeling_bart.py](https://github.com/huggingface/transformers/blob/v4.11.3/src/transformers/models/bart/modeling_bart.py) file, specifically [BartDecoderLayer](https://github.com/huggingface/transformers/blob/v4.11.3/src/transformers/models/bart/modeling_bart.py#L331). I am planning to add a new cross-attention layer which would attend to the graph nodes. The following snippet is adapted from the original codes.
```
self.graph_attn = BartAttention(
self.embed_dim,
config.decoder_attention_heads,
dropout=config.attention_dropout,
is_decoder=True,
)
```
My question is about how to **efficiently** initialize these parameters to be the same as the parameters of original BartDecoder Cross-attention layer. By "efficiently", I mean some simple and fast way for the initialization process. A brutal-force way I could easily think of right now is to download [pytorch_model.bin](https://huggingface.co/facebook/bart-large/blob/main/pytorch_model.bin) and then manually assign values by doing key-matching. | 04-14-2022 21:01:39 | 04-14-2022 21:01:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,792 | closed | Add `LongT5` model | # What does this PR do?
Fixes #16681
This PR adds `PyTorch` and `Flax` implementation of the `LongT5` model. (TensorFlow implementation is omitted for this PR as it requires another round of reviews. However, I'm willing to work on the TF side in another PR as well)
This PR adds `LongT5` model according to the original Google's paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916).
### PyTorch implementation
- [x] Local Attention
- [x] Transient-Global Attention
### Flax implementation
- [x] Local Attention
- [x] Transient-Global Attention
### t5x - HF equivalence
Model equivalence is investigated in my repo [here](https://github.com/stancld/longt5-eval).
- [x] Local Attention (looks promising right now)
- [x] Transient-Global Attention (it looks like there's a problem with the calculation of `side_position_bias`)
### Other features
- [ ] Compatibility with a standard T5 model checkpoints
Original checkpoints converted to the HF format can be temporarily found on the HF hub:
- [x] [**LongT5-Local-Base**](https://huggingface.co/Stancld/LongT5-Local-Base) (250 million parameters) - (PT, Flax)
- [x] [**LongT5-Local-Large**](https://huggingface.co/Stancld/LongT5-Local-Large) (780 million parameters) - (PT, Flax)
- [x] [**LongT5-TGlobal-Base**](https://huggingface.co/Stancld/LongT5-TGlobal-Base) (250 million parameters) - (PT, Flax)
- [x] [**LongT5-TGlobal-Large**](https://huggingface.co/Stancld/LongT5-TGlobal-Large) (780 million parameters) - (PT, Flax)
- [ ] **LongT5-TGlobal-XL** (3 billion parameters) *(pushing to the hub is frozen)*
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests for the **Local** model?
- [x] Did you write any new necessary tests for the **TGlobal** model?
- [x] Did you update the results of slow/tooslow tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @patil-suraj
*More information will be added* | 04-14-2022 20:28:40 | 04-14-2022 20:28:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @patrickvonplaten and @patil-suraj. I will need to go through the code a few times again to polish it. However, as this is the very first time for me to add the new model into `transformers`, I'd like to kindly ask you if you can provide me with preliminary feedback just to know if there's anything missing or so :]
As indicated in the PR description, there's a glitch regarding the calculation `side_position_bias`. I'll try to think about this part more.
**Q1**: Afaik, `LongT5` uses the same tokenizer as `T5`. Is it, therefore, right not to add any new tokenizer?
**Q2**: Is it okay to have a single model class both for a model with local attention and transient-global attention, or is it prefered to split this into separate classes?<|||||>Sorry for being a bit late here - answering tomorrow!<|||||>@stancld Do you have any plans for a LongT5 release? I'm really looking forward to being able to replace the LED model with LongT5. Thank you so much for your effort. <|||||>@stancld I tried to use this PR for seq2seq training, I got this bug, can you check this ?
My code:
```python
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("Stancld/LongT5-Local-Base")
rouge = load_metric("rouge")
bleu = load_metric("bleu")
train_dataset = Seq2SeqDataset("../data/train.pkl", tokenizer)
val_dataset = Seq2SeqDataset("../data/val.pkl", tokenizer)
# instantiate trainer
trainer = Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
```
This is the error that I got:
```
Traceback (most recent call last):
File "seq2seq_train.py", line 105, in <module>
trainer.train()
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/trainer.py", line 1428, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/trainer.py", line 2019, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/trainer.py", line 2051, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 797, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/models/longt5/modeling_longt5.py", line 2285, in forward
encoder_outputs = self.encoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/models/longt5/modeling_longt5.py", line 1636, in forward
extended_attention_mask = _get_local_attention_mask(attention_mask, self.block_len, inputs_embeds.device)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/models/longt5/modeling_longt5.py", line 204, in _get_local_attention_mask
local_attention_mask = _mask_local_attention_mask(local_attention_mask, block_len)
File "/opt/conda/lib/python3.8/site-packages/transformers-4.19.0.dev0-py3.8.egg/transformers/models/longt5/modeling_longt5.py", line 189, in _mask_local_attention_mask
return torch.logical_and(local_attention_mask, locality_mask)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```<|||||>@PhungVanDuy - Thanks for the pointer! I haven't tested my code on a GPU before. Should work fine with the new commit :]<|||||>Is this model supposed to be able to load any T5 checkpoint using `from_pretrained()`? If not from pre-trained, does this PR provide any other ways to do it?
Although I might not have fully understood how to use the model yet, I wanted to check out if it was supported,
but it seems that it is not able to load any of the attention weights:
`model = LongT5ForConditionalGeneration.from_pretrained("allenai/unifiedqa-t5-base")`
```
You are using a model of type t5 to instantiate a model of type longt5. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at allenai/unifiedqa-t5-base were not used when initializing LongT5ForConditionalGeneration: ['encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.SelfAttention.o.weight', 'encoder.block.11.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.SelfAttention.k.weight', 'encoder.block.10.layer.0.SelfAttention.o.weight', 'encoder.block.4.layer.0.SelfAttention.v.weight', 'encoder.block.8.layer.0.SelfAttention.k.weight',
...
```<|||||>> Is this model supposed to be able to load any T5 checkpoint using `from_pretrained()`? If not from pre-trained, does this PR provide any other ways to do it? Although I might not have fully understood how to use the model yet, I wanted to check out if it was supported, but it seems that it is not able to load any of the attention weights:
>
> `model = LongT5ForConditionalGeneration.from_pretrained("allenai/unifiedqa-t5-base")`
>
> ```
> You are using a model of type t5 to instantiate a model of type longt5. This is not supported for all configurations of models and can yield errors.
> Some weights of the model checkpoint at allenai/unifiedqa-t5-base were not used when initializing LongT5ForConditionalGeneration: ['encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.SelfAttention.o.weight', 'encoder.block.11.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.SelfAttention.k.weight', 'encoder.block.10.layer.0.SelfAttention.o.weight', 'encoder.block.4.layer.0.SelfAttention.v.weight', 'encoder.block.8.layer.0.SelfAttention.k.weight',
> ...
> ```
It's a good point to ensure compatibility with `T5` checkpoints. I have some ideas in my mind on how to make this possible, but maybe let's wait for some code review first. But thanks a lot for pointing this out! :]<|||||>@stancld Thank for you quick fix, I also found some same error with Large model, I also fixed it, but I guess still have few problem with model, when I tried to train a seq2seq model with same code base above, I got this logs.
```
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 0.0009765625 [55/1886]
0%| | 10/338508 [00:32<281:22:30, 2.99s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 0.000244140625
0%| | 11/338508 [00:35<279:30:47, 2.97s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 3.0517578125e-05
0%| | 12/338508 [00:38<287:14:53, 3.05s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 3.814697265625e-06
0%| | 13/338508 [00:41<286:40:39, 3.05s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 9.5367431640625e-07
0%| | 14/338508 [00:44<283:07:13, 3.01s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.1920928955078125e-07
0%| | 15/338508 [00:47<281:09:21, 2.99s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 5.960464477539063e-08
0%| | 16/338508 [00:50<279:44:32, 2.98s/it]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 3.725290298461914e-09
0%| | 17/338508 [00:53<280:01:09, 2.98s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4.656612873077393e-10
0%| | 18/338508 [00:56<279:24:12, 2.97s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 5.820766091346741e-11
0%| | 19/338508 [00:59<281:12:31, 2.99s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 3.637978807091713e-12
{'loss': 4615.4496, 'learning_rate': 9.99940917201366e-05, 'epoch': 0.0}
0%| | 20/338508 [01:02<285:15:29, 3.03s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.8189894035458565e-12
0%| | 21/338508 [01:05<284:58:54, 3.03s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4.547473508864641e-13
0%| | 22/338508 [01:08<282:52:19, 3.01s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.1368683772161603e-13
0%| | 23/338508 [01:11<281:55:16, 3.00s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.4210854715202004e-14
0%| | 24/338508 [01:14<281:53:37, 3.00s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.7763568394002505e-15
0%| | 25/338508 [01:21<396:36:35, 4.22s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2.220446049250313e-16
0%| | 26/338508 [01:24<361:24:43, 3.84s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 5.551115123125783e-17
0%| | 27/338508 [01:27<336:14:37, 3.58s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 6.938893903907228e-18
0%| | 28/338508 [01:30<319:43:38, 3.40s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8.673617379884035e-19
0%| | 29/338508 [01:33<306:31:46, 3.26s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.0842021724855044e-19
0%| | 30/338508 [01:36<299:47:18, 3.19s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 5.421010862427522e-20
0%| | 31/338508 [01:39<293:40:56, 3.12s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2.710505431213761e-20
0%| | 32/338508 [01:42<288:47:18, 3.07s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.3552527156068805e-20
0%| | 33/338508 [01:45<288:19:34, 3.07s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1.6940658945086007e-21
0%| | 34/338508 [01:48<288:35:55, 3.07s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2.117582368135751e-22
0%| | 35/338508 [01:51<291:35:38, 3.10s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 5.293955920339377e-23
0%| | 36/338508 [01:54<286:56:54, 3.05s/it]
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 6.617444900424222e-24
0%| | 37/338508 [01:57<289:17:22, 3.08s/it```<|||||>> Amazing work! Looks like you made great progress.
>
> Left some statements:
>
> * let's remove cache logic from encoder-only layers
> * think we should remove the fp16 hacks in PyTorch
> * think we should also remove the "not-well-working" parallelization logic (device map)
>
> Happy to dive deeper into the model next week, if you're stuck with something @stancld (your comment above) - feel free to also ping me on Slack :-)
Hello @patrickvonplaten, thanks a lot for your thorough review!!
Completely with dropping caching logic from the new attention modules as, I believe, there's no use case to use these modules in decoders.<|||||>@stancld I just tested this PR with your latest commit (I am trying to train a code generation model), for some reason I don't know why the training process working well with fp32 but for fp16 with apex backend it induced the problem that I mentioned above at https://github.com/huggingface/transformers/pull/16792#issuecomment-1105785095. Can you check this, please? <|||||>> @stancld I just tested this PR with your latest commit (I am trying to train a code generation model), for some reason I don't know why the training process working well with fp32 but for fp16 with apex backend it induced the problem that I mentioned above at [#16792 (comment)](https://github.com/huggingface/transformers/pull/16792#issuecomment-1105785095). Can you check this, please?
Ooh, thanks for letting me know. I've reverted it then :]<|||||>> > @stancld I just tested this PR with your latest commit (I am trying to train a code generation model), for some reason I don't know why the training process working well with fp32 but for fp16 with apex backend it induced the problem that I mentioned above at [#16792 (comment)](https://github.com/huggingface/transformers/pull/16792#issuecomment-1105785095). Can you check this, please?
>
> Ooh, thanks for letting me know. I've reverted it then :]
Oh I think it's seem have problem at some where, even with previous commit it still got that problem. Can you try to fine-tune a model with fp16? <|||||>> > > @stancld I just tested this PR with your latest commit (I am trying to train a code generation model), for some reason I don't know why the training process working well with fp32 but for fp16 with apex backend it induced the problem that I mentioned above at [#16792 (comment)](https://github.com/huggingface/transformers/pull/16792#issuecomment-1105785095). Can you check this, please?
> >
> >
> > Ooh, thanks for letting me know. I've reverted it then :]
>
> Oh I think it's seem have problem at some where, even with previous commit it still got that problem. Can you try to fine-tune a model with fp16?
We're definitely planning to do some fine-tuning before merging this PR, but there are some other tasks to finish yet:]<|||||>I did some fine tuning over the weekend. Somehow, the fine-tuned model just keeps repeating the same token<|||||>@stancld , let me know if you need help with anything :-)<|||||>Hello, thanks for adding these new models. I was wondering about the model equivalence for some of the models uploaded to the model hub. I have tried finetuning the TGlobal-base, TGlobal-large, and local-base, and local-large and the only one that seems to give results i would expect during the start of training is TGlobal-base. The other ones all seem to start with losses that would seem more consistent with random initialization. I have seqlens of 4096 and 512 and have tried it with BF16 and float32 using deepspeed zero2 with the sameish results. I know this pull request is a WIP but thought this information might be useful in some way and if there's any finetuning tests that you would like me to try I have access to pretty beefy hardware (A100s) and am happy to help as this type of model is of great interest to me. Thanks for all your hard work!<|||||>@patil-suraj could you take a look here?<|||||>Hello @patil-suraj, have you had an opportunity to have a look at that bug with TGlobal attention? I've tried to investigate that issue for a while the last weekend, however, haven't made any progress :[ <|||||>Hey @stancld ! Yes, I've been digging into it since past week, and I know where the issue is coming from in `TGlobal` attention. I have a hackish fix which I'm verifying now. Also found and fixed couple of small bugs in the local attention layer. Will post about it soon once the fix is ready :) <|||||>Hey @stancld, @patrickvonplaten ! Finally, was able to find and fix the subtle bugs. Here the changes I made.
#### In both `local` and `TGlobal` attention
- set masked value to -1e10 to match the encoder output.
The `logits` will still match even with `-1e4`, but the `encoder_outputs` won't. Since the encoder can handle really large text, I think it's important that the `encoder_outputs` match if someone wants to use only the encoder. wdyt @patrickvonplaten ?
#### In Local attention
- Fixed computing `relative_position` to match the encoder outputs and original implementation.
- In the ported model, the `lm_head` weights didn't match. To verify the outputs we need to set the correct weights for `lm_head`
#### In TGlobal attention
- Fix `global_segment_ids`:
In the original flax codebase the `global_segment_ids` are always either 1 or 0.
The `global_segment_ids` are set to 0 for orphan tokens and padded tokens, and to 1 for the rest, instead of ranging from 0 to `_sequence_block_ids_max`.
- fix `global_block_ids`.
The `global_block_ids` are not computed correctly when `seq_length >= 16384` and `attention_mask` is passed with 0's in it. This change explicitly sets the padded position to -1 to match the original implementation.
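To make the `global_block_ids` change easier to picture, here is a heavily simplified sketch of the idea (illustration only; it is not the actual LongT5 implementation, which also handles orphan tokens and block aggregation):
```python
import torch

def make_global_block_ids(attention_mask: torch.Tensor, global_block_size: int) -> torch.Tensor:
    # attention_mask: (batch, seq_len) with 1 for real tokens and 0 for padding
    seq_len = attention_mask.shape[-1]
    block_ids = torch.arange(seq_len, device=attention_mask.device) // global_block_size
    block_ids = block_ids.expand_as(attention_mask)
    # padded positions are explicitly set to -1 so they never contribute to a global block
    return torch.where(attention_mask.bool(), block_ids, torch.full_like(block_ids, -1))
```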
If you want to verify:
- Follow the instructions in @stancld's [repo](https://github.com/stancld/longt5-eval).
- Set the `activation_dtype` in LongT5 gin configs to `float32` to compare the outputs with torch.
https://github.com/google/flaxformer/blob/826c45c9cc14cee0f906b7c4b6d041f08f8ece5d/flaxformer/t5x/configs/longt5/architectures/longt5_1_1_transient_global_flaxformer.gin#L37
https://github.com/google/flaxformer/blob/main/flaxformer/t5x/configs/longt5/architectures/longt5_1_1_flaxformer.gin#L37
```python
import gin
import jax.numpy as jnp
import numpy as np
import torch
import t5x
from transformers import AutoModelForSeq2SeqLM, FlaxAutoModelForSeq2SeqLM
# modify this path according to your setup
home = "/home/suraj_huggingface_co/longt5-debug/longt5-eval"
config_file = f"{home}/flaxformer/t5x/configs/longt5/models/longt5_1_1_transient_global_base.gin"
checkpoint_dir = f"{home}/google-checkpoints/LongT5-TGlobal-Base"
hf_model_path = "Stancld/LongT5-TGlobal-Base"
# Parse config file
with open(config_file) as bindings:
gin.parse_config(bindings)
gin.finalize()
# Get model
model_config_ref = gin.query_parameter("%MODEL")
model = model_config_ref.scoped_configurable_fn()
# Load checkpoint
t5x_checkpoint = t5x.checkpoints.load_t5x_checkpoint(checkpoint_dir)
pt_model = AutoModelForSeq2SeqLM.from_pretrained(hf_model_path)
# for local attention model set the correct weights for `lm_head`
# pt_model.lm_head.weight.data = torch.from_numpy(t5x_checkpoint["target"]["decoder"]["logits_dense"]["kernel"].T)
enc_seq_length = 2048
seq_length = 10
enc_shape = [2, enc_seq_length]
shape = [2, seq_length]
encoder_input_tokens = np.ones(enc_shape, dtype=np.int32)
decoder_input_tokens = np.ones(shape, dtype=np.int32)
decoder_target_tokens = np.ones(shape, dtype=np.int32)
attention_mask = np.ones(enc_shape, dtype=np.int32)
# # add some zeros as padding tokens
import random
mask_idx = random.randrange(10, enc_seq_length)
encoder_input_tokens[0, mask_idx:] = 0
attention_mask[0, mask_idx:] = 0
mask_idx = random.randrange(10, enc_seq_length)
encoder_input_tokens[1, mask_idx:] = 0
attention_mask[1, mask_idx:] = 0
decoder_input_tokens[:, seq_length-2:] = 0
decoder_target_tokens[:, seq_length-2:] = 0
# Run forward pass
print("~~~~~~~~~~ FlaxForrmer ~~~~~~~~~~~~")
t5x_logits, mod_vars = model.module.apply(
{"params": t5x_checkpoint["target"]},
encoder_input_tokens=encoder_input_tokens,
decoder_input_tokens=decoder_input_tokens,
decoder_target_tokens=decoder_target_tokens,
enable_dropout=False,
mutable='intermediates'
)
print("~~~~~~~~~ HF PyTorch ~~~~~~~~~~~~~")
with torch.no_grad():
pt_output = pt_model(
# encoder_outputs=(torch.from_numpy(encoder_output).float(),),
input_ids=torch.from_numpy(encoder_input_tokens).long(),
attention_mask=torch.from_numpy(attention_mask).long(),
decoder_input_ids=torch.from_numpy(decoder_target_tokens).long(),
output_hidden_states = True,
output_attentions = True,
)
# print(pt_output.shape)
print("~~~~~~~~~~~~~~~~~~~~~~")
# verify if `logits` match
np.allclose(pt_output.logits.numpy()[:, :-mask_idx, ...], t5x_logits[:, :-mask_idx, ...], atol=1e-3)
```
Let me know, if you try it find some issues with it. Now going to check the flax implementation. Once that's done will ping you for review @patrickvonplaten :) <|||||>> Hey @stancld, @patrickvonplaten ! Finally, was able to find and fix the subtle bugs. Here the changes I made.
> [...]
> Let me know, if you try it find some issues with it. Now going to check the flax implementation. Once that's done will ping you for review @patrickvonplaten :)
That looks good to me!<|||||>@stancld looks like the pipeline failures are **not** flaky and related to this PR - do you need help with solving them? They can be a bit tricky!<|||||>Will take a look at pipeline failures. |
transformers | 16,791 | closed | Change no_trainer scripts to force an output_dir if tracking is enabled | `TensorBoard` requires an output directory that isn't `None`, so this PR forces an `output_dir` if tracking is used in all the `no_trainer` scripts. | 04-14-2022 17:07:55 | 04-14-2022 17:07:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,790 | closed | Batching does not work with any XGLM model (simple script to reproduce issue included) | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-4.4.0-140-generic-x86_64-with-LinuxMint-18.1-serena
- Python version: 3.7.10
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.3.4 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten, @anton-l
Models:
- XGLM
## To reproduce
Here is a simple python script to reproduce the issue. When using any XGLM model when I start using batching the behaviour changes and I get gibberish results often. You can also reproduce this issue by padding an input text with a pad token and using attention mask 0 here because it is something to do with handling of attention mask.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import pipeline

# it fails with either of these models which are both xglm
MODEL = 'KoboldAI/fairseq-dense-125M'
MODEL = 'facebook/xglm-564M'

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

text = 'What is the only '

p = pipeline('text-generation', model=model, tokenizer=tokenizer)

print('expected')
print(p(text, do_sample=False, return_full_text=False, max_new_tokens=64))

print('without batching it works as expected and gives same response as above')
print(p([text] * 2 + ['some random text which is longer than the other'], do_sample=False, return_full_text=False, max_new_tokens=64))

print('with batching its broken and output is different even though the operation is deterministic')
print(p([text] * 2 + ['some random text which is longer than the other'], do_sample=False, return_full_text=False, max_new_tokens=64, batch_size=3))
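
# Workaround reported later in this thread (sketch): decoder-only models need left padding
# for batched generation, otherwise pad tokens end up between the prompt and the new tokens.
tokenizer.padding_side = 'left'
p_left_padded = pipeline('text-generation', model=model, tokenizer=tokenizer)
print('with left padding, batched output should match the unbatched output')
print(p_left_padded([text] * 2 + ['some random text which is longer than the other'], do_sample=False, return_full_text=False, max_new_tokens=64, batch_size=3))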
``` | 04-14-2022 16:24:03 | 04-14-2022 16:24:03 | Super keen to resolve this issue and am willing to contribute if given suggestions.<|||||>cc @patil-suraj <|||||>Hey @ri938 ! This is because the inputs are padded to right by default, but for batch generation the padding side should be left. If you set `tokenizer.padding_side = "left"` then it should work as expected.<|||||>Thanks this indeed did fix the issue with the facebook model but not the kobold model. I guess in this case its an issue with the kobold model and I shoud raise the issue with the provider of this instead.<|||||>Can you provide some Q&A demo for the xglm model,Thanks<|||||>@htthYjh XGLM is an auto-regressive model like (GPT-2, 3) so you could try to do QA by providing some QA prompts and then feeding a que and asking it to generate the answer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,789 | closed | CI: non-remote GH Actions now use a python venv | # What does this PR do?
This PR changes the cached python environment in non-remote GH actions to fresh virtual environments, that are rebuilt whenever `setup.py` gets updated. This ensures our test environment is closer to what a new user would see in the wild, and that we don't have package constraints due to other installed libraries (that come bundled in `ubuntu-latest`).
As a side effect, because the cached venv requires no installation of python packages on a cache hit, we also got slightly faster CI:
- \>3 mins faster in `Add new model like template tests`
- \>3 mins faster in `Model templates runner`
(total of ~7 mins per full CI run when there is a cache hit)
Examples of runs:
cache miss -> create new venv: https://github.com/huggingface/transformers/runs/6027339356?check_suite_focus=true
cache hit -> load cached venv: https://github.com/huggingface/transformers/runs/6028381114?check_suite_focus=true | 04-14-2022 16:20:48 | 04-14-2022 16:20:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(cc @ydshieh ) |
transformers | 16,788 | closed | Add semantic script no trainer, v2 | # What does this PR do?
This is a clean version of #16630. | 04-14-2022 16:18:43 | 04-14-2022 16:18:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,787 | closed | Special Tokens Not Working as Expected in Bert Tokenizer | ## Environment info
- `transformers` version: 4.18.0
- Platform: macOS-12.0.1-arm64-arm-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Tokenizers: @SaulLu
## Information
Using the BERT tokenizer and wanted to add my own special tokens, but am not getting the expected behavior
## To reproduce
```
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
n_added_tokens = tokenizer.add_tokens(["<CONCEPT>", "</CONCEPT>"], special_tokens=True)
print(n_added_tokens)
context = "Hi I am taking <CONCEPT> Advil </CONCEPT>"
context_tokens = tokenizer(context).input_ids
print(tokenizer.convert_ids_to_tokens(context_tokens))
context = "<CONCEPT>"
context_tokens = tokenizer(context).input_ids
print(tokenizer.convert_ids_to_tokens(context_tokens))
```
**Output:**
```
2
['[CLS]', 'hi', 'i', 'am', 'taking', '<', 'concept', '>', 'ad', '##vil', '<', '/', 'concept', '>', '[SEP]']
['[CLS]', '<CONCEPT>', '[SEP]']
```
## Expected behavior
**Expected Output:**
```
2
['[CLS]', 'hi', 'i', 'am', 'taking', '<CONCEPT>', 'ad', '##vil', '</CONCEPT>', '[SEP]']
['[CLS]', '<CONCEPT>', '[SEP]']
```
Seems like when we pass the special token alone it somehow works, but not in more text . | 04-14-2022 15:50:34 | 04-14-2022 15:50:34 | Thank you very much for sharing this issue!
It looks like you've hit on a problem in the data workflow in our slow tokenizers (which has been around for a very long time apparently :scream_cat: )!
To solve your problem, I suggest you do instead:
```python
from transformers import BertTokenizer
model_name = "bert-base-uncased"
tokenizer_s = BertTokenizer.from_pretrained(model_name)
n_added_tokens = tokenizer_s.add_special_tokens({"additional_special_tokens":["<CONCEPT>", "</CONCEPT>"]})
print(n_added_tokens)
context = "Hi I am taking <CONCEPT> Advil </CONCEPT>"
context_tokens = tokenizer_s(context).input_ids
print(tokenizer_s.convert_ids_to_tokens(context_tokens))
# ['[CLS]', 'hi', 'i', 'am', 'taking', '<CONCEPT>', 'ad', '##vil', '</CONCEPT>', '[SEP]']
```
_I'll take this opportunity to share my analysis of the origin of the error (**you can ignore this section if you're not interested, I prefer to keep track of it for the future me**):_
This behavior problem is due to the fact that being a special token for a slow tokenizer means at least 2 things:
1. the token must be listed in one of the `SPECIAL_TOKENS_ATTRIBUTES` keys
https://github.com/huggingface/transformers/blob/ee209d4d016e2ef1b2e73c4be64ad43895bc7e27/src/transformers/tokenization_utils_base.py#L775-L784
2. the token must be matched before the ("simplified") normalization step if it exists, i.e. the lowercase pass.
https://github.com/huggingface/transformers/blob/ee209d4d016e2ef1b2e73c4be64ad43895bc7e27/src/transformers/tokenization_utils.py#L448-L455
What happens here is that the `add_tokens` method has no impact on feature 1 - which makes sense, because we wouldn't know what kind of special token it is. What is annoying, however, is that since this token is never associated with any of the special token attributes, it is not listed in `self.all_special_tokens`, and so the `_create_trie` method cannot take its occurrence into account.
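A quick way to see feature 1 in action (a small sketch; behaviour as discussed for the tokenizer version in this issue):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(["<CONCEPT>"], special_tokens=True)
print("<CONCEPT>" in tokenizer.get_vocab())         # True: the token has an id
print("<CONCEPT>" in tokenizer.all_special_tokens)  # False: not tied to any special-token attribute

tokenizer.add_special_tokens({"additional_special_tokens": ["</CONCEPT>"]})
print("</CONCEPT>" in tokenizer.all_special_tokens)  # True: now matched before the lowercase pass
```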
I would need to think a bit more about how to avoid other users having the same problem as you (without breaking the rest) :smile:
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,786 | closed | Raise error and suggestion when using custom optimizer with Fairscale or Deepspeed | # What does this PR do?
1st Issues: OOM when saving the optimizer, https://github.com/huggingface/transformers/issues/14542
This issue happens when consolidating the optimizer, so we add an argument `save_optimizer_state` to give the option of whether to save it.
2nd issue: Using a custom optimizer has a problem with Fairscale and Deepspeed, https://github.com/huggingface/transformers/issues/15784
We simply raise an error and point the user to a different solution.
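For reference, a minimal sketch of the subclassing route the error message points to (the AdamW optimizer and lambda schedule below are placeholders, not part of this PR):
```python
import torch
from transformers import Trainer

class MyTrainer(Trainer):
    def create_optimizer_and_scheduler(self, num_training_steps: int):
        # Build the optimizer/scheduler inside the Trainer instead of passing them via
        # `optimizers=`, so the Fairscale/DeepSpeed integrations can hook into their creation.
        self.optimizer = torch.optim.AdamW(self.model.parameters(), lr=self.args.learning_rate)
        self.lr_scheduler = torch.optim.lr_scheduler.LambdaLR(
            self.optimizer, lr_lambda=lambda step: max(0.0, 1.0 - step / num_training_steps)
        )
```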
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
trainer: @sgugger
| 04-14-2022 15:27:02 | 04-14-2022 15:27:02 | I'm not really in favor of adding an argument to not save the optimizer as there is no point checkpointing if the optimizer is not saved. For the fairscale problem of OOM, there is an option that was detailed in #14542 to use ` force_broadcast_object=True` with the newest version of fairscale.<|||||>Oh, I just saw that solution. Maybe the second part where we raise the error is enough for this PR? Do you think it is necessary
```python3
if (self.sharded_ddp is not None or args.deepspeed) and (self.optimizer is not None or self.lr_scheduler is not None):
raise RuntimeError(
"Passing `optimizers` is not allowed if Fairscale or Deepspeed is enabled."
"You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method."
)
```<|||||>This change I agree with :-) If you want to remove the others, we can merge the PR.
Make sure to run `make style` after so that the quality check passes.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah, I will remove the others and make a check<|||||>There is still one error of quality script, so you'll need to run `make style` on your branch :-)
Could you also rename the PR so that the title reflects the changes you actually did?<|||||>Thanks! You have to mark the PR as ready for review, as GitHub won't let me merge a draft PR :-)<|||||>Thanks. It is ready for review now. |
transformers | 16,785 | closed | [Research] Speed up evaluation for XTREME-S | # What does this PR do?
This adds a couple of improvements to the evaluation parts of the XTREME-S script:
* fix the bug where filtering by language happened multiple times for parallel workers (redundantly)
* use `preprocess_logits_for_metrics` to transform the logits into pred_ids before concatenating them, to avoid OOMs (a short sketch of this pattern follows after this list)
* add the `--language_group` parameter to train on the FLEURS dataset in batches of languages (west/eastern european languages, south asian languages etc.)
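A minimal sketch of the `preprocess_logits_for_metrics` pattern mentioned above (simplified; not the exact XTREME-S code):
```python
import torch

def preprocess_logits_for_metrics(logits, labels):
    # Reduce the logits to predicted ids inside the evaluation loop so that only small
    # integer tensors are accumulated across batches instead of full vocab-size logits.
    if isinstance(logits, tuple):
        logits = logits[0]
    return torch.argmax(logits, dim=-1)
```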
Misc:
* add `--ctc_zero_infinity` to handle the noisy FLEURS transcriptions | 04-14-2022 14:58:19 | 04-14-2022 14:58:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten these are ready to merge now I think
Also cc @sanchit-gandhi: the fixes should make your life much easier if you decide to do a run of multilingual translation :) |
transformers | 16,784 | closed | Resuming Language Model Pre-training spends too much time skipping data | # 🚀 Feature request
Speed-up data skipping process when resuming from a checkpoint
## Motivation
Hi, currently resuming a language model pre-training from a checkpoint spends too much time skipping data.
<img width="1641" alt="Screenshot 2022-04-13 at 1 51 58 PM" src="https://user-images.githubusercontent.com/14203368/163404033-1232510a-efe7-4d9b-b4b0-699d49be234e.png">
While we can use `--ignore_data_skip`, the model would then be trained on already-seen data. This is problematic when training on a huge corpus, where I want the model to be trained only on data it has not encountered so far.
If anyone has found a simpler solution, please let me know.
| 04-14-2022 13:50:47 | 04-14-2022 13:50:47 | Hello! Could you please provide information on your environment, on the script used, etc? Thanks.<|||||>Hi, Thanks for the reply.
Please find the details
- `transformers` version: 4.18.0
- Platform: Red Hat Enterprise Linux release 8.5 (Ootpa)
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.1 (A100)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No. I am currently training on a single GPU
Please find the script used to train the model. I am using the language model training script from [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py). I am training on a dataset containing `117 M` sentences
```bash
TRANSFORMERS_CACHE=/tmp/ PYTORCH_TRANSFORMERS_CACHE=/tmp/ PYTHONIOENCODING=utf-8 python src/lm/run_mlm.py \
--model_type bert \
--config_overrides="max_position_embeddings=512" \
--remove_unused_columns False \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 8 \
--train_file train.txt \
--validation_file dev.txt \
--line_by_line \
--do_train \
--do_eval \
--preprocessing_num_workers 64 \
--pad_to_max_length \
--evaluation_strategy steps \
--num_train_epochs 1 \
--output_dir ./models/bert \
--report_to tensorboard \
--cache_dir /tmp/ \
--logging_steps 10000 \
--save_steps 10000 \
--save_total_limit 2 \
--tokenizer_name ./models/bert_tokenizer
```<|||||>cc @sgugger <|||||>I don't have a simpler solution.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,783 | closed | Fix nightly CI Accelerate failures | # Fix Nightly CI Build failures for Accelerate examples
## What does this add?
Changes the `run_examples_torch_all` CircleCI workflow to also install Accelerate from git for the time being, until the next release is performed.
The CI was failing due to accelerate being tested with unreleased fixes and improvements | 04-14-2022 13:34:21 | 04-14-2022 13:34:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16783). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,782 | closed | add wav2vec2_alignment |
# What does this PR do?
Generate character level alignment with wav2vec2 models for audio and text pairs.
https://github.com/huggingface/transformers/issues/16570
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/16570
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 04-14-2022 12:31:14 | 04-14-2022 12:31:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @patrickvonplaten can you please review this PR ?<|||||>>
> This looks very nice already! Great job :-)
>
> Could we maybe also add a README.md that gives an example on how to use it?
Thanks @patrickvonplaten for the suggestion, am not sure where to add, should I update the readme.md at huggingface/transformers/tree/main/examples/research_projects/wav2vec2 ??<|||||>> >
>
> > This looks very nice already! Great job :-)
> > Could we maybe also add a README.md that gives an example on how to use it?
>
> Thanks @patrickvonplaten for the suggestion, am not sure where to add, should I update the readme.md at huggingface/transformers/tree/main/examples/research_projects/wav2vec2 ??
Hey @arijitx,
Exactly it would be great if you could update the README.md of `research_projects/wav2vec2` and add a section about alignment there<|||||>> > >
> >
> >
> > > This looks very nice already! Great job :-)
> > > Could we maybe also add a README.md that gives an example on how to use it?
> >
> >
> > Thanks @patrickvonplaten for the suggestion, am not sure where to add, should I update the readme.md at huggingface/transformers/tree/main/examples/research_projects/wav2vec2 ??
>
> Hey @arijitx,
>
> Exactly it would be great if you could update the README.md of `research_projects/wav2vec2` and add a section about alignment there
Hi @patrickvonplaten, I have added a section at the end of the README for forced alignment.<|||||>Very cool work @arijitx! Could you run:
```
make style
```
once to fix the last failing test? :-)
@anton-l could you also take a final (quick) look here?<|||||>> Very cool work @arijitx! Could you run:
>
> ```
> make style
> ```
>
> once to fix the last failing test? :-)
>
> @anton-l could you also take a final (quick) look here?
Hi @patrickvonplaten I did make style locally it modified the alignment.py file and I have added that in this PR, but still its failing the style check any idea why ? <|||||>Agree that it'd be nice to have the example live on the HF Hub and it's probably indeed better to create a new directory called `alignment`. <|||||>> Hey @arijitx! I took the liberty of updating your branch with some style fixes, hope you don't mind :) Now it should pass all of the checks. Also, could you upload the example files (transcriptions and speech) to a repository on HF Hub, so that the example is more reproducible?
>
> As a suggestion: it would probably be tidier if we moved the example to `research_projects/wav2vec2/alignment/` @patrickvonplaten wdyt?
Thanks @anton-l for fixing the style :) @patrickvonplaten don't have much idea about how to setup a live example, can you please point me to any documentation if there is any ? Thanks in advance :) <|||||>> > Hey @arijitx! I took the liberty of updating your branch with some style fixes, hope you don't mind :) Now it should pass all of the checks. Also, could you upload the example files (transcriptions and speech) to a repository on HF Hub, so that the example is more reproducible?
> > As a suggestion: it would probably be tidier if we moved the example to `research_projects/wav2vec2/alignment/` @patrickvonplaten wdyt?
>
> Thanks @anton-l for fixing the style :) @patrickvonplaten don't have much idea about how to setup a live example, can you please point me to any documentation if there is any ? Thanks in advance :)
> > Hey @arijitx! I took the liberty of updating your branch with some style fixes, hope you don't mind :) Now it should pass all of the checks. Also, could you upload the example files (transcriptions and speech) to a repository on HF Hub, so that the example is more reproducible?
> > As a suggestion: it would probably be tidier if we moved the example to `research_projects/wav2vec2/alignment/` @patrickvonplaten wdyt?
>
> Thanks @anton-l for fixing the style :) @patrickvonplaten don't have much idea about how to setup a live example, can you please point me to any documentation if there is any ? Thanks in advance :)
I have the same question here @lhoestq @polinaeterna - do we have docs on how to create a short sample audio dataset ?<|||||>@arijitx actually it's quite easy with the following commands:
```python
from datasets import Audio, Dataset
ds = Dataset.from_dict({"audio": my_list_of_audio_files})
ds = ds.cast_column("audio", Audio())
ds.push_to_hub("my_dataset_name")
```
Wanna give it a try? Otherwise happy to help here :-)<|||||>Hi @patrickvonplaten do you mean to update the example with datasets ? I can do that I guess there is bengali openslr as well, also I can create a small toy dataset and use that for the example. I will try to do that. Btw seems like the last Commit messed up the style fix by anton :( Any idea how to fix that ? I just merged anton changes in my fork before updating the last commit
<|||||>@anton-l - could you take a look here? <|||||>@arijitx could you sync with the master branch on your fork please? Looks like there were doc style updates to `transformers` since you've opened the PR. Run the following:
```bash
git fetch upstream
git merge upstream/main
```
And then
```
make style
make quality
```
After this you should be able to commit and push as usual, with all of the style fixes applied :) <|||||>Hello everyone! It's a great idea to add alignment functionality to wav2vec models (really handy stuff), @arijitx are you going to finish this PR? It seems that it is a version of [example](https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html) from torch docs, I would suggest to take a look at [ctc-segmentation](https://github.com/lumaku/ctc-segmentation) package which does exactly same thing, except it might be more stable and a bit more elegant. Probably adding gist/code example like following in README for wav2vec model will be great and enough for alignment procedure, however it also can be added to source:
```python
import torch
import numpy as np
from typing import List
import ctc_segmentation
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer
# load model, processor and tokenizer
model_name = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
processor = Wav2Vec2Processor.from_pretrained(model_name)
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
# load dummy dataset and read soundfiles
SAMPLERATE = 16000
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]
transcripts = ["A MAN SAID TO THE UNIVERSE", "SIR I EXIST"]
def align_with_transcript(
audio : np.ndarray,
transcripts : List[str],
samplerate : int = SAMPLERATE,
model : Wav2Vec2ForCTC = model,
processor : Wav2Vec2Processor = processor,
tokenizer : Wav2Vec2CTCTokenizer = tokenizer
):
assert audio.ndim == 1
# Run prediction, get logits and probabilities
inputs = processor(audio, return_tensors="pt", padding="longest")
with torch.no_grad():
logits = model(inputs.input_values).logits.cpu()[0]
probs = torch.nn.functional.softmax(logits,dim=-1)
# Tokenize transcripts
vocab = tokenizer.get_vocab()
inv_vocab = {v:k for k,v in vocab.items()}
unk_id = vocab["<unk>"]
tokens = []
for transcript in transcripts:
assert len(transcript) > 0
tok_ids = tokenizer(transcript.replace("\n"," ").lower())['input_ids']
        tok_ids = np.array(tok_ids, dtype=np.int64)  # np.int is deprecated in recent NumPy
tokens.append(tok_ids[tok_ids != unk_id])
# Align
char_list = [inv_vocab[i] for i in range(len(inv_vocab))]
config = ctc_segmentation.CtcSegmentationParameters(char_list=char_list)
config.index_duration = audio.shape[0] / probs.size()[0] / samplerate
ground_truth_mat, utt_begin_indices = ctc_segmentation.prepare_token_list(config, tokens)
timings, char_probs, state_list = ctc_segmentation.ctc_segmentation(config, probs.numpy(), ground_truth_mat)
segments = ctc_segmentation.determine_utterance_segments(config, utt_begin_indices, char_probs, timings, transcripts)
return [{"text" : t, "start" : p[0], "end" : p[1], "conf" : p[2]} for t,p in zip(transcripts, segments)]
def get_word_timestamps(
audio : np.ndarray,
samplerate : int = SAMPLERATE,
model : Wav2Vec2ForCTC = model,
processor : Wav2Vec2Processor = processor,
tokenizer : Wav2Vec2CTCTokenizer = tokenizer
):
assert audio.ndim == 1
# Run prediction, get logits and probabilities
inputs = processor(audio, return_tensors="pt", padding="longest")
with torch.no_grad():
logits = model(inputs.input_values).logits.cpu()[0]
probs = torch.nn.functional.softmax(logits,dim=-1)
predicted_ids = torch.argmax(logits, dim=-1)
pred_transcript = processor.decode(predicted_ids)
# Split the transcription into words
words = pred_transcript.split(" ")
# Align
vocab = tokenizer.get_vocab()
inv_vocab = {v:k for k,v in vocab.items()}
char_list = [inv_vocab[i] for i in range(len(inv_vocab))]
config = ctc_segmentation.CtcSegmentationParameters(char_list=char_list)
config.index_duration = audio.shape[0] / probs.size()[0] / samplerate
ground_truth_mat, utt_begin_indices = ctc_segmentation.prepare_text(config, words)
timings, char_probs, state_list = ctc_segmentation.ctc_segmentation(config, probs.numpy(), ground_truth_mat)
segments = ctc_segmentation.determine_utterance_segments(config, utt_begin_indices, char_probs, timings, words)
return [{"text" : w, "start" : p[0], "end" : p[1], "conf" : p[2]} for w,p in zip(words, segments)]
print(align_with_transcript(audio,transcripts))
# [{'text': 'A MAN SAID TO THE UNIVERSE', 'start': 0.08124999999999993, 'end': 2.034375, 'conf': 0.0},
# {'text': 'SIR I EXIST', 'start': 2.3260775862068965, 'end': 4.078771551724138, 'conf': 0.0}]
print(get_word_timestamps(audio))
# [{'text': 'a', 'start': 0.08124999999999993, 'end': 0.5912715517241378, 'conf': 0.9999501323699951},
# {'text': 'man', 'start': 0.5912715517241378, 'end': 0.9219827586206896, 'conf': 0.9409108982174931},
# {'text': 'said', 'start': 0.9219827586206896, 'end': 1.2326508620689656, 'conf': 0.7700278702302796},
# {'text': 'to', 'start': 1.2326508620689656, 'end': 1.3529094827586206, 'conf': 0.5094435178226225},
# {'text': 'the', 'start': 1.3529094827586206, 'end': 1.4831896551724135, 'conf': 0.4580493446392211},
# {'text': 'universe', 'start': 1.4831896551724135, 'end': 2.034375, 'conf': 0.9285054256219009},
# {'text': 'sir', 'start': 2.3260775862068965, 'end': 3.036530172413793, 'conf': 0.0},
# {'text': 'i', 'start': 3.036530172413793, 'end': 3.347198275862069, 'conf': 0.7995760873559864},
# {'text': 'exist', 'start': 3.347198275862069, 'end': 4.078771551724138, 'conf': 0.0}]
```
The code can be run as is. One only needs to `pip install ctc-segmentation` <|||||>Think @anton-l is currently also working on alignment<|||||>@anton-l - let's maybe move forward here by doing the following:
- 1. Test this code on a small dataset like https://huggingface.co/datasets/patrickvonplaten/librispeech_asr_dummy: take the longest audio file and align it with the text
- 2. Add this test to the README as an example of how to use it
If this goes well, let merge :rocket: <|||||>Gentle ping @anton-l :-)<|||||>Merging this now without too much testing since it's in the research folders. <|||||>The failure seems unrelated as it points to the docs only - will monitor master<|||||>@patrickvonplaten @arijitx Awesome work y'all, this is super useful!! A few things I'd like to understand better - am I right in thinking this would be extremely compute intensive for longer files, even if the ctc-segmentation (sequential process, no GPU speedups here) package is used, with memory usage blowing up?
And to mitigate that, I would want to use an RNN based CTC model instead, which has character based vocab if I'm thinking about this correctly, right?
Could you offer any suggestions on a model/model architecture that would be particularly suited for such a task? I'm trying to obtain word level forced alignments.<|||||>Hopefully, this could then be implemented to work with any CTC-based models in Huggingface.<|||||>Hi @patrickvonplaten , is there a sample dataset (script.txt and .wavs) so I can reproduce running `alignment.py`? I followed the README.md documentation and I can't run this alignment script.
here is the command I am running:
```bash
python alignment.py --model_name="./facebook/wav2vec2_finetuned" --wav_dir="./wavs" --text_file="./script.txt" --input_wavs_sr=16000 --output_dir="./out_alignment"
```
content of the `script.txt`:
```
0000 first sentence content
0001 second sentence content
0002 third setence content
0003 foruth sentence content
0004 5th sentence content
```
`.\wavs` folder content:
<img width="144" alt="image" src="https://github.com/huggingface/transformers/assets/36199397/f3eda83b-f033-4423-96a8-63260391272f">
the audio is sampled at 16,000khz as provided in the input parameter.
I am getting the following error:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Blank Token id [PAD]/<pad> 29
Script must be in format: 00001 this is my sentence
```
This error was related to the following line of code: https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/examples/research_projects/wav2vec2/alignment.py#L187
after reading the `.txt` the tab character `\t` is not being recognized on my machine. I changed the code to look for a `,` separator and changed that line of code accordingly. After fixing that I ran into a new error:
```
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 1, 2, 276587]
```
Was this code tested? how can I reproduce this alignment script? I keep running into error after error. Maybe the issue I am running into is because the model I am trying to use is a CTC based model?
|
transformers | 16,781 | closed | wav2vec2_alignment |
# What does this PR do?
Generate character level alignment with wav2vec2 models for audio and text pairs.
https://github.com/huggingface/transformers/issues/16570
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/16570
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 04-14-2022 12:22:59 | 04-14-2022 12:22:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16781). All of your documentation changes will be reflected on that endpoint.<|||||><del>Hey @arijitx </del>
<del>The PR looked good! :-) Feel free to open it again if you want</del>
Superseded by https://github.com/huggingface/transformers/pull/16782 |
transformers | 16,780 | closed | Fix GPT-J onnx conversion | # What does this PR do?
Fix some problems encountered while converting a GPT-J model to Onnx.
Thanks to @ri938 who found where to fix bugs (on 🤗 Discord).
Models:
gpt2: @patrickvonplaten, @LysandreJik
and @lewtun because you reviewed the first PR for GPT-J Onnx Config, here #16274
I'm currently uploading a fully converted `EleutherAI/gpt-j-6B` model to the hub, which demonstrates that the conversion command line worked with these fixes. Find it [here](https://huggingface.co/OWG/gpt-j-6B)
Here is the command I used (I had to fix `atol` to 1e-04 because 1e-05 was not true while validating the model):
```bash
python -m transformers.onnx --model=EleutherAI/gpt-j-6B --feature=causal-lm --atol=1e-04 onnx/
``` | 04-14-2022 11:49:09 | 04-14-2022 11:49:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Fine I'm going to accept @lewtun 's suggestion. :hugs:
I see there are some failing checks, and I also see that there are problems with linting is it normal ?<|||||>The `run_tests_torch_and_tf` failure is unrelated, to fix the `check_code_quality` test, run the `make fixup` command and push again.<|||||>> The `run_tests_torch_and_tf` failure is unrelated, to fix the `check_code_quality` test, run the `make fixup` command and push again.
Last time I tried to run `make fixup` it changed linting on more than 87 files not related to the PR so I reverted the fixup<|||||>@ChainYo
hi, thanks for your contribution. I tested it with my basic gptj model and I think it's working pretty well.
But I don't think it works well when I test it with a model that was exported using the 'use cache' / 'use past' option. Can you give me an example, or check if there's anything wrong with my code?
Below is the test I did
```python
ort_session = onnxruntime.InferenceSession(onnx_path, providers=['CUDAExecutionProvider'])
#check session's input
for ort_session_input in ort_session.get_inputs():
print(ort_session_input.name, ort_session_input.shape, ort_session_input.type)
#input_ids ['batch', 'sequence'] tensor(int64)
#past_key_values.0.key ['batch', 16, 'past_sequence + sequence', 256] tensor(float)
#past_key_values.0.value ['batch', 16, 'past_sequence + sequence', 256] tensor(float)
#...
#past_key_values.27.key ['batch', 16, 'past_sequence + sequence', 256] tensor(float)
#past_key_values.27.value ['batch', 16, 'past_sequence + sequence', 256] tensor(float)
#attention_mask ['batch', 'past_sequence + sequence'] tensor(float)
input_txt_list = [
'text for test',
'gptj'
]
ort_input = make_onnx_inputs(input_txt_list)
for k,v in ort_input.items():
print(k,v.size(),v.dtype)
#input_ids torch.Size([2, 3]) torch.int64
#past_key_values.0.key torch.Size([2, 16, 0, 256]) torch.float32
#past_key_values.0.value torch.Size([2, 16, 0, 256]) torch.float32
#...
#past_key_values.27.key torch.Size([2, 16, 0, 256]) torch.float32
#past_key_values.27.value torch.Size([2, 16, 0, 256]) torch.float32
#attention_mask torch.Size([2, 3]) torch.float32
#TypeError
ort_output = ort_session.run(None, ort_input)
```
And this is the error message
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_1166/2757688042.py in <module>
----> 1 ort_output = ort_session.run(None, ort_input)
/opt/conda/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
190 output_names = [output.name for output in self._outputs_meta]
191 try:
--> 192 return self._sess.run(output_names, input_feed, run_options)
193 except C.EPFail as err:
194 if self._enable_fallback:
TypeError: run(): incompatible function arguments. The following argument types are supported:
1. (self: onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession, arg0: List[str], arg1: Dict[str, object], arg2: onnxruntime.capi.onnxruntime_pybind11_state.RunOptions) -> List[object]
Invoked with: <onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession object at 0x7f319b7e3030>, ['logits', 'present.0.key', 'present.0.value', 'present.1.key', ...'present.27.key', 'present.27.value'], {{'input_ids': tensor([[24496, 1956, 15560], [ 3, 3, 18566]]), 'past_key_values.0.key': tensor([], size=(2, 16, 0, 256)), 'past_key_values.0.value': tensor([], size=(2, 16, 0, 256)),
...
'past_key_values.27.key': tensor([], size=(2, 16, 0, 256)), 'past_key_values.27.value': tensor([], size=(2, 16, 0, 256)), 'attention_mask': tensor([[1.,1.,1.],[0.,0.,1.]])}, None
```
<|||||>> @ChainYo hi, thanks for your contribution. I tested it with my basic gptj model and I think it's working pretty well.
>
> But I don't think it's working well when I tested it with a model that was extracted using a method called 'use cache' or 'use past'. Can you give me an example or check if there's anything wrong with my code??
Hi @lsn1106 could you try to add your model to [netron.app](https://netron.app/) and check the expected inputs by clicking on the first layer ?
I'm not sure, but maybe `use_past` is a feature that is only implemented under the hood in the Transformers library which is not available in the Onnxruntime library.
<|||||>@ChainYo
Thank you for your kind advice. Maybe I should refer to this github code [[link](https://github.com/microsoft/onnxruntime/blob/70d97bdf532502cb9da8ff8711fe3b0ff11cfec3/onnxruntime/python/tools/transformers/gpt2_helper.py#L56)]<|||||>Hey @lsn1106 looking at your error in ORT
```
TypeError: run(): incompatible function arguments. The following argument types are supported:
```
it seems that you're not passing inputs with the correct types. What happens if you cast your inputs to NumPy arrays and ensure that `ort_input` truly is a `dict`?
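For example, something along these lines (a sketch that reuses the variable names from your snippet):
```python
import numpy as np

ort_inputs_np = {
    name: (tensor.cpu().numpy() if hasattr(tensor, "cpu") else np.asarray(tensor))
    for name, tensor in ort_input.items()
}
ort_output = ort_session.run(None, ort_inputs_np)
```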
<|||||>@lewtun
i've already tried that but same error occured. thank you :)<|||||>> Last time I tried to run `make fixup` it changed linting on more than 87 files not related to the PR so I reverted the fixup
Maybe you can rebase on `main` and run `make fixup` again? I'm not entirely sure why it should lint so many files, but this might resolve the problem<|||||>@lsn1106 would you mind sharing a reproducible code snippet that shows how you export the model, are creating the inputs for ORT, etc?<|||||>> > Last time I tried to run `make fixup` it changed linting on more than 87 files not related to the PR so I reverted the fixup
>
> Maybe you can rebase on `main` and run `make fixup` again? I'm not entirely sure why it should lint so many files, but this might resolve the problem
Well it seems to be solved, thanks!<|||||>I think the last thing we need to do is run `make style && make quality` and then this should be good to go π !<|||||>> I think the last thing we need to do is run `make style && make quality` and then this should be good to go rocket !
Yes sorry the first `make fixup` didn't run black, it should be good now!<|||||>Great, thanks for fixing the style issues! Merging this since the issue reported by @lsn1106 is unrelated to the fix provided by this PR |
transformers | 16,779 | closed | Why do we need to expand the token_type_ids? | in modeling_bert.py, forward function of BertEmbeddings, line 224, we can just write like this:
```python
if token_type_ids is None:
if hasattr(self, "token_type_ids"):
token_type_ids = self.token_type_ids[:, :seq_length]
else:
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
```
instead of:
```python
if token_type_ids is None:
    if hasattr(self, "token_type_ids"):
        buffered_token_type_ids = self.token_type_ids[:, :seq_length]
        buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
        token_type_ids = buffered_token_type_ids_expanded
    else:
        token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
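# self.token_type_ids is a registered buffer of shape (1, max_position_embeddings), so the slice
# above is (1, seq_length); expand() turns it into a (batch_size, seq_length) view without copying,
# presumably so token_type_ids always matches input_shape instead of relying on broadcasting.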
``` | 04-14-2022 11:45:47 | 04-14-2022 11:45:47 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,778 | closed | Long QuestionAnsweringPipeline fix. | # What does this PR do?
This handles the CLS index even on split QA contexts.
TODO: update the tests to one that actually showcases the bug.
Fixes #16769
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 04-14-2022 10:53:02 | 04-14-2022 10:53:02 | @LysandreJik This is not fully ready yet as the test does not really check that the bug is fixed. Do you have any ideas on how to check that behavior ? (Previous code was just silently not doing anything since the np.arrays are an array of `np.list(..)` in the splitted case.)
Edit: It's not covered by a slow test. I am hesitant to remove the fast test because it does not cover this.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I'll remove the fast test, don't want to leave a false sense of security here. |
transformers | 16,777 | closed | Question about google/bigbird-pegasus-large-pubmed | Hi, I have a question about the BigBird seq2seq model [google/bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed).
How was this model trained?
Was it pretrained with just word-level or sentence-level masking?
Or was it further fine-tuned by the summarization task?
Thank you. | 04-14-2022 09:08:17 | 04-14-2022 09:08:17 | From the model card:
> This checkpoint is obtained after fine-tuning BigBirdPegasusForConditionalGeneration for summarization on pubmed dataset from [scientific_papers](https://huggingface.co/datasets/scientific_papers)[](https://huggingface.co/google/bigbird-pegasus-large-pubmed#bibtex-entry-and-citation-info).
Also, please use our [forum](https://discuss.huggingface.co/) for such questions. We'd like to keep Github issues for bugs/feature requests.
Thanks! |
transformers | 16,776 | closed | How to set different LOCAL_RANK env variable values for multiple GPUs of a single node machine with Accelerate | This is more of a question than a bug report.
I am adopting the script with Accelerate from https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py.
I intend to parallelize over two GPUs of a single machine, one process on GPU 0 and the other on GPU 1. I notice that
`Line 203: self.local_process_index = int(os.environ.get("LOCAL_RANK", -1))`
(https://github.com/huggingface/accelerate/blob/209db19dc885887682b07ff88fb6c840cbeb3c1c/src/accelerate/state.py#L128).
Since the LOCAL_RANK env variable is a single integer value (say, I set it to 0), I am just wondering how Accelerate picks up a second GPU rank (i.e. 1). I don't see that in the above code.
I must not have fully understood the mechanism yet, and haven't found an FAQ about it. Could anyone explain a bit more?
Thanks | 04-14-2022 09:07:55 | 04-14-2022 09:07:55 | Hi,
Same problem here, did you find any solution to this problem? @chris-opendata <|||||>Here is the solution:
1. Activate your venv environment.
2. Install accelerate package.
3. Type "accelerate config" at terminal to go through multigpu configuration.
4. In your running bash script, use something like "accelerate launch main_train.py ...." instead of "python -m main_train.py ..."
Note that you may not need '&' at the end in order to run it in the background, depending on your computing environment. I am on Slurm.
That is it. Accelerate will work out the rest itself.
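A minimal sketch of what each launched process then sees (the script name is just an example; run it with `accelerate launch main_train.py`):
```python
# main_train.py
from accelerate import Accelerator

accelerator = Accelerator()

# `accelerate launch` starts one process per GPU and sets LOCAL_RANK separately for each of them,
# so process 0 reports rank 0 / cuda:0 and process 1 reports rank 1 / cuda:1
print(f"local rank: {accelerator.local_process_index}, device: {accelerator.device}")
```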
|
transformers | 16,775 | closed | How to initialize BigBird Encoder-Decoder model with weights of full_attention Transformer Encoder model like BERT | Hi, I am trying to train a summarization model with [run_summarization.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py).
As a starting point, I want to initialize a BigBird encoder-decoder model with the weights of a full-attention Transformer encoder model (e.g. BERT, [SPECTER](https://huggingface.co/allenai/specter)).
What should I do?
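(For reference, one hedged starting point is the generic `EncoderDecoderModel` helper, which warm-starts a seq2seq model from pretrained encoder-style checkpoints; it does not by itself convert full attention into BigBird's sparse attention, so treat it only as a sketch.)
```python
from transformers import EncoderDecoderModel

# rough sketch: the checkpoint names are examples, not a verified recipe for BigBird
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/specter",  # encoder initialized from a full-attention encoder
    "allenai/specter",  # decoder initialized from the same weights, cross-attention added randomly
)
```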
Thanks. | 04-14-2022 05:55:10 | 04-14-2022 05:55:10 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,774 | closed | Add doc tests for Albert and Bigbird | # What does this PR do?
Add doc tests for Albert and Bigbird, a part of issue https://github.com/huggingface/transformers/issues/16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @ydshieh
Documentation: @sgugger
| 04-14-2022 02:38:49 | 04-14-2022 02:38:49 | @ydshieh Could you please take a look at it? I think we still have a problem with `AlbertTokenizer ` as we have discussed on the Discord channel `the AlbertTokenizer will add an extra "_" just after the "[MASK]" token` which will lead to the different shape between `input_text` and `target_text`. This is the code snippet for checking the output.
```python
from transformers import AlbertTokenizer, AlbertForMaskedLM
import torch
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")
input_text = "The capital of France is [MASK]."
target_text = "The capital of France is Paris."
tokenizer.tokenize(input_text)
# ['▁the', '▁capital', '▁of', '▁france', '▁is', '[MASK]', '▁', '.']
tokenizer.tokenize(target_text)
# ['▁the', '▁capital', '▁of', '▁france', '▁is', '▁paris', '.']
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @vumichien
I won't be available for the next few days. Will check when I am back, or my colleague could check this PR :-)
Regarding the Albert tokenizer, do you encounter any runtime error due to the shape issue? I understand that the shapes are different, and had a short discussion with the team. But we thought it should still work. Sorry for not responding this part earlier, but if you see errors due to these shapes, could you post it here, please?<|||||>@ydshieh When I run the test for doc for **modeling_albert.py** in local, the error will show like the following (sorry for very long error)
```
======================================================================================= FAILURES =======================================================================================
____________________________________________________ [doctest] transformers.models.albert.modeling_albert.AlbertForMaskedLM.forward ____________________________________________________
1034 >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
1035 >>> tokenizer.decode(predicted_token_id)
1036 'reims'
1037
1038 ```
1039
1040 ```python
1041 >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
1042 >>> # mask labels of non-[MASK] tokens
1043 >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
UNEXPECTED EXCEPTION: RuntimeError('The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 1')
Traceback (most recent call last):
File "/usr/lib/python3.8/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest transformers.models.albert.modeling_albert.AlbertForMaskedLM.forward[10]>", line 1, in <module>
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 1
/home/vumichien/Detomo/transformers/src/transformers/models/albert/modeling_albert.py:1043: UnexpectedException
1036 'reims'
1037
1038 ```
1039
1040 ```python
1041 >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
1042 >>> # mask labels of non-[MASK] tokens
1043 >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
1044
1045 >>> outputs = model(**inputs, labels=labels)
UNEXPECTED EXCEPTION: ValueError('Expected input batch_size (10) to match target batch_size (9).')
Traceback (most recent call last):
File "/usr/lib/python3.8/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest transformers.models.albert.modeling_albert.AlbertForMaskedLM.forward[11]>", line 1, in <module>
File "/home/vumichien/Detomo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vumichien/Detomo/transformers/src/transformers/models/albert/modeling_albert.py", line 964, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
File "/home/vumichien/Detomo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vumichien/Detomo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1163, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/vumichien/Detomo/transformers/venv/lib/python3.8/site-packages/torch/nn/functional.py", line 2996, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (10) to match target batch_size (9).
/home/vumichien/Detomo/transformers/src/transformers/models/albert/modeling_albert.py:1045: UnexpectedException
1037
1038 ```
1039
1040 ```python
1041 >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
1042 >>> # mask labels of non-[MASK] tokens
1043 >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
1044
1045 >>> outputs = model(**inputs, labels=labels)
1046 >>> round(outputs.loss.item(), 2)
UNEXPECTED EXCEPTION: NameError("name 'outputs' is not defined")
Traceback (most recent call last):
File "/usr/lib/python3.8/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest transformers.models.albert.modeling_albert.AlbertForMaskedLM.forward[12]>", line 1, in <module>
NameError: name 'outputs' is not defined
/home/vumichien/Detomo/transformers/src/transformers/models/albert/modeling_albert.py:1046: UnexpectedException
=================================================================================== warnings summary ===================================================================================
venv/lib/python3.8/site-packages/flatbuffers/compat.py:19
/home/vumichien/Detomo/transformers/venv/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================================== short test summary info ================================================================================
FAILED src/transformers/models/albert/modeling_albert.py::transformers.models.albert.modeling_albert.AlbertForMaskedLM.forward
======================================================================= 1 failed, 6 passed, 1 warning in 52.18s ========================================================================
```
The error log is the same when I run test with doc for **modeling_tf_albert.py**<|||||>Maybe a quick easy way is just to overwrite the examples for AlbertForMaskedLM in the model files. Something similar to https://github.com/huggingface/transformers/pull/16565#discussion_r843972010
But that case is reversed: masked input has fewer tokens. So you need to have some different operations.
Let's wait @patrickvonplaten to see if he has better suggestion.<|||||>@vumichien @ydshieh, I'd be in favor of overwriting both Albert (so that MLM is correct) as well as BigBird (to show that it's long-range). What do you think?<|||||>@ydshieh @patrickvonplaten I have overwritten both the doc-test examples of Albert and Bigbird. What do you think about them?<|||||>> @ydshieh @patrickvonplaten I have overwritten both the doc-test examples of Albert and Bigbird. What do you think about them?
That's great! The classification and QA example could be made even much longer for BigBird :-) The examples look already great though. Happy to merge as is as well :-) <|||||>I have changed the longer examples for doctest. The examples are quite long, but in my opinion, they are good to show that Bigbird is long-range model<|||||>Can we put that text in some dataset instead? The documentation will become a bit unreadable with such a long text, where as we could just load a dataset in one line and take the first sample.<|||||>@sgugger Thank you for your suggestion. I have changed to use the examples from squad datasets. How do you think about that?<|||||>Way better, and great that you're showing the shape! Good for me if @patrickvonplaten is okay.<|||||>@ydshieh I have revised as your suggestion. Please let me know if I need to revise something.<|||||>> @ydshieh I have revised as your suggestion. Please let me know if I need to revise something.
Love it! Thank you.
I will let @patrickvonplaten to have a final look (if any) & click the merge button π―
<|||||>~~Running the last time -> will merge if all tests pass~~
Merged π Thanks again! |
transformers | 16,773 | closed | loading roberta from local file | I get this message: ```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertModel: [ -['roberta.encoder.layer.9.attention.self.value.bias', 'lm_head.layer_norm.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'lm_head.bias', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.embeddings.position_ids', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 
'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.embeddings.position_embeddings.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 
'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'lm_head.dense.bias', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.7.attention.self.query.weight', 'lm_head.dense.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'lm_head.layer_norm.weight', 
'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.weight']
- This IS expected if you are initializing TFBertModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
-
```
How to make sure that roberta is saved correctly and locally?
| 04-13-2022 23:31:45 | 04-13-2022 23:31:45 | You're loading RoBERTa in a `TFBertModel`, you should load it in a `TFRobertaModel`! |
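A minimal sketch of that suggestion (the path is a placeholder; `from_pt=True` is needed if the local folder only contains a PyTorch checkpoint):
```python
from transformers import TFRobertaModel

model = TFRobertaModel.from_pretrained("path/to/local/roberta", from_pt=True)
```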
transformers | 16,772 | closed | Refactor issues with yaml | This PR:
- Refactors the issue templates using the newly introduced YAML issues.
- Introduces a `config.yml` for issues which enables a redirect to the forum
- Removes the benchmarking issue which has not been used at all and introduces noise in the issue selection process. With its removal, I'm aiming for a lower number of blank issues.
See below for the comparison across issues. To see the issue templates yourself and play with the `config.yml`, head over to [my fork](https://github.com/LysandreJik/transformers/issues/new/choose).
Old | New
:-------------------------:|:-------------------------:
Bug report (MD) | Bug report (YML)
<img width="659" alt="image" src="https://user-images.githubusercontent.com/30755778/163278600-b8fd6b21-88bd-426c-976c-5c93115c0e44.png"> | <img width="660" alt="image" src="https://user-images.githubusercontent.com/30755778/163278655-8ad5ccd0-f7f9-41e4-8334-a5b59d10f55f.png">
Old feature request | New feature request
<img width="797" alt="image" src="https://user-images.githubusercontent.com/30755778/163278892-d7736e07-e66d-4a3c-a4d6-8b546f2efb43.png"> | <img width="796" alt="image" src="https://user-images.githubusercontent.com/30755778/163278913-bd83c945-d125-4739-9eee-a7ff8eb5932b.png">
The new index will now contain additional links, including a link to the forum:
<img width="1044" alt="image" src="https://user-images.githubusercontent.com/30755778/163278991-c5ad6cd8-c08a-4a45-b3c7-560ad2c28180.png">
| 04-13-2022 22:14:38 | 04-13-2022 22:14:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks!
For the new bug report screenshot, looks like the section Who can help is not editable?
The new layout looks good! I would like to have a section 'Code Snippet' if it is not in the new layout yet. π (Haven't really tried it yet, just looking the screenshots)<|||||>> The new layout looks good! I would like to have a section 'Code Snippet' if it is not in the new layout yet. slightly_smiling_face (Haven't really tried it yet, just looking the screenshots)
@ydshieh, yes the reproducible code example is available in the issue, I just migrated the existing issues so everything is still included.
Thanks for your reviews, merging! |
transformers | 16,771 | closed | Some tests misusing assertTrue for comparisons fix | Fixes #16770 | 04-13-2022 21:15:56 | 04-13-2022 21:15:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>tests are failing now because the comparisons are actually being performed by the tests. Before the tests were just checking if the first argument was truthy, because that's what `assertTrue` does.
I'm just a static analysis bot (boop beep) so I'm not sure if the comparisons in the tests are wrong and therefore the tests need updating, or if the logic under test needs fixing. Can someone please confirm?
It seems like you uncovered some bugs in the testing - thanks a lot! Do you want to fix them or should I go ahead and fix them? <|||||>@patrickvonplaten glad to be of service :)
If you could go ahead and fix them that would be great. My "thing" is running static analysis checkers to detect common easily missed mistakes on hundreds of open source projects and then letting them know so they can fixed properly :)
FYI and FWIW I have [GitHub integration](https://github.com/marketplace/django-doctor/) so I can review your PRs to prevent problems like this in future :)
<|||||>Thanks a lot @code-review-doctor ! |
transformers | 16,770 | closed | Some tests misusing assertTrue for comparisons | `assertTrue` is not for comparing arguments; use `assertEqual` for that.
The developer's intent of the test was to compare argument 1 with argument 2, which is not happening. Really what is happening is that the test passes because the first argument is truthy. The correct method to use is `assertEqual`.
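A tiny illustration of the difference (a hypothetical test case, not one from the repository):
```python
import unittest


class Demo(unittest.TestCase):
    def test_always_passes(self):
        # the second argument is only used as the failure *message*, so this never fails
        self.assertTrue(1 + 1, 3)

    def test_actually_compares(self):
        # compares the two values and fails as intended
        self.assertEqual(1 + 1, 3)
```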
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_modeling_flax_wav2vec2.py#L389
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_modeling_flax_wav2vec2.py#L431
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_modeling_wav2vec2.py#L1064
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_modeling_wav2vec2.py#L1101
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L205
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L216
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L217
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L492
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L499
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2/test_tokenization_wav2vec2.py#L506
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py#L371
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py#L388
https://github.com/huggingface/transformers/blob/main/tests/test_sequence_feature_extraction_common.py#L188
https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer_utils.py#L100
https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer_utils.py#L102
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_tf_longformer.py#L416
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_tf_longformer.py#L489
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_tf_longformer.py#L526
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_longformer.py#L419
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_longformer.py#L423
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_longformer.py#L448
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_longformer.py#L496
https://github.com/huggingface/transformers/blob/main/tests/longformer/test_modeling_longformer.py#L534
https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L812
https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L822
https://github.com/huggingface/transformers/blob/main/tests/wav2vec2_phoneme/test_tokenization_wav2vec2_phoneme.py#L268
I found this issue automatically, see other issues [here](https://codereview.doctor/huggingface/transformers) | 04-13-2022 21:14:49 | 04-13-2022 21:14:49 | |
transformers | 16,769 | closed | Invalid CLS masking in question answer pipelines top K calculation | ## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.10.102-99.473.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@Narsil
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Using any question and answering model which leverages CLS token and run it with pipelines.
2. Supply an input record which has (con)text longer than the max_seq_length set to cause it to chunk the input record.
3. If you inspect the results of p_mask after line https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/question_answering.py#L307
4. You will see the token position used by CLS in the model (typically the first) has not been set to False and remains True meaning it will be masked out as an undesired token later on.
This will work if the input record is not chunked. The reason is the code before that constructs p_mask doesn't work as expected when the chunk record are different lengths (when calling np.asarray) and the lines pointed to above then fail silently to correctly mask the CLS token position.
This then causes differences in the calculations of the results answer spans and their associated probabilities.
## Expected behavior
The CLS token position should be correctly set to False so that it is considered a valid token for consideration in answer calculations regardless of whether the input record was chunked or not.
As a simple but possibly not efficient fix, I replaced the two lines in the `if` pointed to above with the following:
```
for span_id in range(num_spans):
    cls_index = np.nonzero(np.array(encoded_inputs["input_ids"][span_id]) == self.tokenizer.cls_token_id)
    p_mask[span_id][cls_index] = 0
```
| 04-13-2022 21:09:10 | 04-13-2022 21:09:10 | Hi @antonyscerri ,
Thank you very much for the report !
This definitely seems like a legitimate issue.
Do you have an example where this really triggers an error in the pipeline? (This is to craft a test to make sure this is tested against.)
I will try to find one on a dummy example, but it's always better if we can have a real world example.
<|||||>Unfortunately I cannot share the data I observed it with. However any text block with a question which produces an answer should show a change in its score between it being fixed or not, assuming an appropriate model that uses CLS token is used. If you have an example where the NIL answer (based on CLS token) is the "best" answer you may see the top answer change from some other span to the NIL answer. See below for a quick example i just put together, which i tested it using "deepset/roberta-base-squad2" model.
And sorry, but for completeness, I also realised my quick fix involved changing the construction of p_mask to the following (moving the asarray inside the outer list):
```
p_mask = [
    np.asarray([tok != 1 if question_first else 0 for tok in encoded_inputs.sequence_ids(span_id)])
    for span_id in range(num_spans)
]
```
Running the following example with the original code yields the top answer of:
Answer: orth individuals in >Europe< after Paris and the
Score: 0.1588
With my quick fix I get:
Answer: ><London is the capita
Score: 0.9948
The other answer is 2nd place with a score of 0.000.
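For reference, a rough sketch of the pipeline call used for the numbers above (the file name is a placeholder for the JSON record shown below, and the kwarg names are assumptions to check against your transformers version):
```python
import json

from transformers import pipeline

record = json.load(open("london_example.json"))  # the record shown below, saved to a file
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question=record["question"],
    context=record["context"],
    handle_impossible_answer=True,
    max_seq_len=512,  # a long enough context gets split into several overlapping spans
)
print(result)
```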
The data used was a passage taken from a Wikipedia page on London and was run with `handle_impossible_answer` set to True and max_seq_length=512:
```
{"context": "London is the capital and largest city of England and the United Kingdom. It stands on the River Thames in south-east England at the head of a 50-mile (80 km) estuary down to the North Sea, and has been a major settlement for two millennia. The City of London, its ancient core and financial centre, was founded by the Romans as Londinium and retains boundaries close to its medieval ones. Since the 19th century, \"London\" has also referred to the metropolis around this core, historically split between the counties of Middlesex, Essex, Surrey, Kent, and Hertfordshire, which largely comprises Greater London, governed by the Greater London Authority. The City of Westminster, to the west of the City of London, has for centuries held the national government and parliament. As one of the world's global cities, London exerts strong influence on its arts, commerce, education, entertainment, fashion, finance, health care, media, tourism, and communications, and has sometimes been called the capital of the world. Its GDP (β¬801.66 billion in 2017) makes it the biggest urban economy in Europe, and it is one of the major financial centres in the world. In 2019 it had the second-highest number of ultra high-net-worth individuals in Europe after Paris and the second-highest number of billionaires in Europe after Moscow. As of 2021, London has the most millionaires of any city. With Europe's largest concentration of higher education institutions, it includes Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London. The city is home to the most 5-star hotels of any city in the world. In 2012, London became the first city to host three Summer Olympic Games. London is the capital and largest city of England and the United Kingdom. It stands on the River Thames in south-east England at the head of a 50-mile (80 km) estuary down to the North Sea, and has been a major settlement for two millennia. The City of London, its ancient core and financial centre, was founded by the Romans as Londinium and retains boundaries close to its medieval ones. Since the 19th century, \"London\" has also referred to the metropolis around this core, historically split between the counties of Middlesex, Essex, Surrey, Kent, and Hertfordshire, which largely comprises Greater London, governed by the Greater London Authority. The City of Westminster, to the west of the City of London, has for centuries held the national government and parliament. As one of the world's global cities, London exerts strong influence on its arts, commerce, education, entertainment, fashion, finance, health care, media, tourism, and communications, and has sometimes been called the capital of the world. Its GDP (β¬801.66 billion in 2017) makes it the biggest urban economy in Europe, and it is one of the major financial centres in the world. In 2019 it had the second-highest number of ultra high-net-worth individuals in Europe after Paris and the second-highest number of billionaires in Europe after Moscow. As of 2021, London has the most millionaires of any city. With Europe's largest concentration of higher education institutions, it includes Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London. The city is home to the most 5-star hotels of any city in the world. 
In 2012, London became the first city to host three Summer Olympic Games.", "question": "What country is Paris the capital of?"}
```<|||||>Thank you so much for this test, definitely helps a lot !
Can't replicate with small random models (for obvious reasons) but at least we now have a slow test covering this. |
transformers | 16,768 | closed | Missing commas causing concatenation fix | Fixes #16767 | 04-13-2022 21:06:15 | 04-13-2022 21:06:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,767 | closed | Missing commas causing concatenation | A missing comma results in strings being implicitly concatenated together, which is probably not what was intended.
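A small illustration of the failure mode (hypothetical file names):
```python
paths = [
    "tests/foo.py"   # <- missing comma
    "tests/bar.py",
]
print(paths)  # ['tests/foo.pytests/bar.py'] -- the two strings were silently joined
```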
These are the affected lines:
https://github.com/huggingface/transformers/blob/main/tests/bert_japanese/test_tokenization_bert_japanese.py#L176
https://github.com/huggingface/transformers/blob/main/tests/bert_japanese/test_tokenization_bert_japanese.py#L249
I found this issue automatically, see other issues [here](https://codereview.doctor/huggingface/transformers) | 04-13-2022 21:05:39 | 04-13-2022 21:05:39 | |
transformers | 16,766 | closed | Fix PT TF ViTMAE | # What does this PR do?
Fix PT TF ViTMAE: use the same settings in both PT and TF (instead of in only one model). Otherwise, the PT/TF equivalence tests for them won't use something like `std = 0.02`, and get larger (init) weights --> larger diff in outputs.
Also, **the `eps` for `layer norm` layers should be the same in PT/TF**.
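As a quick sanity check of how much the `eps` alone can move the outputs (a standalone sketch, not the ViTMAE code):
```python
import torch

x = torch.randn(2, 3, 8)
out_a = torch.nn.functional.layer_norm(x, (8,), eps=1e-5)
out_b = torch.nn.functional.layer_norm(x, (8,), eps=1e-12)
print((out_a - out_b).abs().max())  # small but non-zero difference
```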
(not a really big deal in practice, since here it is `1e-5` vs. `1e-12` -> but it also affects the tests) | 04-13-2022 19:13:50 | 04-13-2022 19:13:50 | Let's merge since CI fails quite a lot on this one<|||||>> Let's merge since CI fails quite a lot on this one
Ok! |
transformers | 16,765 | closed | Fixup no_trainer examples scripts and add more tests | # Fixup `no_trainer` Examples and Bolster their tests
## What does this add?
This changes the logging behavior inside the `no_trainer` scripts, slightly changes how the initial configuration is stored, and adds tests for the tracking API.
## Who is it for?
Users of `transformers` who want to try out `Accelerate` quickly
## Why is this needed?
I was made aware that the scripts were laggy when it came to how logs were sent to weights and biases when using the `no_trainer` scripts, and this was due to the step being passed in as a parameter, causing a lag in when it gets uploaded.
To follow the original Accelerate scripts, these are now passed in as a `"step"` parameter to the overall dictionary logged via `accelerate.log()`.
`TensorBoard` also does not like when `Enum`s are logged, so there is a manual adjustment right before saving the hyperparameters to get the enum value from the LR scheduler type.
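A rough, self-contained sketch of the two adjustments (argument names follow the `no_trainer` scripts of this era and are assumptions if your `accelerate` version differs):
```python
from enum import Enum

from accelerate import Accelerator


class SchedulerType(Enum):
    LINEAR = "linear"


accelerator = Accelerator(log_with="tensorboard", logging_dir=".")
config = {"learning_rate": 5e-5, "lr_scheduler_type": SchedulerType.LINEAR}
config["lr_scheduler_type"] = config["lr_scheduler_type"].value  # TensorBoard cannot serialize Enums
accelerator.init_trackers("example_run", config)

for completed_steps in range(3):
    accelerator.log({"train_loss": 0.1, "step": completed_steps})  # step travels inside the dict
accelerator.end_training()
```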
Finally, as `TensorBoard` is a test requirement, I added in tests for tracking inside the no_trainer tests, as `TensorBoard` is also how we test that behavior in the CI in Accelerate proper. | 04-13-2022 18:31:09 | 04-13-2022 18:31:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,764 | closed | NER training crash | transformers version: 4.17.0
Hi, I run the script for the NER task on the [few-nerd](https://huggingface.co/datasets/dfki-nlp/few-nerd) dataset:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py
```
CUDA_VISIBLE_DEVICES=3 python -u run_ner_no_trainer.py \
--model_name_or_path roberta-large \
--dataset_name dfki-nlp/few-nerd \
--dataset_config_name "supervised" \
--output_dir /scratch/w/wluyliu/yananc/finetunes/roberta_nerd_fine \
--text_column_name "tokens" \
--label_column_name "fine_ner_tags" \
--num_train_epochs 7 --local_files_only --debug --debug_cnt 50000
```
When I use a small number of samples, for example below 30000, things go smoothly and the precision, recall and F1 are in good alignment with the original paper.
However, when I increase the number of samples used for training, for example to 50000 or the full set, the metrics become zero and the predictions from the model are all "O". It is quite weird.
I also tried the conll2003 dataset, and the result is the same.
Am I missing something?
Thanks.
| 04-13-2022 18:02:57 | 04-13-2022 18:02:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Was that issue resolved? I am facing a similar problem with the HF implementation of LUKE: https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke |
transformers | 16,763 | closed | Fix batch size in evaluation loop | # What does this PR do?
The batch size used in the evaluation loop is wrong: it's using the per device batch size, which is different from the actual batch size when using DataParallel with more than one GPU. As a result, the `test_evaluate` test is failing for 2 GPUs (see #16716).
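For illustration, a tiny sketch of the distinction (plain Python, not the actual `Trainer` code):
```python
# With DataParallel, each forward pass consumes per_device_batch_size * n_gpu samples,
# so the evaluation bookkeeping must use the full batch size, not the per-device one.
per_device_eval_batch_size = 8
n_gpu = 2  # e.g. two GPUs wrapped in torch.nn.DataParallel

eval_batch_size = per_device_eval_batch_size * max(1, n_gpu)
print(eval_batch_size)  # 16, not 8
```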
This PR fixes that. | 04-13-2022 17:16:57 | 04-13-2022 17:16:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,762 | closed | [Flax `.from_pretrained`] Raise a warning if model weights are not in float32 | The Flax `.from_pretrained` method respects the dtype of the model weights from which it is loaded. For model weights stored in bfloat16/float16, Flax models are instantiated with parameter weights in bfloat16/float16 respectively (see #16736). The general assumption is that all Flax model weights are in float32. Loading and storing model weights in a lower precision (bfloat16/float16) is likely to lead to undesirable behaviour and model instabilities. This PR adds a warning to the `.from_pretrained` method should any of the model weights not be in float32, and advises the user to upcast the weights to float32 prior to use. | 04-13-2022 17:05:19 | 04-13-2022 17:05:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>As an example, loading a set of PyTorch float16 Bart model weights into a FlaxBartForCausalLM model produces the following warning:
```python
from transformers import FlaxBartForCausalLM
model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart-fp16', from_pt=True)
```
```
Some of the weights of FlaxBartForCausalLM were initialized in float16 precision from the model checkpoint at sanchit-gandhi/tiny-random-bart-fp16:
[('model', 'decoder', 'embed_positions', 'embedding'), ('model', 'decoder', 'embed_tokens', 'embedding'), ('model', 'decoder', 'layernorm_embedding', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'scale'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '0', 'fc1', 'bias'), ('model', 'decoder', 'layers', '0', 'fc1', 'kernel'), ('model', 'decoder', 'layers', '0', 'fc2', 'bias'), ('model', 'decoder', 'layers', '0', 'fc2', 'kernel'), ('model', 'decoder', 'layers', '0', 'final_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'final_layer_norm', 'scale'), ('model', 'decoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 'fc1', 'bias'), ('model', 'decoder', 'layers', '1', 'fc1', 'kernel'), ('model', 'decoder', 'layers', '1', 'fc2', 'bias'), ('model', 'decoder', 'layers', '1', 'fc2', 'kernel'), ('model', 'decoder', 'layers', '1', 'final_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'final_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('model', 'decoder', 
'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn_layer_norm', 'scale')]
You should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this.
```<|||||>Sorry, this is a super nitty question, but I just wanted to ask to make sure we're all on the same page for best practice! Should one not ideally merge their own PRs, rather than the reviewer?<|||||>Aah, yes! One should merge their own PRs; I rushed this one a bit.
transformers | 16,761 | closed | CI: pip install now updates | # What does this PR do?
Follow-up from https://github.com/huggingface/transformers/pull/16751: `pip install` in non-remote GHA workflows now updates packages, to allow us to override pre-installed versions. | 04-13-2022 16:53:36 | 04-13-2022 16:53:36 | This PR actually needs further changes: adding `-U` doesn't solve it (possibly because of other version requirements in the packages installed in the image).
Should I update the dependencies of click (to `click>=8.0`, required by black) and protobuf (to `protobuf>=3.8.0`, required by tensorflow)? cc @LysandreJik @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>It would be nice to find a general solution that won't have us needing to add a new dependency update in three weeks.<|||||>(closing the PR after some offline discussion -- going to attempt to change to cache fresh venvs instead) |
transformers | 16,760 | closed | [Data2Vec] Add data2vec vision | # What does this PR do?
Finishes the Data2Vec integration by adding https://huggingface.co/models?other=data2vec-vision from https://github.com/facebookresearch/data2vec_vision/tree/main/beit
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-13-2022 16:48:56 | 04-13-2022 16:48:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Uploaded checkpoints are here: https://huggingface.co/models?other=data2vec-vision . Will add a README to those ones and all other data2vec ones after this PR is merged |
transformers | 16,759 | closed | How to use Wav2Vec2ProcessorWithLM in pipeline? | I created a `Wav2Vec2ProcessorWithLM` as described in the blog post (https://huggingface.co/blog/wav2vec2-with-ngram).
How can I use it in the `pipeline`?
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
vocab_dict = processor.tokenizer.get_vocab()
from pyctcdecode import build_ctcdecoder
unigrams_file = open("language_model/vocabulary.txt", "r")
unigrams_list = unigrams_file.readlines()
decoder = build_ctcdecoder(
labels=list(vocab_dict.keys()),
kenlm_model_path="language_model/5gram.bin",
unigrams=unigrams_list
)
from transformers import Wav2Vec2ProcessorWithLM
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model=processor_with_lm, device=0)
```
outputs
```
AttributeError: 'Wav2Vec2ProcessorWithLM' object has no attribute 'config'
``` | 04-13-2022 16:44:23 | 04-13-2022 16:44:23 | cc @patrickvonplaten <|||||>Hey @gxbag,
Please make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `"language_model/vocabulary.txt"`.
Regarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following:
```py
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
vocab_dict = processor.tokenizer.get_vocab()
from pyctcdecode import build_ctcdecoder
unigrams_file = open("language_model/vocabulary.txt", "r")
unigrams_list = unigrams_file.readlines()
decoder = build_ctcdecoder(
labels=list(vocab_dict.keys()),
kenlm_model_path="language_model/5gram.bin",
unigrams=unigrams_list
)
from transformers import Wav2Vec2ProcessorWithLM
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0)
```
This should correctly initialize the pipeline.<|||||>Hi @patrickvonplaten, thank you very much for answering. The code you provided above does not seem to use the language model. Here is a minimal working example to reproduce the error:
```python
from transformers import Wav2Vec2ProcessorWithLM
processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)
```
I believe I should be able to just
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
```
if I had set it up correctly, which I unfortunately have not.
I manually copied config.json from https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/main/config.json and added it to my repository, as I believed this missing file could be the cause of the problem, but I think there are possibly more problems. Could you help me out?<|||||>Hey @gxbag,
Yeah, your model here: https://huggingface.co/gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm/tree/main should indeed work out of the box.
Why doesn't it work? Can you show a code snippet that demonstrates how it doesn't work?<|||||>Hey @patrickvonplaten,
When I run this snippet:
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
```
The error output is exactly the following:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/media/home/run.ipynb Cell 1' in <cell line: 2>()
      1 from transformers import pipeline
----> 2 pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")

File ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py:549, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, pipeline_class, **kwargs)
    545 # Infer the framework from the model
    546 # Forced if framework already defined, inferred if it's None
    547 # Will load the correct model if possible
    548 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 549 framework, model = infer_framework_load_model(
    550     model,
    551     model_classes=model_classes,
    552     config=config,
    553     framework=framework,
    554     revision=revision,
    555     task=task,
    556     **model_kwargs,
    557 )
    559 model_config = model.config
    561 load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None

File ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py:255, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
    252     continue
    254 if isinstance(model, str):
--> 255     raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
    257 framework = "tf" if model.__class__.__name__.startswith("TF") else "pt"
    258 return framework, model

ValueError: Could not load model gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>).
```
When I run the longer snippet:
```python
from transformers import Wav2Vec2ProcessorWithLM
processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)
```
The output is exactly the same:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/media/home/run.ipynb Cell 2' in <cell line: 5>()
2 processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
4 from transformers import pipeline
----> 5 pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)
File ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py:549, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, pipeline_class, **kwargs)
545 # Infer the framework from the model
546 # Forced if framework already defined, inferred if it's None
547 # Will load the correct model if possible
548 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 549 framework, model = infer_framework_load_model(
550 model,
551 model_classes=model_classes,
552 config=config,
553 framework=framework,
554 revision=revision,
555 task=task,
556 **model_kwargs,
557 )
559 model_config = model.config
561 load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None
File ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py:255, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
252 continue
254 if isinstance(model, str):
--> 255 raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
257 framework = "tf" if model.__class__.__name__.startswith("TF") else "pt"
258 return framework, model
ValueError: Could not load model gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>).
```<|||||>Hey @gxbag,
Your repo does not have a PyTorch model file. Could you add the correct `pytorch_model.bin` to your folder here: https://huggingface.co/gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm/tree/main?<|||||>Hey @patrickvonplaten,
Thank you so much for the hint! With this, almost everything is solved: the model with the above snippet now produces a result and correctly uses the language model.
Something still seems off: When I use a longer audio file and use the striding method (as per this blog post: https://huggingface.co/blog/asr-chunking) of the pipeline to process longer audio, the last bit of text output is cut off.
To reproduce:
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm")
output = pipe("/any/long/audio/file.wav", chunk_length_s=30, stride_length_s=(6, 3))
output
```
This cuts off the last 3 seconds (as specified in `stride_length_s`) of the audio file from the generated output. I didn't see this behavior when I used regular models without an added language model.
What could be the cause here?<|||||>Ah yeah, this was a bug in Transformers that we recently fixed, I think :-) See https://github.com/huggingface/transformers/pull/16730 . Could you check whether everything works correctly on master? :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten Is it possible to use Wav2Vec2ProcessorWithLM during _training_, i.e. to use an LM at train time? Or, is there another way to do this with some other HF tool?<|||||>Sure, this is possible @kaleko - you just need to adapt the training script a bit, but it should be pretty trivial :-) <|||||>> Hey @gxbag,
>
> Please make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `"language_model/vocabulary.txt"`.
>
> Regarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following:
>
> ```python
> from transformers import AutoProcessor
> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
> vocab_dict = processor.tokenizer.get_vocab()
>
> from pyctcdecode import build_ctcdecoder
> unigrams_file = open("language_model/vocabulary.txt", "r")
> unigrams_list = unigrams_file.readlines()
> decoder = build_ctcdecoder(
> labels=list(vocab_dict.keys()),
> kenlm_model_path="language_model/5gram.bin",
> unigrams=unigrams_list
> )
>
> from transformers import Wav2Vec2ProcessorWithLM
> processor_with_lm = Wav2Vec2ProcessorWithLM(
> feature_extractor=processor.feature_extractor,
> tokenizer=processor.tokenizer,
> decoder=decoder
> )
>
> from transformers import pipeline
> pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0)
> ```
>
> This should correctly initialize the pipeline.
Hi @patrickvonplaten ,
I've just tried your solution. However, it does not use the LM for decoding. `self.type` is always `"ctc"` because `feature_extractor._processor_class` is always `None`. See here:
https://github.com/huggingface/transformers/blob/b487096b02307cd6e0f132b676cdcc7255fe8e74/src/transformers/pipelines/automatic_speech_recognition.py#L127
And this is my code:
``` python
model = Wav2Vec2ForCTC.from_pretrained("./results/checkpoint-11600").to("cuda")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
from pyctcdecode import build_ctcdecoder
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="lm.small_3gram_correct.arpa",
)
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
pipe = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor_with_lm.tokenizer,
feature_extractor=processor_with_lm.feature_extractor,
decoder=processor_with_lm.decoder,
device=0)
```
Any clues?<|||||>Hey @anderleich, sorry could you add a new issue for the problem? It's always a bit hard to keep track of already answered issues :sweat_smile: <|||||>Done! ;) |
transformers | 16,758 | closed | Add onnx export of models with a multiple choice classification head | This PR adds the export support of models with a multiple-choice classification head, resolving [#16695](https://github.com/huggingface/transformers/issues/16695)
This includes the following additions:
* The `"multiple-choice"` feature was added to the corresponding model topologies
* The dummy inputs are generated to match the expected input shape, which includes an extra dimension corresponding to the number of candidate answers
* The `inputs` method of each model's corresponding `OnnxConfig` was modified to support the additional dynamic axis corresponding to the number of candidates (see the sketch below)
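For illustration, a minimal sketch of what such an `inputs` definition can look like (a hypothetical config class, not the exact code added in this PR):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MultipleChoiceOnnxConfigSketch(OnnxConfig):
    """Hypothetical config illustrating the extra dynamic axis for multiple choice."""

    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        if self.task == "multiple-choice":
            # Inputs have shape (batch_size, num_choices, sequence_length),
            # so axis 1 is an additional dynamic "choice" axis.
            dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
        else:
            dynamic_axis = {0: "batch", 1: "sequence"}
        return OrderedDict([("input_ids", dynamic_axis), ("attention_mask", dynamic_axis)])
```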
| 04-13-2022 16:35:06 | 04-13-2022 16:35:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review as well as the reminder @sgugger !<|||||>Thanks for the reviews @lewtun and @michaelbenayoun.
I have added some comments to make things clearer, as well as support for the BigBird, Data2VecText, Electra and FlauBERT models. Also, when running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py`, all the tests pass.<|||||>Thanks for iterating on this @echarlaix - it looks great!
transformers | 16,757 | closed | [self-scheduled ci] explain where dependencies are | As discussed on Slack, explain where the dependencies are when Docker images are used.
@LysandreJik | 04-13-2022 16:17:38 | 04-13-2022 16:17:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,756 | closed | Add warning when using older version of torch for ViltFeatureExtractor | Closes https://github.com/huggingface/transformers/issues/16637 | 04-13-2022 15:58:22 | 04-13-2022 15:58:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'd update to use `logging.warning` instead of `warnings.warn`, but other than that it looks like a sound approach.<|||||>Ok I made the change<|||||>Sorry should have been clearer, in this instance you should move the `logger` instantiation above, and use `logger.warning`. See an example here:
https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/base.py#L234-L237<|||||>@LysandreJik Ok done<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ah, before merging we'll need to update the code quality.
Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:
```
pip install -e ".[quality]"
```
And then run them with:
```
make fixup
```<|||||>@LysandreJik just updated it<|||||>@xhlulu it seems that the CI still isn't green, you can click on the check "check_code_quality" above to see why it's failing.<|||||>```
Traceback (most recent call last):
File "/home/circleci/.local/bin/doc-builder", line 8, in <module>
sys.exit(main())
File "/home/circleci/.local/lib/python3.6/site-packages/doc_builder/commands/doc_builder_cli.py", line 43, in main
args.func(args)
File "/home/circleci/.local/lib/python3.6/site-packages/doc_builder/commands/style.py", line 28, in style_command
raise ValueError(f"{len(changed)} files should be restyled!")
ValueError: 30 files should be restyled!
Exited with code exit status 1
CircleCI received exit code 1
```<|||||>@NielsRogge i just updated to match upstream, should work now<|||||>Hmm @xhlulu I checked and there's no `meshgrid` being used in `ViltFeatureExtractor`.
It's only the model that requires torch 1.10 or higher, right? Not the feature extractor?<|||||>It's in `ViltEmbeddings`:
https://github.com/huggingface/transformers/blob/39f8eafc1b6f00769240f714e2df5b2c5f111c32/src/transformers/models/vilt/modeling_vilt.py#L115-L151<|||||>I moved the error message, but when trying to run the make fixup, it gives this:
```
(venv) xhlu@XHL-Desktop:~/dev/transformers$ make fixup
make: execvp: /bin/sh: Argument list too long
make: *** [Makefile:10: modified_only_fixup] Error 127
```<|||||>Unrelated error, merging! |
transformers | 16,755 | closed | Kill async pushes when calling push_to_hub with blocking=True | # What does this PR do?
This PR fixes a bug that sometimes appears in the `Trainer` when `push_to_hub=True`: if one of the async pushes finishes after a regular non-async push, the history gets messed up and we end up with an error like this:
```
The push command with PID 1468 failed.
remote: error: cannot lock ref 'refs/heads/main': is at 07c85fd69cd46a7daee6323c5a5eefc3e6a886da but expected 1fd7f122ef725f8e340a14bc97d537812de44076
```
To fix this, when the `Trainer` (or the user) calls `push_to_hub` with `blocking=True`, we interrupt any push in progress. The commit history will still be good since the commits themselves don't take time.
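For illustration, a rough sketch of the idea using plain `subprocess` handles (class and method names are hypothetical and do not match the actual `Repository`/`Trainer` internals):
```python
import subprocess
from typing import List


class PushManagerSketch:
    """Keeps track of background `git push` processes so a blocking push can supersede them."""

    def __init__(self) -> None:
        self.async_pushes: List[subprocess.Popen] = []

    def push_async(self, repo_dir: str) -> None:
        # Non-blocking push that may still be running when the next push is requested.
        self.async_pushes.append(subprocess.Popen(["git", "-C", repo_dir, "push"]))

    def push_blocking(self, repo_dir: str) -> None:
        # Interrupt any in-flight async push first so it cannot race the blocking one.
        for process in self.async_pushes:
            if process.poll() is None:  # still running
                process.kill()
        self.async_pushes.clear()
        subprocess.run(["git", "-C", repo_dir, "push"], check=True)
```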
cc @philschmid who had the error. | 04-13-2022 15:53:29 | 04-13-2022 15:53:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think this is the right solution, but I wonder if we can't test it in a reproducible manner. Maybe we could do something like the following:
- Commit a largish file, and push that
- Reset the head without keeping the changes so that we're back on the previous commit
- Commit a small file, and push that.
Now the first push will likely fail with the error above. Unfortunately, this is likely a very heavy test, so I'm not sure it should be part of the CI. It can still be used to test the validity of the solution above, though!<|||||>Actually @philschmid but let's say we are the same person ;-) <|||||>For more context, this was tested with `roberta-large`
transformers | 16,754 | closed | Update GPT2 I/O definition to be recognized by ORT optimizer. | This PR aims at making GPT2 + past more compatible with the ONNX Runtime optimizer (especially attention fusion).
Past key values should be presented as a single tensor with both key and value stacked on the leading axis.
It also provides a concat mechanism to merge the resulting past key values so that the result is a single tensor too.
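For illustration, a small PyTorch sketch of the layout change (shapes are illustrative; this is not the PR's actual implementation):
```python
import torch

batch, num_heads, seq_len, head_dim = 1, 12, 5, 64

# Hugging Face GPT-2 returns one (key, value) pair of tensors per layer...
key = torch.randn(batch, num_heads, seq_len, head_dim)
value = torch.randn(batch, num_heads, seq_len, head_dim)

# ...while the ORT-friendly layout stacks key and value on a leading axis,
# giving a single tensor of shape (2, batch, num_heads, seq_len, head_dim) per layer.
present = torch.stack([key, value], dim=0)
print(present.shape)  # torch.Size([2, 1, 12, 5, 64])

# Merging the new present states onto an existing past along the sequence axis
# keeps everything as one tensor as well.
past = torch.zeros(2, batch, num_heads, 3, head_dim)
merged = torch.cat([past, present], dim=-2)
print(merged.shape)  # torch.Size([2, 1, 12, 8, 64])
```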
- [x] I/O shapes changes
- [x] I/O dtype changes
- [ ] Past keys wrapper
- [ ] Monkey Patching
- [ ] Unittests | 04-13-2022 15:40:38 | 04-13-2022 15:40:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16754). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |