repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 23,753 | closed | fix Whisper tests on GPU | # What does this PR do?
The daily CI showed that Whisper has new test failures, related to the recent merge of the prompting feature. This PR fixes those test failures.
The tests ran OK on CPU but failed on GPU because the input data wasn't moved to the GPU.
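As a rough illustration, here is a minimal sketch of the pattern being fixed, not the actual test code; the checkpoint name, the dummy audio and the prompt-related calls are assumptions:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny").to(device)

# dummy 1-second clip at 16 kHz, just to keep the sketch self-contained
audio = [0.0] * 16000
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# The failure mode: inputs prepared on CPU while the model lives on GPU.
# Moving every input tensor to the model's device is the fix.
input_features = inputs.input_features.to(device)
prompt_ids = torch.tensor(processor.tokenizer.get_prompt_ids("nautical terms")).to(device)

predicted_ids = model.generate(input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```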
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-25-2023 12:25:20 | 05-25-2023 12:25:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also skipping a few tests in `WhisperModelTest` that were previously skipped in `WhisperEncoderModelTest`, see https://github.com/huggingface/transformers/pull/22060
Although I just saw there's another open PR dealing with the same issue, so maybe none of these should be skipped: https://github.com/huggingface/transformers/pull/22803<|||||>Thanks again! |
transformers | 23,752 | closed | Fix is_ninja_available() | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
i noticed that ninja cannot be detected by importlib because it is not a python package, so i fixed is_ninja_available() with an [implementation comes from pytorch](https://github.com/pytorch/pytorch/blob/4882cd08013733a5dbe299871ad7e974bce074b3/torch/utils/cpp_extension.py#L1629).
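For reference, a minimal sketch of the subprocess-based check, modeled on the linked PyTorch helper (not necessarily the exact code merged here):

```python
import subprocess


def is_ninja_available():
    """Return True if the ninja build system is on PATH, False otherwise."""
    try:
        # ninja is a standalone binary, not an importable Python module,
        # so probe it by running it instead of using importlib.
        subprocess.check_output("ninja --version".split())
    except Exception:
        return False
    return True
```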
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-25-2023 11:55:34 | 05-25-2023 11:55:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR! Can you just run `make style` on your branch to fix the quality issue?
i have passed all checks now!!<|||||>Thanks! |
transformers | 23,751 | closed | Fix push_to_hub in Trainer when nothing needs pushing | # What does this PR do?
This PR fixes `push_to_hub` in the `Trainer`. Since `Repository.push_to_hub` can return `None` or a tuple, we have to do a small test before unpacking the output.
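A minimal sketch of the guard described above (function and variable names are assumptions, not the exact Trainer code):

```python
def _push_if_needed(repo, commit_message="End of training"):
    # When called non-blocking, `Repository.push_to_hub` returns None if there is
    # nothing to push, otherwise a (url, command_in_progress) tuple, so guard
    # before unpacking instead of always doing `_, job = repo.push_to_hub(...)`.
    push_work = repo.push_to_hub(commit_message=commit_message, blocking=False, auto_lfs_prune=True)
    if push_work is None:
        return None
    _, push_in_progress = push_work
    return push_in_progress
```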
Fixes #23712 | 05-25-2023 11:54:35 | 05-25-2023 11:54:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,750 | open | sequences and scores dimensions are mismatch when using generate() | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
generation_config = GenerationConfig(
    temperature=0.7,
    top_p=1,
    top_k=40,
    num_beams=1,
    do_sample=True,
    num_return_sequences=8,
    max_length=109,
    return_dict_in_generate=True,
    output_scores=True,
)
generator_outputs = generator_model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    generation_config=generation_config,
    synced_gpus=True,
)  # (B*num_candidates, L)
generated_tokens = generator_outputs.sequences  # (B*num_candidates, L+1)
generated_logits = torch.stack(generator_outputs.scores, dim=1)
generated_seqs = self.generator_tokenizer.batch_decode(
    generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
# get the probability of generated tokens
seq_len = generated_logits.size(1)
vocab_size = generated_logits.size(-1)
generated_probs = nn.functional.softmax(generated_logits, dim=-1)
new_generated_probs = generated_probs.contiguous().view(-1, vocab_size)
generated_tokens_indices = generated_tokens.contiguous().view(-1).unsqueeze(1)
new_generated_probs = torch.gather(new_generated_probs, 1, generated_tokens_indices)
```
### Expected behavior
when I run the code up to `new_generated_probs = torch.gather(new_generated_probs, 1, generated_tokens_indices)`, the following error arises:
`RuntimeError: Size does not match at dimension 0: expected index [1408, 1] to be smaller than self [1232, 250680] apart from dimension 1` (interleaved with the same error from another process, reporting index [1968, 1] vs self [1744, 250680])
I looked carefully at the generate method, and it seems the dimensions of generator_outputs.sequences and generator_outputs.scores are different, because at line 1363 of `transformers/src/transformers/generation/utils.py`:
`generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length`
however in `class SampleDecoderOnlyOutput`, it says:
`sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)):
The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter
if all batches finished early due to the eos_token_id.`
and
`scores (tuple(torch.FloatTensor) *optional*, returned when output_scores=True is passed or when config.output_scores=True):
Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for
each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size)`.
`
That means the length of sequences equals max_length, while the number of scores equals max_new_tokens; the difference between the two is `input_ids_seq_length`.
However, I can't use max_length and max_new_tokens together because max_new_tokens takes priority.
Is there any way to deal with it?
I use my own dataset with BLOOM, but I think you can reproduce this problem with any model on any dataset. | 05-25-2023 10:48:42 | 05-25-2023 10:48:42 | btw I think it's probably a mistake caused by the `sample()` function in `generation/utils.py`: when calculating the scores, it stops early. When I check my tensor dimensions, they always differ by 208 (the sequences are 208 longer than the scores), which is 208/8 = 26 per returned sequence, i.e. exactly my `input_ids_seq_length`
the sequences should be equal to scores, right?<|||||>Hi there, I am also having some issues with the shapes between sequences and scores. In my case, I am finding the length of the scores tuple is longer than the sequence length? Any idea why this would be? Seems like it's the opposite of your issue<|||||>Hey @GasolSun36 @aburns4 👋
The behavior you see is what is expected. The docstrings are a bit outdated and in need of a retouch 🤗 In a nutshell:
1. In `generate`, the scores are exclusively related to new tokens (tokens not in the prompt)
2. The output sequence for decoder-only models (like BLOOM) includes the prompt as well
3. If you want to obtain the logits for the prompt tokens, you should run a model forward pass with your prompt (see below). Please note that the logits always refer to the next token, so the logits with index 0 correspond to the token with index 1.
4. If you want to get the logits for the whole sequence (prompt + generated tokens), you have to concatenate these two sets of logits :)
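A minimal sketch of lining the two outputs up for a decoder-only model, reusing the variable names from the reproduction snippet earlier in this issue (treat it as an illustration rather than tested code):

```python
import torch

# `inputs` / `generator_outputs` refer to the reproduction snippet above
prompt_len = inputs["input_ids"].shape[1]                     # input_ids_seq_length
scores = torch.stack(generator_outputs.scores, dim=1)         # (batch*num_return_sequences, new_tokens, vocab)

# sequences = [prompt | generated]; keep only the generated part so it lines up with `scores`
generated_only = generator_outputs.sequences[:, prompt_len:]  # (batch*num_return_sequences, new_tokens)
probs = torch.nn.functional.softmax(scores, dim=-1)
token_probs = probs.gather(-1, generated_only.unsqueeze(-1)).squeeze(-1)
```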
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
inputs = tokenizer(["The quick brown"], return_tensors="pt")
logits = model(**inputs).logits
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same error <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,749 | closed | [LongFormer] code nits, removed unused parameters | # What does this PR do?
The LongformerEmbeddings "position_embedding_type" parameter is not used.
Fixes #23730
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-25-2023 08:53:15 | 05-25-2023 08:53:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Okay, seems there was this #23343 but they are compatible |
transformers | 23,748 | closed | LION optimizer calling error | TypeError: Lion.__init__() got an unexpected keyword argument 'is_paged' | 05-25-2023 08:34:55 | 05-25-2023 08:34:55 | As said previously, there is nothing we can do if you do not follow the issue template and post a code reproducer.<|||||>@sgugger hello, this is what I done:
```
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
deepspeed=deepspeed,
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_ratio=0.1,
num_train_epochs=num_epochs,
learning_rate=learning_rate,
# fp16=True,
fp16=not int8_train,
logging_steps=10,
# optim="adamw_torch",
optim="paged_lion_32bit",
evaluation_strategy="steps" if val_set_size > 0 else "no",
save_strategy="steps",
eval_steps=50 if val_set_size > 0 else None,
save_steps=50,
output_dir=output_dir,
save_total_limit=5,
load_best_model_at_end=True if val_set_size > 0 else False,
ddp_find_unused_parameters=False if ddp else None,
group_by_length=group_by_length,
report_to="wandb" if use_wandb else None,
run_name=wandb_run_name if use_wandb else None,
),
data_collator=transformers.DataCollatorForSeq2Seq(
tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
),
)
```
the newly committed LION does not seem to load properly, please test it<|||||>This is not something I can reproduce as many of the objects you use are undefined.<|||||>I believe this is because you're using a previous version of `bitsandbytes`. In the latest version, there's an additional argument called `is_paged`:
https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/optim/lion.py#L9<|||||>@louisowen6 Hi, are there any minimal examples of how to enable the LION optimizer with LLaMA along with DeepSpeed training?
I just can't find a detailed description of this; it would also help to have experimental results comparing AdamW and LION on a LLaMA model<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
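A quick, hedged way to check whether the installed `bitsandbytes` predates the paged optimizers mentioned above (upgrading with `pip install -U bitsandbytes` should add the `is_paged` argument):

```python
import inspect
from importlib.metadata import version

from bitsandbytes.optim import Lion

print(version("bitsandbytes"))
# Releases that predate the paged optimizers define Lion.__init__ without `is_paged`,
# which is what produces the TypeError above.
print(inspect.signature(Lion.__init__))
```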
transformers | 23,747 | closed | Fix `pip install --upgrade accelerate` command in modeling_utils.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Really anyone, I don't want to waste Tim Dettmers' time
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-25-2023 04:23:30 | 05-25-2023 04:23:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,746 | closed | tf run_clm.py calls model.get_input_embeddings().weight which does not exist | ### System Info
seems to happen on mac, linux, and windows, both python 3.9 and 3.11. currently using stable from pip.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
if you run run_clm.py from scratch it will produce an error saying that embeddings.weight doesn't exist.
https://github.com/huggingface/transformers/blob/e45e756d22206ca8fa9fb057c8c3d8fa79bf81c6/examples/tensorflow/language-modeling/run_clm.py#L486
Looks like it details a temporary workaround, I'm guessing the get_input_embeddings function's contract has changed.
### Expected behavior
I would expect the weight variable to exist. | 05-25-2023 02:44:41 | 05-25-2023 02:44:41 | cc @Rocketknight1 <|||||>I was able to fix it locally by calling embeddings.hidden_size. Not sure if that is correct.<|||||>Hi @wesboyt, can you paste the exact command you called `run_clm.py` with?<|||||>--overwrite_output_dir --do_train --model_type gpt2 --tokenizer_name gpt2-it --train_file encoded.txt --do_eval --dataloader_num_workers 1 --output_dir out --block_size 256 --save_steps 100000 --validation_file features.txt --learning_rate 0.001 --num_train_epochs 1 --optim adamw_torch --per_device_train_batch_size 8 --config_overrides num_hidden_layers=14,n_head=16,vocab_size=13,hidden_size=1024<|||||>from what i see in the debugger the type of the actual embeddings variable is TFSharedEmbeddings
I was able to fix it by using hidden_size; it was successfully training after that change
<|||||>the actual hidden size variable in the debugger did not align with the config overrides hidden_size parameter, it was like 728 or 768 or something along those lines.<|||||>Hi @wesboyt, I can't reproduce the issue here - I was able to train `gpt2` using the TF `run_clm.py` script and didn't encounter these errors. Can you try the command I used and confirm that there isn't some environment issue on your machine?
```python run_clm.py --model_name_or_path gpt2 --output_dir output --dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --block_size 128```<|||||>Yea that runs for me, maybe its related to some of the other parameters. Its all good i fixed it locally for me and if people encounter it in the future they should be able to find this.<|||||>I will close this, sorry to distract the team. I believe it happened because i used torch adamw inside of tf. Funny how it still trained after my hiddensize fix. |
transformers | 23,745 | closed | Bug using revision param in AutoModelForCausalLM.from_pretrained | ### System Info
2023-05-24 23:09:53.575434: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:63: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-05-24 23:10:05.261610: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to use the new shiny mpt model from the huggingface hub from a revision :
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
import torch
import transformers
import accelerate
model_name = 'mosaicml/mpt-7b'
model = AutoModelForCausalLM.from_pretrained(model_name,
trust_remote_code=True,
revision="refs/pr/23",
device_map="auto"
)
```
But I stumbled on this error after using the above code:
`ValueError: MPTForCausalLM does not support `device_map='auto'` yet.`
The "auto" option was indeed not supported in the main branch, but we added a correction in the PR branch (hence the argument revision="refs/pr/23").
I did some investigation and the model was indeed loading the main .py files :
```
Downloading (…)main/modeling_mpt.py: 100%
17.4k/17.4k [00:00<00:00, 1.12MB/s]
Downloading (…)in/param_init_fns.py: 100%
12.6k/12.6k [00:00<00:00, 971kB/s]
Downloading (…)resolve/main/norm.py: 100%
2.56k/2.56k [00:00<00:00, 131kB/s]
A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b:
- norm.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
A new version of the following files was downloaded from https://huggingface.co/mosaicml/mpt-7b:
- param_init_fns.py
- norm.py
```
You can see the main/ here. I manually checked the modeling_mpt.py file and it didn't have the PR changes.
So I tried to find where the bug was inside the transformers package ... (first time looking at the code).
I am a bit surprised!
Basically, the code rewrites the config values after having read them (it adds the information about the repo ids in add_model_info_to_auto_map in generic.py in utils/ from the transformers package), something that seems normal.
```
"auto_map": {
"AutoConfig": "mosaicml/mpt-7b--configuration_mpt.MPTConfig",
"AutoModelForCausalLM": "mosaicml/mpt-7b--modeling_mpt.MPTForCausalLM"
}
```
It notably adds the "--" string.
Then in get_class_from_dynamic_module (in dynamic_module_utils.py) it has:
```
if "--" in class_reference:
repo_id, class_reference = class_reference.split("--")
# Invalidate revision since it's not relevant for this repo
revision = "main"
```
So the revision becomes "main" and from here we are done.
I suppose if I do a PR removing the revision override some people will not be happy?
### Expected behavior
The expected behaviour is to load the file from the PR branch. (not the main/) | 05-24-2023 23:03:06 | 05-24-2023 23:03:06 | cc @sgugger who is more familiar with this, I won't have bandwidth to dive into this now. <|||||>The revision argument is supported for weights but not for the code at the moment. Support will be added soon, but in the meantime you can download the revision for this repo and then use `from_pretrained` with a local folder and it will work.<|||||>Nice thanks you @sgugger !<|||||>@sgugger isn't this a security issue? When using `trust_remote_code=True`, there is a warning to explicitly pass a revision to make sure that you are running code that you have looked at. But IIUC, if you pass a `revision="commit SHA I have verified"` it will actually load whatever code is on the `main` branch?<|||||>@samhavens This comes from the recent change we made to avoid duplicating the code files in all repos (now there is one source of truth). As I said we're working on a fix, should come tomorrow/early next week.<|||||>If you want to give it a try, the PR linked above should fix your issue.<|||||>Thanks @sgugger! |
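A minimal sketch of the workaround suggested in the reply above (the repo id and revision are the ones from this issue; everything else is an assumption):

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

# Download the PR revision locally (remote-code files included), then load from the folder
local_dir = snapshot_download("mosaicml/mpt-7b", revision="refs/pr/23")
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    trust_remote_code=True,
    device_map="auto",
)
```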
transformers | 23,744 | closed | ImportError: cannot import name 'PartialState' from 'transformers.trainer_pt_utils' | ### System Info
`transformers` version: 4.28.0
`accelerate` version: 0.19.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`from transformers.trainer_pt_utils import PartialState`
### Expected behavior
it cannot import the class, eventhough I tried to downgrade transformers and install accelerate | 05-24-2023 22:27:50 | 05-24-2023 22:27:50 | That class is defined in Accelerate, not in `trainer_utils`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
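For reference, a sketch of the import that does work, since the class is provided by Accelerate rather than by `transformers.trainer_pt_utils`:

```python
# PartialState lives in the `accelerate` package, not in transformers.trainer_pt_utils
from accelerate import PartialState

state = PartialState()
print(state.device, state.num_processes)
```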
transformers | 23,743 | closed | BertTokenizer.save_vocabulary does not save the full vocab | ### System Info
The current version still has the issue.
The [documentation](https://huggingface.co/transformers/v4.8.2/model_doc/bert.html?highlight=berttokenizer#transformers.BertTokenizer.save_vocabulary) states that `save_vocabulary` "save[s] only the vocabulary of the tokenizer **(vocabulary + added tokens)**" but the [code](https://github.com/huggingface/transformers/blob/e45e756d22206ca8fa9fb057c8c3d8fa79bf81c6/src/transformers/models/bert/tokenization_bert.py#L358) only deals with the vocab. I believe changing this line to:
```
for token, token_index in sorted(dict(self.vocab, **self.added_tokens_encoder).items(), key=lambda kv: kv[1]):
```
would solve it.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running `save_vocabulary` as is reproduces the issue.
### Expected behavior
The full (new) vocabulary should be saved to file. | 05-24-2023 22:21:22 | 05-24-2023 22:21:22 | Hey! Not sure which part of the documentation says that, but it is in the wrong! The additional special tokens are saved somewhere else, and properly handled. `Bert` is also one of our core model, but also an old one, which is why the doc might not be up to date. When loading the vocab, only the actual vocabulary is expected.
Tell me if this does not solve your confusion! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,742 | closed | Remove the multi step tokenization warning when using HF Data Collators | We create a new function that pads without warning the user to instead calling `tokenizer.forward` because it's faster.
We use it for Transformers' own DataCollator calls.
It doesn't make much sense that a DataCollator would change the state of a tokenizer imho, so every time:
- we save the state of the tokenizer with regards to the warning
- disable the warning
- pad
- restore the state of whether we want to warn or not.
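A minimal sketch of that save/disable/pad/restore pattern (the `deprecation_warnings` attribute and its key are assumptions about the tokenizer internals, not necessarily the exact code in this PR):

```python
def pad_without_fast_tokenizer_warning(tokenizer, *pad_args, **pad_kwargs):
    """Pad, temporarily silencing the 'use __call__ instead of pad' warning."""
    if not hasattr(tokenizer, "deprecation_warnings"):
        return tokenizer.pad(*pad_args, **pad_kwargs)

    # Save the warning state, then mark it as already emitted so `pad` stays quiet
    warning_state = tokenizer.deprecation_warnings.get("Asking-to-pad-a-fast-tokenizer", False)
    tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
    try:
        padded = tokenizer.pad(*pad_args, **pad_kwargs)
    finally:
        # Restore whether we want to warn or not
        tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = warning_state
    return padded
```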
See https://github.com/huggingface/transformers/issues/22638 | 05-24-2023 21:58:17 | 05-24-2023 21:58:17 | @sgugger<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23742). All of your documentation changes will be reflected on that endpoint.<|||||>Doing things this way could lead to weird (though not dangerous) race behaviour in threaded code.
The only way I see to not get that race behavior is to let `pad` take an extra argument to disable the warning, though I understand that API providers are very hesitant to alter APIs in any way
<|||||>The race behavior would just be that the warning would not be displayed ofc<|||||>The data collator cannot really be in threaded code. Different processes? Definitely for distributed training but then they will each have their own tokenizer. So I think it's a risk we can take.<|||||>I'm sorry @sgugger, you mean the branch name? Or the pull request name? Or the function name? I changed the name of the pull request.
Also, am I supposed to run `make quality; make style` first? & am I supposed to run some tests?<|||||>I synced the branch<|||||>The PR now deletes 26 doc files, so it's not really something we can merge :grimacing: <|||||>OK I fixed the weird document files deletion. Really not sure how that happened. Sorry about that.<|||||>Could you just run `make style` to fix the formatting issues? Thanks!<|||||>done<|||||>There is still an error. Can you make sure to do `pip install transformers["quality"] --upgrade` (to make sure you have a proper version of all the necessary libs)?<|||||>Mmm now we're back to 43 files changed :cry: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,741 | closed | asd | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-24-2023 16:57:42 | 05-24-2023 16:57:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23741). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,740 | closed | add type hint in pipeline model argument | # What does this PR do?
This PR adds type hints for model argument in pipeline function
## Who can review?
@Narsil
@amyeroberts
| 05-24-2023 16:53:18 | 05-24-2023 16:53:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think I should add PretrainedModel and TFPretrainedModel in string form just like PreTrainedTokenizerFast was given in tokenizer argument. TYPE_CHECKING is false by default<|||||>LGTM.
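A sketch of the string-annotation pattern being discussed, where the heavy classes are only imported under `TYPE_CHECKING` (signature simplified, not the exact pipeline signature):

```python
from typing import TYPE_CHECKING, Optional, Union

if TYPE_CHECKING:
    # Only evaluated by static type checkers, so there is no runtime import cost
    from transformers import PreTrainedModel, TFPreTrainedModel


def pipeline(
    task: str,
    model: Optional[Union[str, "PreTrainedModel", "TFPreTrainedModel"]] = None,
    **kwargs,
):
    ...
```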
|
transformers | 23,739 | closed | Add DINOv2 to Transformers | ### Feature request
Add DINOv2 to the transformers library.
Weights are available in [DINOv2 repo](https://github.com/facebookresearch/dinov2)
### Motivation
Currently, DINOv2 can be used through `torch.hub.load`, but having it ported to transformers directly would be nice to have, and since DINOv1 is already in the library it might not be that difficult to do
### Your contribution
I would love to do a PR to make this addition in the case that it's actually not something hard to do and someone could point me the way that I should look at. | 05-24-2023 16:09:58 | 05-24-2023 16:09:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,738 | closed | Remove the last few TF serving sigs | A couple of serving signatures arrived while the PR was open - this should remove the last of them. | 05-24-2023 16:08:35 | 05-24-2023 16:08:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,737 | closed | Revamp test selection for the example tests | # What does this PR do?
After the revamp of the test_fetcher, the test collection of the example tests wasn't working anymore: if only an example is changed, the corresponding tests are not run. This PR adapts the example tests collection to the new test fetcher and makes sure they are appropriately run. It's more fine-grained than the previous approach which ran all example tests as soon as when a diff was discovered: here the tests are run when the modifications impact the test examples.
To see this in action:
- at the [first commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65204) the diff only impacts repo utils, so only the test repo utils job is run (no example or other test jobs).
- at the [second commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65207) the diff has a change in the PyTorch `run_glue`, so the PyTorch example test job is run, but only on the trainer examples (not the no_trainer ones since they are not touched).
- at the [third commit](https://app.circleci.com/pipelines/github/huggingface/transformers/65208), we remove the fake example modification and add a fake change in the Trainer, which impacts all examples.
cc @ydshieh for when you're back from vacation. | 05-24-2023 15:36:45 | 05-24-2023 15:36:45 | |
transformers | 23,736 | closed | [Whisper] Reduce batch size in tests | # What does this PR do?
The slowest PyTorch tests as of 10th May were reported as follows:
```
68.24s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_pair_input
64.24s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_single_input
62.17s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_internal_consistency
60.57s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_add_special_tokens
55.37s call tests/models/whisper/test_modeling_whisper.py::WhisperEncoderModelTest::test_model_outputs_equivalence
52.59s call tests/models/mobilebert/test_modeling_mobilebert.py::MobileBertModelTest::test_save_load_fast_init_from_base
...
```
Source: https://huggingface.slack.com/archives/C01NE71C4F7/p1683734816738159
Taking a deeper look, we see that the first of these four tests were Whisper tokeniser tests. Running locally, the Whisper tokenisation tests take a fraction of the time reported above. For instance, I can run the **entire** Whisper tokenisation test suite in 48s, with the longest tests from the CI taking less than 10x the time locally:
```
3.43s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_pair_input
3.52s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_maximum_encoding_length_single_input
3.43s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_internal_consistency
0.34s call tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_add_special_tokens
```
Checking more recent CI runs, we see that these tokenisation tests have disappeared from the top tests by time, e.g. for the PR [#23223](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/636881279/6382baaf-cb48-4b59-ad00-ab8ab3d42763/0/~/transformers/reports/tests_torch/durations.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEOOAPIWJIW%2F20230524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230524T150859Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEMj%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIFSTbUmgA5nKTMzcsI888FWe4%2BqsFRrE42wJsJkxpXrjAiEA7NGVGvb6DVyu%2FmuTm%2BLg07L6KkB5v%2Fv4yFpxIqql4AUqtAII8P%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARADGgwwNDU0NjY4MDY1NTYiDHUW%2F9eGKhbQqo67WyqIAt%2FowvbsfkuLoidqZLHQeXrqZl55RY4AJnMGRfT0xY12jCvpjZ7F89p9cJbln9stqj6NpUSSaqzrO4L7XaHhmJ8I8ZRoUo7Uwk4J7ll3CvohzUJqVzYnEzeLvinEF0%2Bi0mx6ZT2DyP8bYjJI%2BWAu25gTByeny05l6xH9PNy7kxurax9scDnBc7Be0Gs56y74F2%2FVvrdCaxigY0wSNfgCXyasfX%2FIq87UP7y%2BVZIxDoL5zD1gbZmUulf3gL5VAcaOwOtkmhCwisjc%2BbTbUSHMTpJZ1D77U3mSmJVXUSQnzpNavl%2FVHXY3DOh45KFoQETemsTehhUlOCMyV91IrO%2BwLacXlzbLsQUMBDCo0LijBjqdAVgdasvAQc5feVYL4SuV5Re4TIrGh6cLLj689oFfoVilLj8AjcOj5GSFHUBCzH792fYCmTZIF%2B66qJ3ieBTup5C%2B1PkaoEZ3mQjAFi8fjUiDDwrFGWlCwTtOklha1HAMKPPmEIW4Q5nhp5OkvPU5CPrZcIm90VNLGCy7CunrrugV74llZpDAU5UwwkUJRYLp3aP8nifKGMDj924ubj8%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=9c473476aa6883dc476f641125a3b9ea33a8c5df489676e0ff1bab13129831b0) from the 18th May. Note that these are just generic tokenisation tests, and the Whisper tokeniser is one-to-one the same as GPT2, so we’d expect the same runtime here.
In this PR, we speed-up the Whisper modelling tests by a factor of ~4x by reducing the batch size from 13 -> 2, which should address the slow modelling tests. We'll monitor the Whisper tokenisation tests to see if they keep cropping up as the slowest PyTorch tests in the future and amend as necessary. | 05-24-2023 15:19:41 | 05-24-2023 15:19:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,735 | closed | fix: delete duplicate sentences in `document_question_answering.mdx` | # What does this PR do?
There are duplicate sentences in `document_question_answering.mdx` from line number 40 to 45, so delete the sentences from line number 43 to 45.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #23729
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-24-2023 14:55:28 | 05-24-2023 14:55:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,734 | closed | Is it possible to use the transformers library with models, e.g. t5-small, commercially? | Hi!
I would like to use the transformers library with models, e.g. t5-small, commercially. I checked the license of the transformers library is Apache 2.0. And the license of the model, e.g. t5-small (https://huggingface.co/t5-small), is also Apache 2.0. So, can I use the library with such models, commercially? | 05-24-2023 14:25:57 | 05-24-2023 14:25:57 | Questions like this are better suited on the [forums](https://discuss.huggingface.co/) as we keep issues for feature requests and bugs only. The Transformers library is Apache 2.0 so there is no problem using it for commercial use. Then it's up to the model you are using. As you noted `t5-small` should be fine.<|||||>Thanks for the quick reply.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,733 | closed | High memory usage for BigBirdForPreTraining | ### System Info
DGX-A100
transformers 4.29.2
### Who can help?
@ArthurZucker
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run BigBirdForPreTraining with the BigBirdConfig left at default (except vocab size set to 32k). Essentially I recreated pretraining from section F.1 of the original publication https://arxiv.org/pdf/2007.14062.pdf
### Expected behavior
Should be able to fit a batch size of 4 in ~16 GB according to the paper but in reality a batch size of 4 exceeds 40GB on the forward call. | 05-24-2023 14:15:28 | 05-24-2023 14:15:28 | Sorry, but this is lacking a lot of information, especially a reproducing script. Lots of things can impact the RAM taken by the model, this should be asked on the [forum](https://discuss.huggingface.co/top?period=weekly), not here. <|||||>I added a reproducing script here:
https://github.com/kuben-joz/bigbird-example/tree/master
I thought it would be better to make a submission here rather than the forums as it concerns the implementation details of the model. If nevertheless you prefer for this to be moved to the forum I can do so.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,732 | closed | TF SAM memory reduction | Extremely small PR to use smaller dummy inputs to build SAM, which might help with memory issues on smaller devices. | 05-24-2023 14:09:37 | 05-24-2023 14:09:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,731 | closed | MBART/MBART-50 PreTraining ? | I am trying to pre-train a MBART-50 model from scratch(using the IndicCorpus dataset), similar to the one given in this blog post(https://huggingface.co/blog/how-to-train). There seems to be no issue when I do for RoBERTa and BART, but for MBART and MBART-50 architecture, the model loading fails, and I get the following error message :
`OSError: Can't load tokenizer for '/path/to/mbart-model. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './assamese_MBART' is the correct path to a directory containing all relevant files for a MBartTokenizerFast tokenizer.`
I wanted to ask, whether it is an issue with the way I have implemented the tokenizer(similar to the one given the blog post(without adding language codes and special tokens)) ; or the way the data is there(Line by Line in the language) or it is something else altogether.
If there is any other clarification from my side, I'm more than happy to justify. @patrickvonplaten | 05-24-2023 08:51:44 | 05-24-2023 08:51:44 | cc @ArthurZucker <|||||>Could you share a minimal reproducing script to isolate the bug that you are getting?
Seems like the path that is trying to be reached is `./assamese_MBART`, check that the local file has a tokenizer, maybe use `use_local` only.<|||||>Hi !
Similar to the one given in the blog post, I started with something like :
```
from tokenizers import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
tokenizer = ByteLevelBPETokenizer(
"/content/assamese_BART/vocab.json",
"/content/assamese_BART/merges.txt",)
tokenizer._tokenizer.post_processor = BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=512)
```
This seems to run fine, but here :
```
config = MBartConfig( vocab_size=50000,max_position_embeddings=32,num_attention_heads=2,
num_hidden_layers=1,type_vocab_size=1,
)
tokenizer = MBart50TokenizerFast.from_pretrained("./assamese_BART", max_len=64)
model = AutoModelForMaskedLM.from_config(config=config)
print(model.num_parameters())
```
In the `MBart50TokenizerFast.from_pretrained` line, it is showing that it doesn't recognize the path, but I used the same path to train two other models(RoBERTa and BART), and this exception was not raised.
Also, the local file has the tokenizer(`vocab.json` and `merges.txt`) in the same folder `./assamese_MBART`
I'll try to use the `use_local` argument and let you know shortly..<|||||>The`use_local` argument didn't work either. I reckon the issue is due to the `BertProcessing`, which worked for BART and RoBERTa, but not for MBart50.
I look forward to your answer. <|||||>The blogpost is 3 years old, so very much outdated. If you look at the script you gave me, you are never saving the tokenizer to the `./assamese_BART` repo. <|||||>I agree that the blog post is a little to old, but I couldn't find any relevant articles anywhere.
> The blogpost is 3 years old, so very much outdated. If you look at the script you gave me, you are never saving the tokenizer to the `./assamese_BART` repo.
I did save it, just that I forgot to include it in the code. The only problem is that I can't figure out why is it working for RoBERTa and BART, but not MBart50, as I need to implement that only 😿.
Also, the log produced :
` ... Otherwise, make sure './assamese_BART' is the correct path to a directory containing all relevant files for a MBart50TokenizerFast tokenizer`
Apart from `vocab.json` and `merges.txt`, does MBart50Tokenizer need some other file due to which it is not reading it?
If you could point me/tell me how I can do it would be really really helpful 🙏🏻 <|||||>No worries. Could you just give me a full reproducing script of what your are doing with MBart50,that way I can check if we indeed have aproblem with the blogpost and it might have to be update! <|||||>I somehow managed to make it work by using `SentencePiece` tokenizer instead of the one given in the script. It now reads the spm file and works when it reads the `spm.model` file. This is what I did :
```
import sentencepiece as spm
spm.SentencePieceTrainer.Train(
input='/content/as_mod.txt',
model_prefix='spm',
vocab_size=1000,
pad_piece='<pad>',
bos_piece='<s>',
eos_piece='</s>',
user_defined_symbols='<mask>',
model_type='unigram'
)
sp = spm.SentencePieceProcessor()
sp.Load('spm.model')
```
Now, instead of the `from_pretrained` method, I directly use the `spm.model` file. Code for that :
```
config = MBartConfig(
vocab_size=1000,
max_position_embeddings=32,
num_attention_heads=2,
num_hidden_layers=1,
type_vocab_size=1,
)
# TODO: I believe, due to the BERTPreProcessor, the MBART doesn't seem to recognizes it.
tokenizer = MBartTokenizerFast(vocab_file='/content/spm.model')
model = AutoModelForMaskedLM.from_config(config=config)
```
This seems to work, and it can read the model. However, now when I try to train the model, it gives an error which I haven't even seen in this context. First, the `Trainer` and `TrainerArguments` code :
```
training_args = TrainingArguments(
output_dir='/content/assamese_BART',
num_train_epochs=1,
per_device_train_batch_size=32,
save_steps=5000,
save_total_limit=1,
prediction_loss_only=True
)
# Set the trainer.
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset
)
```
And on running `trainer.train()`, I come across this scary error :
```
/usr/local/lib/python3.10/dist-packages/transformers/models/mbart/modeling_mbart.py in shift_tokens_right(input_ids, pad_token_id)
73
74 index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
---> 75 decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()
76 prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone()
77 prev_output_tokens[:, 0] = decoder_start_tokens
RuntimeError: index -1 is out of bounds for dimension 1 with size 16
```
I can't seem to find anything on this anywhere. Can I get some help in this regard? This is when I didn't use GPU from colab environment, but when I used GPU the error was :
```
RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect
```<|||||>This just means that the tokens output by the tokenizer are not in the correct format. I recommend using `tokenizers` and the `tokenizers` from transformers, otherwise you are mixing two libraries, which are not made to work together.<|||||>Ok. Will do 👍🏻 <|||||>> No worries. Could you just give me a full reproducing script of what you are doing with MBart50, that way I can check if we indeed have a problem with the blogpost and it might have to be updated!
I'll link the Colab notebook here for your reference, if there is anything missing just tell :
https://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing<|||||>Thanks for the collab. I can't help you if you keep using the spm model raw. I can help you regarding the original issue, that is if the tokenizer you are using is from `tokenizers` or `transformers`! 😉 <|||||>Yeah, my bad. I accidentally gave gave you the different version of the same repo. Just wait a minute ...<|||||>> Thanks for the collab. I can't help you if you keep using the spm model raw. I can help you regarding the original issue, that is if the tokenizer you are using is from `tokenizers` or `transformers`! wink
https://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing
Check (I think it's the same link, but the changes are made)<|||||>Notebook doesn't work for me! Could you check? (Also I think you gave me writing rights 😅 so I might have changed things, don't give them to me, I'll copy your notebook)<|||||>Sorry for the late reply...
> Notebook doesn't work for me! Could you check? (ALso I think you gave me writing rights sweat_smile so I might have changed things, don't give them to me I'll copy your notebook)
What exactly doesn't work? Can you access the data ?
Changed the settings to view only.
https://colab.research.google.com/drive/1cNfQn9nPITpwCS4i8P8loeN2SjLON1Tx?usp=sharing
I think it should work just fine...<|||||>Again, the notebook does not work out of the box. Try to open a private tab and run it without the cached inputs. Anyway after debugging, it's just that you are trying to convert a BertTokenizer (which is a `simple` basic tokenizer) to a MBart50 tokenizer, which is based on `sentencepiece`. This is impossible and this is also the reason why it is failing : the `.json` file is ignored because it is looking for a `.bpe` file. <|||||>Ok , I think I understand the problem. Thanks a lot for being patient with me, as this was my first issue. I tried to do my best here. |
transformers | 23,730 | closed | LongformerEmbeddings "position_embedding_type" parameter is not used. | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In transformers.models.longformer.modeling_longformer.py file:
```python
class LongformerEmbeddings(nn.Module):
"""
Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
"""
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
self.padding_idx = config.pad_token_id
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
)
```
"position_embedding_type" are not work. By the way, self.position_embeddings has redundant initialization
### Expected behavior
Add support for “position_embedding_type” | 05-24-2023 07:32:07 | 05-24-2023 07:32:07 | Hey! Thanks for reporting this. I'm opening a PR to remove the unused parts, however I don't think it has to support the `position_embedding_type` as the model did not use it. |
transformers | 23,729 | closed | [docs] duplicate sentences in `document_question_answering.mdx` | ### Description
There are duplicate sentences in `document_question_answering.mdx` from line number 40 to 45.
### Document / Language
`document_question_answering.mdx` / [en](https://huggingface.co/docs/transformers/tasks/document_question_answering)
### Suggestion

should be either:
<table>
<tr>
<td> candidate 1 </td> <td> candidate 2 </td>
</tr>
<tr>
<td>
(...), to predict the positions of the start and end tokens of the answer. (...)
</td>
<td>
(...), in order to predict which token is at the start of the answer and which token is at the end of the answer. (...)
</td>
</tr>
</table>
Please let me know which of the two candidates you would prefer.
| 05-24-2023 06:56:17 | 05-24-2023 06:56:17 | I'd prefer candidate 1 personally. Thanks for reporting and looking forward to your PR with a fix!<|||||>Thanks for your feedback and I agree that candidate 1 is better. I opened PR #23735. |
transformers | 23,728 | closed | RuntimeError: Expected to mark a variable ready only once. | ```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if
you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready
multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 129 with name base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter
```
After upgrading transformers, training LoRA models gives the above error.
| 05-24-2023 06:36:54 | 05-24-2023 06:36:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Has this question been solved?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,727 | closed | Cannot use the LION optimizer | ### Feature request
Error:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if
you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready
multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 129 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to
either INFO or DETAIL to print parameter names for further debugging.
0%|
```
Is there any example code for using the LION optimizer?
I am using the code from here https://github.com/lucidrains/lion-pytorch/blob/main/lion_pytorch/lion_pytorch.py but got the error above.
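For reference, a minimal sketch of how a custom optimizer such as Lion can be handed to `Trainer` through its `optimizers` argument (this assumes the `lion-pytorch` package linked above; the model name and `train_dataset` are placeholders, not my actual setup):

```python
from lion_pytorch import Lion
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)

training_args = TrainingArguments(output_dir="lion-test", per_device_train_batch_size=2)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: any tokenized dataset
    optimizers=(optimizer, None),  # (optimizer, lr_scheduler); None lets Trainer build the default scheduler
)
trainer.train()
```

Whether this combination plays nicely with DDP / gradient checkpointing is exactly what the error above seems to be about, so it is only a starting point, not a confirmed fix.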
### Motivation
LION needs to be supported since it converges faster.
### Your contribution
no | 05-24-2023 06:03:40 | 05-24-2023 06:03:40 | We can't do anything to help without knowing the code that triggered the error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,726 | closed | ValueError: Attempting to unscale FP16 gradients. | ### System Info
V100, torch 2.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I train an LLM with this
```
# data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer)
data_collator = DataCollatorForSeq2Seq(
tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
)
```
it throws the above error, but when I use the self-defined one below, it works OK. Why?
```
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
input_ids, labels = tuple(
[instance[key] for instance in instances] for key in ("input_ids", "labels")
)
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(
labels, batch_first=True, padding_value=-100
)
return dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
)
```
the only difference here is the data collator
### Expected behavior
should be trainable with fp16 | 05-24-2023 03:24:35 | 05-24-2023 03:24:35 | 
transformers | 23,725 | closed | Fix the regex in `get_imports` to support multiline try blocks and excepts with specific exception types | # What does this PR do?
Fixes the regex in `get_imports` to support multiline try blocks and excepts with specific exception types, by
1. adding `re.DOTALL` so that new lines are matched in the try block
2. adding `.*?` after the except so that it will match things like `except ImportError`
Fixes #23667
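As a standalone illustration of why both changes are needed (this is a simplified pattern for demonstration, not the exact regex used in `get_imports`):

```python
import re

content = '''
try:
    import optional_dep
    from optional_dep import something
except ImportError:
    optional_dep = None
'''

# `.*?` after "except" lets the pattern match "except ImportError" (or any specific exception type),
# and re.DOTALL lets `.*?` span the newlines of a multiline try block.
pattern = r"\s*try\s*:.*?except.*?:"
print(re.sub(pattern, "", content, flags=re.DOTALL))
# Only "optional_dep = None" survives, so imports guarded by try/except are no longer picked up.
```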
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
| 05-23-2023 22:30:45 | 05-23-2023 22:30:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Done! |
transformers | 23,724 | closed | fix: Whisper generate, move text_prompt_ids trim up for max_new_tokens calculation | # What does this PR do?
Fixes #23723
Moves trimming the length of the text_prompt_ids further up so it is performed before the calculation determining the new `max_new_tokens` [here](https://github.com/huggingface/transformers/blob/003a0cf8cc4d78e47ef9debfb1e93a5c1197ca9a/src/transformers/models/whisper/modeling_whisper.py#L1645-L1648). As mentioned in the issue, this previously led to two issues with prompting: under certain circumstances `generate` could throw a nebulous error, and `max_new_tokens` was not properly enforced when a prompt longer than the context + `max_new_tokens` was provided.
Happy to add a test for either bug if wanted.
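Schematically, the reordering looks like this (pseudo-code with made-up names and a made-up trimming rule, not the actual `modeling_whisper.py` implementation):

```python
# Pseudo-code sketch only — variable names and the exact trimming rule are assumptions.
def plan_generation(prompt_ids, prompt_limit, max_new_tokens):
    trimmed_prompt = prompt_ids[-prompt_limit:]        # 1) trim the prompt to the allowed context first
    max_length = len(trimmed_prompt) + max_new_tokens  # 2) only then derive the generation budget
    return trimmed_prompt, max_length
```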
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@hollance @sanchit-gandhi | 05-23-2023 22:18:11 | 05-23-2023 22:18:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 23,723 | closed | Two bugs in whisper generate with `prompt_ids` regarding generation length | ### System Info
- `transformers` version: 4.30.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
# -*- coding: utf-8 -*-
# the above line is for the `prompt_for_error`
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny", language="English", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
it = iter(load_dataset("librispeech_asr", "all", split="test.other", streaming=True))
while it:
_ = [next(it) for x in range(3)]
clip = next(it)
if clip["id"] == '7902-96592-0026':
break
input_features = processor(clip['audio']['array'], sampling_rate=clip['audio']['sampling_rate'], return_tensors="pt").input_features
# Example of it not limiting generation to max_new_tokens when prompt_ids length too large
long_prompt = 5 * "Bubalina is a subtribe of wild cattle that includes the various species of true buffalo. Species include the African buffalo, the anoas, and the wild water buffalo (including the domesticated variant water buffalo. Buffaloes can be found naturally in sub-Saharan Africa, South Asia and Southeast Asia, and domestic and feral populations have been introduced to Europe, the Americas, and Australia. In addition to the living species, bubalinans have an extensive fossil record where remains have been found in much of Afro-Eurasia."
prompt_ids = processor.get_prompt_ids(long_prompt)
pred_ids = model.generate(input_features, language="english", task="transcribe", max_new_tokens=10, prompt_ids=prompt_ids)
decoded = processor.decode(pred_ids[0], skip_special_tokens=True)
new_tokens = processor.tokenizer(decoded, add_special_tokens=False)["input_ids"]
print(len(new_tokens)) # should be <=10, is actually 25
# Example of erroring
prompt_for_error = "some text rich in domain specific vocabulary lives here - I wish you would believe me that I am in as great trouble about it as you are - then as archiestered in the dark literally a gas for the astonishment here at the faint and wrestling once more and again all with silent - I'll soon show them that I am not going to be played with - to do this he must scheme lie head till morning then make for the nearest point it's signal for help I also boats crew were already searching for him how to escape - no that was too bad you cannot do that - but there was no chance for his body there the head would not go first - shall I come to father? no - what a queer dream he thought to himself - and I am hungry too 今晚會是我 再回家吧 - oh those bars he meant 雷 exclaimed and he was advancing towards them, and just as he drew near there was a wrestling noise nd to the window a couple of hands seized the bars there was a scratching of 布側 against stonework and ram スペース 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 敬射的 金融 � - I saw you last night and wondered whose boy he was - I think I don't know you Mr. Orphazard "
prompt_ids = processor.get_prompt_ids(prompt_for_error)
pred_ids = model.generate(input_features, language="english", task="transcribe", max_new_tokens=128, prompt_ids=prompt_ids)
```
### Expected behavior
Two issues arising when using whisper generate with `prompt_ids`:
1. `max_new_tokens` doesn't properly limit the generation of new tokens when the length of the provided `prompt_ids` is too large
2. An unclear error is thrown with certain long prompt + audio combinations, less clear on this one right now (thank you @dgram0 for raising this in https://github.com/huggingface/transformers/pull/22496#issuecomment-1559317037)
I believe they have the same root cause where if `prompt_ids` are provided, the max_new_tokens is recalculated using the length of the `text_prompt_ids` but before they are trimmed to fit within the context. I'm not certain yet how 2. is caused / fixed by this, but I think its because with a confusing prompt + audio combo the model doesn't know when to stop and needs `max_new_tokens` to be set properly, otherwise it'll index error. I can confirm that fixing the max_new_tokens recalculation fixes both issues in the example script. | 05-23-2023 22:03:29 | 05-23-2023 22:03:29 | Thanks for the detailed write-up and reproducible code snippet @connor-henderson! Cool that you've found a fix to both already 🙌 By the sounds of it, I agree that the PR should fix both issues by bumping the token slicing logic to before the change of max new tokens |
transformers | 23,712 | closed | Trainer.repo.push_to_hub returns None, causing raised exception | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For some root cause that I'm not certain of, `Trainer.repo.push_to_hub` can return `None`, which causes `Trainer._push_from_checkpoint` to raise an exception (as it expects a tuple to be returned).
```
Traceback (most recent call last):
File "F:\eo-reco\run_speech_recognition_ctc.py", line 810, in <module>
main()
File "F:\eo-reco\run_speech_recognition_ctc.py", line 756, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 1664, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2019, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2308, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 2462, in _save_checkpoint
self._push_from_checkpoint(output_dir)
File "F:\eo-reco\.env\Lib\site-packages\transformers\trainer.py", line 3649, in _push_from_checkpoint
_, self.push_in_progress = self.repo.push_to_hub(
^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot unpack non-iterable NoneType object
```
(Note: line numbers in `run_speech_recognition_ctc.py` will not be accurate, as I've copied it and modified it)
`repo.push_to_hub` can return `None` if the repo is clean, which will cause the issue. However, that might not have happened in my case, since there was no corresponding log message about it (assuming log messages are emitted immediately and not buffered).
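For context, this is roughly the shape of the defensive check I would expect (a sketch only, not the actual `Trainer` code — `Repository.push_to_hub` returns `None` when there is nothing to push):

```python
# Sketch of a None-tolerant unpacking; the names mirror the traceback above.
def push_from_checkpoint(repo, commit_message="Training in progress"):
    result = repo.push_to_hub(commit_message=commit_message, blocking=False)
    if result is None:  # clean repo -> nothing was pushed
        return None
    _, push_in_progress = result
    return push_in_progress
```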
### Expected behavior
No exception, maybe just a warning. | 05-23-2023 18:07:11 | 05-23-2023 18:07:11 | cc @Wauplin can we have a consistent return type? That would solve this issue.<|||||>Hmm, what do you mean by _a consistent return type_ ? If nothing is pushed, we can't really return a CommandInProgress object. In general I would prefer not to touch the return type of a method that seems to have been around for 2 years and that might be integrated in a lot of scripts already.
(+ I expect the usage of `Repository` to slowly disappear once we switch to `upload_folder`)<|||||>I mean always a tuple so we don't have to make weird workarounds. But I will do the weird workaround in Transformers to fix this then. |
transformers | 23,701 | closed | Bug fix - flip_channel_order for channels first images | # What does this PR do?
The `flip_channel_order` function for flipping the channel order of images (BGR -> RGB) had a bug when the input image was in channels-first order.
Previously, the rows of pixels would be flipped, rather than the channel order, i.e. the code did `image = image[:, ::-1, ...]` instead of `image = image[::-1, ...]`.
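A quick NumPy check makes the difference obvious (independent of the library code):

```python
import numpy as np

image = np.arange(3 * 2 * 2).reshape(3, 2, 2)  # channels-first: (num_channels, height, width)

rows_flipped = image[:, ::-1, ...]   # the buggy slice: flips the rows within each channel
channels_flipped = image[::-1, ...]  # the intended slice: reverses the channel order (BGR <-> RGB)

assert np.array_equal(channels_flipped[0], image[2])    # channel order reversed
assert np.array_equal(rows_flipped[0], image[0][::-1])  # rows reversed, channels untouched
```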
For the current image processors and pipelines, this path would only be triggered if `do_resize` was overridden and set to `False`. The method is used in 3 model's image processing files:
* LayoutLMv2. If `do_resize=False`, no batches could be prepared, as the images would be of different sizes and the processor does no additional cropping or padding.
* LayoutLMV3 (just imported - wasn't used by the image processor)
* MobileViT
This PR:
* Moves the common logic into the image transforms library
* Resolves the bug
* Adds tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 05-23-2023 16:22:06 | 05-23-2023 16:22:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,700 | closed | Help with TrOCR training for Spanish | ### System Info
I'm using Torch 2.0 and Transformers 4.28.0 running in a Google Colab
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi
Thanks @NielsRogge for your incredible work in Transformers.
I'm working in develop a handwritten system recognition in spanish, so i choose TrOCR for the traduction from handwrite to text in spanish.
I think I followed yours Notebooks examples to fine tune and inference with TrOCR and many post with people with the same problem when we need to train TrOCR in a diferent language, spanish for me.
The code to create dataset is:
```python
class SpanishDataset(Dataset):
    def __init__(self, root_dir, df, processor, max_target_length=128):
        self.root_dir = root_dir
        self.df = df
        self.processor = processor
        self.max_target_length = max_target_length

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # get file name + text
        file_name = self.df['Path'][idx]
        text = self.df['Text'][idx]
        # prepare image (i.e. resize + normalize)
        image = Image.open(self.root_dir + file_name).convert("RGB")
        pixel_values = self.processor(image, return_tensors="pt").pixel_values
        # add labels (input_ids) by encoding the text
        labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids
        # important: make sure that PAD tokens are ignored by the loss function
        labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels]
        encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)}
        return encoding
```
and
```
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
decoder_tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-roberta-base-spanish")
processor =TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=decoder_tokenizer)
processor.save_pretrained('./processor')
processor = TrOCRProcessor.from_pretrained("./processor")
train_dataset = SpanishDataset(root_dir='/ShardDrives/MyDrive',
df=train_df,
processor=processor)
eval_dataset = SpanishDataset(root_dir='/ShardDrives/MyDrive',
df=test_df,
processor=processor)
```
I use this encoder and decoder:
```
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k", "bertin-project/bertin-roberta-base-spanish")
# set decoder config to causal lm
model.config.decoder.is_decoder = True
model.config.decoder.add_cross_attention = True
# set special tokens used for creating the decoder_input_ids from the labels
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
# make sure vocab size is set correctly
model.config.vocab_size = model.config.decoder.vocab_size
# set beam search parameters
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
# ensure that randomly initialized cross-attention layers are added
assert model.config.decoder.is_decoder is True
assert model.config.decoder.add_cross_attention is True
```
I'm using cer as metric
```
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id
label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)
cer = cer_metric.compute(predictions=pred_str, references=label_str)
return {"cer": cer}
```
and the training code is:
```
training_args = Seq2SeqTrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-4,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=100,
output_dir="./",
predict_with_generate=True,
)
# Training
trainer = Seq2SeqTrainer(
model=model,
tokenizer=processor.feature_extractor,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=default_data_collator
)
trainer.train()
```
But the Cer is bad (> 0.9)
This is an output for a dataset element
```
train_dataset[0]
{'pixel_values': tensor([[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]],
[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]],
[[ 0.9922, 1.0000, 1.0000, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
[ 0.9843, 0.9922, 0.9922, ..., -1.0000, -1.0000, -1.0000],
...,
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000],
[-1.0000, -1.0000, -1.0000, ..., -1.0000, -1.0000, -1.0000]]]),
'labels': tensor([ 0, 1323, 344, 2858, 11966, 66, 11507, 3298, 344, 14783,
66, 2, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100])}
```
I have an error somewhere, but I don't know where, so any help or advice is welcome.
Thanks very much for everything.
### Expected behavior
I expect to train a handwrite recognition system for spanish using TrOCR | 05-23-2023 16:18:15 | 05-23-2023 16:18:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,699 | closed | Skip `TFCvtModelTest::test_keras_fit_mixed_precision` for now | # What does this PR do?
#23339 uses TF 2.12 with CUDA 11.8 (and CUDNN 8700). With those, the test
```bash
tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_keras_fit_mixed_precision
```
gets
```bash
tensorflow.python.framework.errors_impl.UnknownError: Failed to determine best cudnn convolution algorithm for:
```
and affect other two tests
```bash
FAILED tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_pt_tf_model_equivalence - AssertionError: 0.007014513 not less than or equal to 1e-05 : outputs.last_hidden_state: Difference between torch and tf is 0.00701451301574707 (>= 1e-05).
FAILED tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelIntegrationTest::test_inference_image_classification_head - AssertionError: False is not true
```
**Those 2 tests will pass if `test_keras_fit_mixed_precision` is not run in the same pytest process.** (Probably the GPU/CUDA/CUDNN is left in a bad state.)
We will have to take a look and fix `test_keras_fit_mixed_precision`. But in the meantime, **let's skip it so it does not affect the other 2 tests.**
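The skip itself is just the standard `unittest` mechanism, along these lines (a sketch, not the exact test file):

```python
import unittest

class TFCvtModelTest(unittest.TestCase):
    @unittest.skip(reason="TF 2.12 + CUDA 11.8: cuDNN autotuning fails and leaves the GPU in a bad state")
    def test_keras_fit_mixed_precision(self):
        ...
```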
@Rocketknight1 If you ever want to take a look at this CUDA/CUDNN/TF issue. (Maybe better to open an issue in the TF repo, but it may take 10 years to get a fix.) | 05-23-2023 16:07:01 | 05-23-2023 16:07:01 | Here is the full error when running
```bash
python3 -m pytest -v tests/models/cvt/test_modeling_tf_cvt.py::TFCvtModelTest::test_keras_fit_mixed_precision
```
## Error
```bash
self = <tests.models.cvt.test_modeling_tf_cvt.TFCvtModelTest testMethod=test_keras_fit_mixed_precision>
def test_keras_fit_mixed_precision(self):
policy = tf.keras.mixed_precision.Policy("mixed_float16")
tf.keras.mixed_precision.set_global_policy(policy)
> super().test_keras_fit()
tests/models/cvt/test_modeling_tf_cvt.py:192:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_tf_common.py:1585: in test_keras_fit
history1 = model.fit(
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler
raise e.with_traceback(filtered_tb) from None
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <transformers.models.cvt.modeling_tf_cvt.TFCvtForImageClassification object at 0x7f4c0b7f10a0>
data = {'labels': <tf.Tensor: shape=(13,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, '...,
[0.38049886, 0.97876924, 0.96599656, ..., 0.5474588 ,
0.8447144 , 0.1995452 ]]]], dtype=float32)>}
def train_step(self, data):
"""
A modification of Keras's default `train_step` that correctly handles matching outputs to labels for our models
and supports directly training on the loss output head. In addition, it ensures input keys are copied to the
labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure
that they are available to the model during the forward pass.
"""
# We hardcode the most common renamings; models with weirder names can set `self._label_to_output_map`
arg_names = list(dict(inspect.signature(self.call).parameters).keys())
label_kwargs = find_labels(self.__class__)
label_to_output = self.get_label_to_output_name_mapping()
output_to_label = {val: key for key, val in label_to_output.items()}
if not self._using_dummy_loss and parse(tf.__version__) < parse("2.11.0"):
# Newer TF train steps leave this out
data = data_adapter.expand_1d(data)
x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
# If the inputs are mutable dictionaries, make a shallow copy of them because we will modify
# them during input/label pre-processing. This avoids surprising the user by wrecking their data.
# In addition, modifying mutable Python inputs makes XLA compilation impossible.
if isinstance(x, dict):
x = x.copy()
if isinstance(y, dict):
y = y.copy()
# When using a dummy loss, we ensure that separate labels are copied to the correct model arguments,
# if those keys are not already present in the input dict
if self._using_dummy_loss and y is not None:
# If y is a tensor and the model only has one label-like input, map y to that input
if len(label_kwargs) == 1 and isinstance(y, tf.Tensor):
if isinstance(x, tf.Tensor):
x = {arg_names[0]: x}
label_kwarg = next(iter(label_kwargs))
if label_kwarg not in x:
x[label_kwarg] = y
# Otherwise, copy keys from y to x as long as they weren't already present in x
elif isinstance(y, dict):
if isinstance(x, tf.Tensor):
x = {arg_names[0]: x}
for key, val in y.items():
if key in arg_names and key not in x:
x[key] = val
elif output_to_label.get(key, None) in arg_names and key not in x:
x[output_to_label[key]] = val
if y is None:
y = {key: val for key, val in x.items() if key in label_kwargs}
if not y and not self._using_dummy_loss:
raise ValueError("Could not find label column(s) in input dict and no separate labels were provided!")
if isinstance(y, dict):
# Rename labels at this point to match output heads
y = {label_to_output.get(key, key): val for key, val in y.items()}
# Run forward pass.
with tf.GradientTape() as tape:
if self._using_dummy_loss and "return_loss" in arg_names:
y_pred = self(x, training=True, return_loss=True)
else:
y_pred = self(x, training=True)
if self._using_dummy_loss:
loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses)
else:
loss = None
# This next block matches outputs to label keys. Tensorflow's standard method for doing this
# can get very confused if any of the keys contain nested values (e.g. lists/tuples of Tensors)
if isinstance(y, dict) and len(y) == 1:
if list(y.keys())[0] in y_pred.keys():
y_pred = y_pred[list(y.keys())[0]]
elif list(y_pred.keys())[0] == "loss":
y_pred = y_pred[1]
else:
y_pred = y_pred[0]
_, y = y.popitem()
elif isinstance(y, dict):
# If the labels are a dict, match keys from the output by name
y_pred = {key: val for key, val in y_pred.items() if key in y}
elif isinstance(y, tuple) or isinstance(y, list):
# If the labels are a tuple/list, match keys to the output by order, skipping the loss.
if list(y_pred.keys())[0] == "loss":
y_pred = y_pred.to_tuple()[1:]
else:
y_pred = y_pred.to_tuple()
y_pred = y_pred[: len(y)] # Remove unused fields in case those cause problems
else:
# If the labels are a single tensor, match them to the first non-loss tensor in the output
if list(y_pred.keys())[0] == "loss":
y_pred = y_pred[1]
else:
y_pred = y_pred[0]
if loss is None:
loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
# Run backwards pass.
> self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
E tensorflow.python.framework.errors_impl.UnknownError: Failed to determine best cudnn convolution algorithm for:
E %cudnn-conv.6 = (f16[1,3,3,96]{3,2,1,0}, u8[0]{0}) custom-call(f16[1,5,5,1248]{3,2,1,0} %bitcast.46, f16[96,3,3,16]{3,2,1,0} %transpose.3), window={size=3x3}, dim_labels=b01f_o01i->b01f, feature_group_count=96, custom_call_target="__cudnn$convForward", metadata={op_type="Conv2DBackpropFilter" op_name="gradients/Conv2D_grad/Conv2DBackpropFilter" source_file="/usr/local/lib/python3.8/dist-packages/keras/layers/convolutional/base_conv.py" source_line=286}, backend_config="{\"conv_result_scale\":1,\"activation_mode\":\"0\",\"side_input_scale\":0}"
E
E Original error: UNKNOWN: CUDNN_STATUS_BAD_PARAM
E in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(3588): 'op' CUDNN_BACKEND_OPERATION: cudnnFinalize Failed
E
E To ignore this failure and try to use a fallback algorithm (which may have suboptimal performance), use XLA_FLAGS=--xla_gpu_strict_conv_algorithm_picker=false. Please also file a bug for the root cause of failing autotuning. [Op:__inference___backward__jit_compiled_convolution_op_11189_11200]
src/transformers/modeling_tf_utils.py:1611: UnknownError
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ugh, this seems like a nightmare upstream issue alright - I think it's the best use of our time to just leave it unless it starts affecting multiple models, or if we start seeing it outside of the mixed_precision code paths.<|||||>Actually, it is mixed precision with training. I didn't see any other TF model having this test. Good for me to ignore it! |
transformers | 23,698 | closed | [`Blip`] Fix blip doctest | # What does this PR do?
Fixes the current Blip Failing doctest:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForQuestionAnswering
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# training
text = "How many cats are in the picture?"
label = "2"
inputs = processor(images=image, text=text, return_tensors="pt")
labels = processor(text=label, return_tensors="pt").input_ids
inputs["labels"] = labels
outputs = model(**inputs)
loss = outputs.loss
loss.backward()
```
In https://github.com/huggingface/transformers/pull/23153 I removed the redundant token shifting, but I also removed the assignment of `decoder_input_ids` when they are `None`, which is needed for training.
This PR also applies the same fix to TF Blip.
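Conceptually, the restored behaviour is the usual seq2seq fallback: when only `labels` are provided for training, `decoder_input_ids` are built by shifting the labels one position to the right. A rough sketch of that pattern (not the exact Blip code):

```python
import torch

def shift_tokens_right(labels: torch.Tensor, decoder_start_token_id: int, pad_token_id: int) -> torch.Tensor:
    # Rough sketch of the common "shift labels right" fallback used when decoder_input_ids is None.
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)  # -100 only makes sense for the loss, not as input ids
    return shifted
```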
cc @sgugger @ydshieh | 05-23-2023 15:57:44 | 05-23-2023 15:57:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,697 | closed | Graphormer multi label classification label input format | ### System Info
NA
### Who can help?
@clefourrier
Kindly share the input format for multi-label classification, especially on the label side.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
NA | 05-23-2023 15:56:25 | 05-23-2023 15:56:25 | Hi!
It's basically a list of ints.
You can see an example of a graph with multiple labels with the [ogbg-molpcba dataset](https://huggingface.co/datasets/OGB/ogbg-molpcba). There is a detailed explanation of the types needed as inputs of graph classification in the [blog post on graph classification](https://huggingface.co/blog/graphml-classification) using transformers.
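For illustration only, a single multi-task example looks roughly like this (values are made up, and the node/edge feature columns are omitted):

```python
# Hypothetical record, for illustration only.
example = {
    "num_nodes": 3,
    "edge_index": [[0, 1, 2], [1, 2, 0]],  # [source nodes, target nodes]
    "labels": [1, 0, 0, 1, 0],             # multi-task target: one value per task
}
```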
Can you please tell me what added information you need? <|||||>> Hi!
>
> It's basically a list of ints.
>
> You can see an example of a graph with multiple labels with the [ogbg-molcpba dataset](https://huggingface.co/datasets/OGB/ogbg-molpcba). There is a detailed explanation of the types needed as inputs of graph classification in the [blog post on graph classification](https://huggingface.co/blog/graphml-classification) using transformers.
>
> Can you please tell me what added information you need?
While trying to train, I'm getting the error message of
TypeError: _stack_dispatcher() got an unexpected keyword argument 'dim'.
At the same time, it works for regression, binary classification, and multi-class classification use cases.<|||||>Hi!
Could you provide your full stack trace please?<|||||>> st of ints.
Please find below stack trace:
/usr/local/lib/python3.10/dist-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1664 in train │
│ │
│ 1661 │ │ inner_training_loop = find_executable_batch_size( │
│ 1662 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1663 │ │ ) │
│ ❱ 1664 │ │ return inner_training_loop( │
│ 1665 │ │ │ args=args, │
│ 1666 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1667 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1909 in _inner_training_loop │
│ │
│ 1906 │ │ │ │ rng_to_sync = True │
│ 1907 │ │ │ │
│ 1908 │ │ │ step = -1 │
│ ❱ 1909 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1910 │ │ │ │ total_batched_samples += 1 │
│ 1911 │ │ │ │ if rng_to_sync: │
│ 1912 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory: │
│ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │
│ 680 │ │ return data │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:54 in fetch │
│ │
│ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │
│ 52 │ │ else: │
│ 53 │ │ │ data = self.dataset[possibly_batched_index] │
│ ❱ 54 │ │ return self.collate_fn(data) │
│ 55 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/models/graphormer/collating_graphormer.py:1 │
│ 32 in __call__ │
│ │
│ 129 │ │ │ else: # binary classification │
│ 130 │ │ │ │ batch["labels"] = torch.from_numpy(np.concatenate([i["labels"] for i in │
│ 131 │ │ else: # multi task classification, left to float to keep the NaNs │
│ ❱ 132 │ │ │ batch["labels"] = torch.from_numpy(np.stack([i["labels"] for i in features], │
│ 133 │ │ │
│ 134 │ │ return batch │
│ 135 │
│ in stack:179 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: _stack_dispatcher() got an unexpected keyword argument 'dim'<|||||>Hi @techthiyanes ,
Could you please provide the command you launched or a code snippet so I can make sure I'm working on the same thing as you?<|||||>Hi @clefourrier ,
Thank you for your time and response.
Please find below code snippet that i have tried where num_classes are not passed inside arguments as it's multi label classification.
# -*- coding: utf-8 -*-
"""Untitled334.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Xnz4vI75fkIdQVT6wKKiuDipoQzO4uZ1
"""
!pip install -q -U datasets transformers Cython accelerate
!pip install -q -U matplotlib networkx
from transformers.utils import is_cython_available
print("Cython is installed:", is_cython_available())
from datasets import load_dataset
dataset = load_dataset("OGB/ogbg-molpcba")
dataset['train'] = dataset['train'].select(list(range(1000)))
dataset['test'] = dataset['test'].select(list(range(100)))
dataset['validation'] = dataset['validation'].select(list(range(100)))
from datasets import load_metric
metric = load_metric("accuracy")
import networkx as nx
import matplotlib.pyplot as plt
# We want to plot the first train graph
graph = dataset["train"][0]
edges = graph["edge_index"]
num_edges = len(edges[0])
num_nodes = graph["num_nodes"]
# Conversion to networkx format
G = nx.Graph()
G.add_nodes_from(range(num_nodes))
G.add_edges_from([(edges[0][i], edges[1][i]) for i in range(num_edges)])
# Plot
nx.draw(G)
dataset
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
dataset_processed = dataset.map(preprocess_item, batched=False)
# split up training into training + validation
train_ds = dataset_processed['train']
val_ds = dataset_processed['validation']
from transformers import GraphormerForGraphClassification
model_checkpoint = "clefourrier/graphormer-base-pcqm4mv2" # pre-trained model from which to fine-tune
model = GraphormerForGraphClassification.from_pretrained(
model_checkpoint,
# num_classes=2, Commenting due to multi label
ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments(
"graph-classification",
logging_dir="graph-classification",
per_device_train_batch_size=64,
per_device_eval_batch_size=64,
auto_find_batch_size=True, # batch size can be changed automatically to prevent OOMs
gradient_accumulation_steps=10,
dataloader_num_workers=4,
num_train_epochs=20,
evaluation_strategy="epoch",
logging_strategy="epoch",
# push_to_hub=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=val_ds,
data_collator=GraphormerDataCollator()
)
trainer.train()
Thanks
Thiya<|||||>Ok, thank you very much for reporting!
I can reproduce your issue, I'll fix it asap<|||||>I fixed this problem in the PR above (now we need to wait for the fix to be merged, which will not be instantaneous). Thank you very much for reporting! :hugs:
Note that for multi-label classification, you will also need to provide the correct number of labels (in this case 128) to `num_classes`, like so:
```python
model_checkpoint = "clefourrier/graphormer-base-pcqm4mv2" # pre-trained model from which to fine-tune
model = GraphormerForGraphClassification.from_pretrained(
model_checkpoint,
num_classes=128, # HERE
ignore_mismatched_sizes = True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
``` |
transformers | 23,696 | closed | Unable to download Google/vit-base-patch-16-224 / Getting 404 tepo not found error | ### System Info
I am running the following code on Google Colab to download a pretrained Vision Transformer. I am authenticating with a proper write-access token, but I get a repo-not-found error.
Code:
from transformers import AutoModelForImageClassification, AutoFeatureExtractor
import torch
model_id = 'google/vit-base-patch-16-224'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForImageClassification.from_pretrained(model_id, use_auth_token=WRITE_TOKEN_HF).to(device)
model.eval()
Env: Google Colab
Error:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)
238 try:
--> 239 response.raise_for_status()
240 except HTTPError as e:
12 frames
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/google/vit-base-patch-16-224/resolve/main/config.json
The above exception was the direct cause of the following exception:
RepositoryNotFoundError Traceback (most recent call last)
RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-646cdb2a-00843123602c48d04fc1bc45)
Repository Not Found for url: https://huggingface.co/google/vit-base-patch-16-224/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py](https://localhost:8080/#) in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
422
423 except RepositoryNotFoundError:
--> 424 raise EnvironmentError(
425 f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
426 "listed on '[https://huggingface.co/models'\nIf](https://huggingface.co/models'/nIf) this is a private repository, make sure to "
OSError: google/vit-base-patch-16-224 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModelForImageClassification, AutoFeatureExtractor
import torch
model_id = 'google/vit-base-patch-16-224'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForImageClassification.from_pretrained(model_id, use_auth_token=WRITE_TOKEN_HF).to(device)
model.eval()
### Expected behavior
The model should be downloaded without error | 05-23-2023 15:31:23 | 05-23-2023 15:31:23 | As the error indicates, `google/vit-base-patch-16-224` does not exist on the Hub. You can browse ViT models [here](https://huggingface.co/models?other=vit).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
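For reference on the issue above: the closest public checkpoint id has no hyphen inside `patch16`. A minimal sketch, assuming `google/vit-base-patch16-224` was the model the snippet was after (it is public, so no auth token is needed):
```python
import torch
from transformers import AutoModelForImageClassification

# Assumption: the intended checkpoint is the public google/vit-base-patch16-224
# (note "patch16", not "patch-16"), so no write token is required to download it.
model_id = "google/vit-base-patch16-224"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForImageClassification.from_pretrained(model_id).to(device)
model.eval()
```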
transformers | 23,694 | closed | Fix a `BridgeTower` test | # What does this PR do?
A fix is required after #23029.
So far on main, an error is given
```bash
tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTrainingTest::test_training
(line 656) AssertionError: unexpectedly None : Gradients should not be None - got None for bridgetower.cross_modal_image_layers.1.attention.self.query.weight
``` | 05-23-2023 15:06:49 | 05-23-2023 15:06:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the fix! |
transformers | 23,693 | closed | ZeRO 3 error: expected the next 4 parameters in the parameter fetch queue to be ... but got () | ### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed with 2 NVIDIA RTX A5000 GPUs
### Who can help?
@stas00 may be best suited for this, since the issue is probably related to DeepSpeed
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Currently, I'm struggling to make a reproducible script, as the error happens suddenly during training with ZeRO stage 3 activated and I'm using a custom dataset. The task is contrastive-loss pretraining. The backbone is the [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) encoder model, followed by a custom Attention Pooling module. The parameters flagged in the error are the ones from this custom Attention Pooling module (the `attn_pool.*` projections in the log below).
Deepspeed version is `0.9.1`
The issue may be related to [this](https://github.com/microsoft/DeepSpeed/issues/1938), although the stack trace is not identical
The error shows up only when resuming from a checkpoint (`resume_from_checkpoint=/path/to/checkpoint`).
I'm attaching the log output (`error.txt`), along with the deepspeed ZeRO 3 configuration (`config_adam_zero3.txt`) I'm using, plus the custom model implementation (`modeling_custom_apr.txt`).
[config_adam_zero3.txt](https://github.com/huggingface/transformers/files/11545179/config_adam_zero3.txt)
[error.txt](https://github.com/huggingface/transformers/files/11545180/error.txt)
[modeling_custom_apr.txt](https://github.com/huggingface/transformers/files/11545354/modeling_custom_apr.txt)
This is the last part of the log where the error shows up
```
[2023-05-23 14:02:25,781] [INFO] [logging.py:96:log_dist] [Rank 0] step=14290, skipped=17, lr=[0.00014992267618019753], mom=[(0.9, 0.999)]
[2023-05-23 14:02:25,783] [INFO] [timer.py:199:stop] epoch=0/micro_step=2070/global_step=2070, RunningAvgSamplesPerSec=8.340844178398823, CurrSamplesPerSec=8.091999012978865, MemAllocated=0.4GB, MaxMemAllocated=19.03GB
{'loss': 1.0438, 'learning_rate': 0.00014992267618019753, 'epoch': 3.68}
[2023-05-23 14:02:36,757] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, but hysteresis is 2. Reducing hysteresis to 1
 5%|▍         | 14287/305600 [3:34:27<454:15:14,  5.61s/it]
5%|▍ | 14288/305600 [3:34:33<467:44:45, 5.78s/it]
5%|▍ | 14289/305600 [3:34:38<455:08:12, 5.62s/it]
5%|▍ | 14290/305600 [3:34:43<443:40:08, 5.48s/it]
5%|▍ | 14290/305600 [3:34:43<443:40:08, 5.48s/it]
5%|▍ | 14291/305600 [3:34:49<448:35:16, 5.54s/it]
5%|▍ | 14292/305600 [3:34:54<442:30:06, 5.47s/it]Traceback (most recent call last):
File "/mnt/beegfs/scratch/dcaffagni/runs/clpt_gpu_2_lr_154_cos_10k_wu/maticad_side/train.py", line 96, in <module>
train_out = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 2661, in training_step
loss = self.deepspeed.backward(loss)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1796, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1923, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 62, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 169, in backward
ctx.pre_backward_function(ctx.module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 419, in _run_before_backward_function
self.pre_sub_module_backward_function(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 500, in pre_sub_module_backward_function
param_coordinator.fetch_sub_module(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
Traceback (most recent call last):
File "/mnt/beegfs/scratch/dcaffagni/runs/clpt_gpu_2_lr_154_cos_10k_wu/maticad_side/train.py", line 96, in <module>
train_out = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/transformers/trainer.py", line 2661, in training_step
loss = self.deepspeed.backward(loss)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1796, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 1923, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 62, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/autograd/function.py", line 274, in apply
return user_fn(self, *args)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 169, in backward
ctx.pre_backward_function(ctx.module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 419, in _run_before_backward_function
self.pre_sub_module_backward_function(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 500, in pre_sub_module_backward_function
param_coordinator.fetch_sub_module(sub_module)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/homes/dcaffagni/.conda/envs/glpn_hf/lib/python3.9/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 288, in fetch_sub_module
raise RuntimeError(
RuntimeError: tracing error at step 999:
module id: 921, training: True
expected the next 4 parameters in the parameter fetch queue to be ({'id': 'name=attn_pool.k_proj.bias id=915', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.v_proj.bias id=919', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.c_proj.bias id=921', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}}, {'id': 'name=attn_pool.q_proj.bias id=917', 'status': 'AVAILABLE', 'numel': 512, 'ds_numel': 512, 'shape': (512,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {921}})
but got
().
```
### Expected behavior
After resuming from a checkpoint, the training should proceed fine, as it happens when training with the same setup from scratch.
 | 05-23-2023 14:42:24 | 05-23-2023 14:42:24 | Hi @dcaffo98, it'd be best to file this directly with DeepSpeed https://github.com/microsoft/DeepSpeed/issues since the issue is on the DeepSpeed side.
In general such issues relate to code that changes the model after it was initialized, but there are many complex nuanced situations so it's best to talk to the DS developers directly.<|||||>I've filed the issue to the DS team as well. It may be worth noting that the error happens right after the first detected OVERFLOW in the run. However, multiple overflows occurred during the previous 24h of training (before resuming from the checkpoint).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,692 | closed | Token Alignment | `data`
> DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: 4500
})
test: Dataset({
features: ['input', 'output'],
num_rows: 500
})
})
**# input (incorrect sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I want to align the output tokens with the input**
```
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )
    labels_batch = tokenizer.tokenize(batch['output'])  # original targets
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        aligned_labels_batch.append(align_targets(labels, word_ids))  # align_targets is another user-defined function called here
    # recall: the 'target' must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```
```
data.map(
tokenize_fn,
batched=True,
remove_columns=data['train'].column_names,
)
```
When this user-defined function is mapped to every record of the train and test splits, I am getting the following errors:
**1.** **raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."**
**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]** | 05-23-2023 14:06:10 | 05-23-2023 14:06:10 | Looks like this should go on the Datasets repo, it doesn't seem linked to Transformers :-)<|||||>> Looks like this should go on the Datasets repo, it doesn't seem linked to Transformers :-)
Could you help me in any way to fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
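A small sketch of where the `TypeError` above usually comes from, assuming a fast tokenizer (which the `word_ids()` call in the snippet implies): `tokenizer.tokenize` expects a single string, while `batch['output']` inside a batched `.map` is a list of strings.
```python
from transformers import AutoTokenizer

# Hypothetical checkpoint, just for illustration; any fast tokenizer behaves the same.
tok = AutoTokenizer.from_pretrained("bert-base-cased")

outputs = ["We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York."]
# tok.tokenize(outputs)                            # list input -> TypeError: TextEncodeInput must be Union[...]
labels_batch = [tok.tokenize(o) for o in outputs]  # tokenize each target string separately
print(labels_batch[0][:6])
```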
transformers | 23,691 | closed | Fix some docs what layerdrop does | # What does this PR do?
Fix some of the configuration docs describing what layerdrop does. The wording is copied from `configuration_opt.py`, and the documented defaults are reset to match the init values (see the sketch below for the behaviour being documented).
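For readers of the fixed docs, a toy sketch (not the library code) of the behaviour the `layerdrop` value controls — during training, each layer is skipped with probability `layerdrop`, OPT-style:
```python
import torch
import torch.nn as nn

layerdrop = 0.1
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])

def forward_layers(hidden_states, training=True):
    for layer in layers:
        # with probability `layerdrop`, skip this layer entirely for the current step
        if training and torch.rand(1).item() < layerdrop:
            continue
        hidden_states = layer(hidden_states)
    return hidden_states

print(forward_layers(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```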
Fixes # (issue)
https://github.com/huggingface/transformers/issues/23351
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @stevhliu, @MKhalusova and @sanchit-gandhi
| 05-23-2023 13:56:30 | 05-23-2023 13:56:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your fix! Just have one syntax comment to propagate on all docs.
Thanks for your suggestion! I will add more commits later. |
transformers | 23,690 | closed | feat: add warning for using use_pretrained_backbone with from_pretrained | # What does this PR do?
This PR adds a warning when using `from_pretrained` and you also have `use_pretrained_backbone` enabled. `from_pretrained` will load its weights after the pretrained backbone is initialized meaning some weights may be unexpectedly overridden. I am not sure if this case is general enough to warrant a warning. The more general case would be if any weights are loaded before the `from_pretrained` weights. An additional feature would be to allow selectively loading weights using `from_pretrained`, but I understand if that use case is too esoteric.
```
model = DetrForObjectDetection.from_pretrained(
"facebook/detr-resnet-50",
num_labels=len(categories.keys()),
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
num_queries=20,
backbone="resnet50",
use_pretrained_backbone=True,
use_timm_backbone=True,
)
```
For the scenario above, `use_pretrained_backbone` will not have any effect, as the `facebook/detr-resnet-50` weights will take precedence.
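A rough sketch of the kind of check being proposed (illustrative only — the function name and wording are made up for this example, not the actual diff):
```python
import logging

logger = logging.getLogger(__name__)

def maybe_warn_pretrained_backbone(config):
    # Hypothetical helper: warn that the checkpoint weights loaded by from_pretrained
    # will overwrite whatever the pretrained backbone was initialized with.
    if getattr(config, "use_pretrained_backbone", False):
        logger.warning(
            "`use_pretrained_backbone=True` has no lasting effect here: the weights "
            "loaded via `from_pretrained` will override the backbone's pretrained weights."
        )
```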
## Who can review?
Perhaps this is the area of @sgugger or @amyeroberts | 05-23-2023 13:50:30 | 05-23-2023 13:50:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23690). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,689 | closed | #23675 Registering Malay language | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 05-23-2023 13:32:41 | 05-23-2023 13:32:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Please only add translated files in that folder. There is no need to copy the whole English documentation.<|||||>translated some sections of the _toctree.yml file into Malay language |
transformers | 23,688 | closed | LlamaForCausalLM generate() runtime error when top_p=0 | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-4.4.0-210-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.3
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This example follows and modifies the LLaMA documentation at: https://huggingface.co/docs/transformers/v4.29.1/model_doc/llama.
```python
from transformers import AutoTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS).cuda()
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt").cuda()
# Generate
generate_ids = model.generate(
inputs.input_ids, max_length=1024,
do_sample=True,
temperature=0.6,
top_k=1000,
top_p=0.0, # cause error
repetition_penalty=(1.0 / 0.85),
)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
>RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
### Expected behavior
`generate()` should check if `top_p==0` and disables it to avoid numerical error. This behavior is the same as when `top_p==1` and consistent with the documentation. | 05-23-2023 12:40:53 | 05-23-2023 12:40:53 | cc @gante <|||||>Hi @tranhungnghiep -- there is indeed an issue, but not on allowing p=0 :) I will open a PR to fix it (feel free to check the solution there, after it gets linked on this issue)
Please note that setting `top_p=0` is effectively the same as doing `do_sample=False` ⚠️ |
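Continuing from the reproduction snippet in the issue above, a sketch of the equivalent call — since `top_p=0` keeps only the single most likely token, plain greedy decoding gives effectively the same result without tripping the numerical error:
```python
# `model` and `inputs` as defined in the reproduction snippet above.
generate_ids = model.generate(
    inputs.input_ids,
    max_length=1024,
    do_sample=False,                 # greedy decoding
    repetition_penalty=(1.0 / 0.85),
)
```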
transformers | 23,687 | closed | [RWKV] Inference memory leak unless use_cache=False is specified | ### System Info
- `transformers` version: 4.29.2
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.13.0-rc0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Memory usage rapidly climbs when running a stripped-down version of the RWKV overview example [here](https://huggingface.co/docs/transformers/model_doc/rwkv#overview), in a loop:
```python
from transformers import AutoTokenizer, RwkvModel
model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")
for _ in range(1000):
    inputs = tokenizer("This is an example.", return_tensors="pt")
    # Feed everything to the model
    model(inputs["input_ids"])  # <--- memory leak
```
Passing `use_cache=False` to the forward step solves this, though it's not clear why, since the cached state 'should' be bounded to 5 entries.
### Expected behavior
Stable memory usage | 05-23-2023 12:24:46 | 05-23-2023 12:24:46 | This is because you are not executing your forward pass under a `torch.no_grad`, so the gradient history blows up via the state of the outputs. Either do this or manually detach the states to avoid this memory use. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
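A minimal sketch of the fix described in the reply above — running the loop without building an autograd graph (detaching the returned state would work as well):
```python
import torch
from transformers import AutoTokenizer, RwkvModel

model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")
with torch.no_grad():                  # no gradient history accumulates
    for _ in range(1000):
        model(inputs["input_ids"])     # memory usage now stays flat
```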
transformers | 23,686 | closed | support for model.generate with assistant_model / model being load_in_8bit and PeftModel (LoRA) | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.29.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, RTX3090ti
- Using distributed or parallel set-up in script?: idk, I use accelerate via: device_map="auto", and load_in_8bit=True
### Who can help?
@gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduce:
1: Setup — load the PEFT model as `assistant_model` (BLOOM with LoRA, load_in_8bit):
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "aari1995/GermanGPT_dolly_lora_1b5"
config = PeftConfig.from_pretrained(peft_model_id)
assistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to("cuda:0")#, return_dict=True, load_in_8bit=True, device_map='auto')
# Load the Lora model
assistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id)
```
2: Load Bloom model (load_in_8bit, with or without lora):
```python
model = AutoModelForCausalLM.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1",load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1")
```
3. Generate using the PeftModel as follows:
```python
prompt = "<|prompter|>Hallo<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = assistant_model.generate(**inputs, assistant_model=model,use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
### Expected behavior
I expected to get an assisted generation, however it gets stuck, for example here:
generation/utils.py
-> 4253 prev_seq_len = model_kwargs["assistant_past_key_values"][0][assistant_kv_indexing].shape[-2]
'NoneType' object is not subscriptable
I suspect it may be due to the fact that I use load_in_8bit=True and also use PeftModel.
Thanks! | 05-23-2023 12:20:05 | 05-23-2023 12:20:05 | cc @younesbelkada since @gante is on vacation.<|||||>Same thing here, did not realize this was because of 8-bit.
Following the instructions in the blog post for assisted generation, I run into some issues. (FYI, both the longform_model and assistant_model are 8-bit finetuned versions of OPT, which is the exact same model used in the blog post.)
First, when I do exactly what's in the post:
```
prompt = prompt + "\nAnswer:"
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = longform_model.generate(**inputs, assistant_model=assistant_model)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
I get an error telling me that assisted generation requires `use_cache=True`. Hmm... weird, and the blog post didn't seem to need to use that argument, but okay, let's try it!
```
prompt = prompt + "\nAnswer:"
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = longform_model.generate(**inputs, assistant_model=assistant_model, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
Then this happens:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-e9645bbc79d4> in <module>
----> 1 generate_from_prompt("Which is a species of fish? Tope or rope?")
<ipython-input-9-14fc80d284ea> in generate_from_prompt(prompt)
2 prompt = prompt + "\nAnswer:"
3 inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
----> 4 outputs = longform_model.generate(**inputs, assistant_model=assistant_model, use_cache=True)
5 print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
/usr/lib/python3/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
~/.local/lib/python3.8/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
1493
1494 # 12. run assisted generate
-> 1495 return self.assisted_decoding(
1496 input_ids,
1497 assistant_model=assistant_model,
~/.local/lib/python3.8/site-packages/transformers/generation/utils.py in assisted_decoding(self, input_ids, assistant_model, do_sample, logits_processor, logits_warper, stopping_criteria, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
4253 # 1.1. use the assistant model to obtain the next candidate logits
4254 if "assistant_past_key_values" in model_kwargs:
-> 4255 prev_seq_len = model_kwargs["assistant_past_key_values"][0][assistant_kv_indexing].shape[-2]
4256 # `new_token_len` can be 1 or 2 (next token in assistant + last token picked by the larger model)
4257 new_token_len = candidate_input_ids.shape[1] - prev_seq_len
TypeError: 'NoneType' object is not subscriptable
```
I'm using bleeding edge version of Transformers, so I'm curious what I'm doing wrong here, or else maybe this is just a bug.<|||||>Hey @achibb @andersonbcdefg 👋 First of all, apologies for the delay :)
I looked at your script, and the root cause for the exception on my end (with transformers v4.30, peft 0.3.0, and torch 2.0.0) was in the execution of the PEFT model -- it had caching set to False. Assisted generation needs caching on both models, so manually setting the config fixed it. This means the `use_cache` argument in `generate()` is not being piped correctly, for which I'll open a PR 🤗
______________________________________
(temporary fix until the PR gets merged)
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "aari1995/GermanGPT_dolly_lora_1b5"
config = PeftConfig.from_pretrained(peft_model_id)
assistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to("cuda:0")#, return_dict=True, load_in_8bit=True, device_map='auto')
# Load the Lora model
assistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id).to("cuda:0")
assistant_model.config.use_cache = True
model = AutoModelForCausalLM.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1",load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1")
prompt = "<|prompter|>Hallo<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = assistant_model.generate(**inputs, assistant_model=model, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```<|||||>@gante thanks, however I see a mistake on my side, I accidentally switched the models. My generation model is a regular model and my assistant is a PEFT model, so:
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "aari1995/GermanGPT_dolly_lora_1b5"
config = PeftConfig.from_pretrained(peft_model_id)
assistant_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).to("cuda:0")#, return_dict=True, load_in_8bit=True, device_map='auto')
# Load the Lora model
assistant_model = PeftModel.from_pretrained(assistant_model, peft_model_id).to("cuda:0")
assistant_model.config.use_cache = True
model = AutoModelForCausalLM.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1",load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("malteos/bloom-6b4-clp-german-oasst-v0.1")
prompt = "<|prompter|>Hallo<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, assistant_model=assistant_model, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
Here I get the error:
RuntimeError: The expanded size of the tensor (11) must match the existing size (12) at non-singleton dimension 2.
Target sizes: [16, 0, 11]. Tensor sizes: [16, 1, 12]
I hope your vacation was nice!<|||||>(@achibb looking at your newest comment to determine whether the issue needs to be reopened :) )<|||||>@achibb the root cause was the different class name, when the model gets loaded with PEFT. See the PR description in #24198 to see how it impacted the script you were trying to run :p
After the PR gets merged, you will be able to run the script you pasted above!<|||||>It should be sorted now -- try running your script from `main` :)<|||||>perfect, works now! Thanks :) |
transformers | 23,685 | closed | Add albert resources | # What does this PR do?
Fixes #20055
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
| 05-23-2023 11:03:40 | 05-23-2023 11:03:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,684 | closed | [`SAM`] Fixes pipeline and adds a dummy pipeline test | # What does this PR do?
1- Fixes the automatic mask generation pipeline, currently on the main branch, the script below
```python
from transformers import pipeline
from PIL import Image
import requests
generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0)
img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
outputs = generator(raw_image, points_per_batch=64)
```
is broken
2- Adds a dummy pipeline test. I know the pipelines are already tested in tests/pipeline but these tests are quite slow. I thought adding a small dummy pipeline test makes it easier for future contributors to make sure they will not break the pipeline without having to run the entire pipeline testing suite for SAM.
cc @ArthurZucker @ydshieh @sgugger | 05-23-2023 10:05:21 | 05-23-2023 10:05:21 | _The documentation is not available anymore as the PR was closed or merged._ |
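Purely as an illustration of point 2 (this is not the test added in the PR, and it uses the full `facebook/sam-vit-base` checkpoint rather than a tiny dummy model), a smoke check for the snippet above might look like:
```python
from transformers import pipeline

def test_mask_generation_smoke():
    generator = pipeline("mask-generation", model="facebook/sam-vit-base")
    img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
    outputs = generator(img_url, points_per_batch=64)
    assert len(outputs["masks"]) > 0   # the pipeline should return at least one mask
```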
transformers | 23,683 | closed | fix: use bool instead of uint8/byte in Deberta/DebertaV2/SEW-D to make it compatible with TensorRT | # What does this PR do?
TensorRT cannot accept an ONNX graph with uint8/byte intermediate tensors. This PR uses bool tensors instead of uint8/byte tensors to make the exported ONNX file compatible with TensorRT.
| 05-23-2023 10:05:03 | 05-23-2023 10:05:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I also fix Deberta/SEW-D since they have the similar compatible issue.<|||||>> Could you add a very small snippet to reproduce this just to show the previous error make sure this actually works?
@ArthurZucker Use the following code to export DebertaV2 to an ONNX file:
```python
# This script exports DebertaV2 models from HuggingFace directly.
import gc
import torch
from transformers import DebertaV2Model
from transformers.activations import FastGELUActivation
import mock
def make_log_bucket_position(relative_pos, bucket_size, max_position):
sign = torch.sign(relative_pos.float())
mid = bucket_size // 2
abs_pos = torch.where(
(relative_pos < mid) & (relative_pos > -mid),
torch.tensor(mid - 1).type_as(relative_pos),
torch.abs(relative_pos),
)
log_pos = (
torch.ceil(
torch.log(abs_pos / mid)
/ torch.log(torch.tensor((max_position - 1) / mid))
* (mid - 1)
)
+ mid
)
bucket_pos = torch.where(
abs_pos <= mid, relative_pos.type_as(log_pos), log_pos * sign
)
return bucket_pos
# The following patch converts the input tensor of sign to float32 dtype,
# since sign op in TensorRT does not support int dtype.
@mock.patch(
"transformers.models.deberta_v2.modeling_deberta_v2.make_log_bucket_position",
make_log_bucket_position,
)
def export_deberta(model_name, max_seq_len, fast_gelu=False):
model = DebertaV2Model.from_pretrained(model_name)
gelu_tag = ""
if fast_gelu:
for layer in model.encoder.layer:
layer.intermediate.intermediate_act_fn = FastGELUActivation()
gelu_tag = "-gelu-tanh"
input_ids = torch.zeros((1, max_seq_len // 2), dtype=torch.int)
attention_mask = torch.zeros((1, max_seq_len // 2), dtype=torch.int)
args = (
input_ids,
{"attention_mask": attention_mask},
)
base_model_name = model_name[model_name.rfind("/") + 1 :]
torch.onnx.export(
model,
args,
f"{base_model_name}{gelu_tag}.onnx",
input_names=["input_ids", "attention_mask"],
output_names=["last_hidden_state"],
opset_version=13,
dynamic_axes={
"input_ids": {0: "batch", 1: "sequence"},
"attention_mask": {0: "batch", 1: "sequence"},
"last_hidden_state": {0: "batch", 1: "sequence"},
},
)
if __name__ == "__main__":
export_deberta("microsoft/deberta-v3-large", 4096, True)
```
Use TensorRT to convert ONNX file to engine file:
```bash
trtexec --onnx=deberta-v3-large-gelu-tanh.onnx --explicitBatch --fp16 --shapes=input_ids:1x2048,attention_mask:1x2048 --memPoolSize=workspace:4096 --timingCacheFile=./deberta-v3-large-bs1-seq2048.cache --saveEngine=deberta-v3-large-bs1-seq2048 --buildOnly
```
You will see the following error log:
```
[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:800: While parsing node number 73 [Cast -> "/encoder/Cast_output_0"]:
[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:801: --- Begin node ---
input: "/encoder/Mul_output_0"
output: "/encoder/Cast_output_0"
name: "/encoder/Cast"
op_type: "Cast"
attribute {
name: "to"
i: 2
type: INT
}
[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:802: --- End node ---
[05/24/2023-03:42:37] [E] [TRT] ModelImporter.cpp:804: ERROR: ModelImporter.cpp:239 In function parseNode:
[8] Assertion failed: legalUINT8 && "TensorRT does not support UINT8 types for intermediate tensors!"
[05/24/2023-03:42:37] [E] Failed to parse onnx file
[05/24/2023-03:42:38] [I] Finished parsing network model. Parse time: 15.5136
[05/24/2023-03:42:38] [E] Parsing model failed
[05/24/2023-03:42:38] [E] Failed to create engine from model or file.
[05/24/2023-03:42:38] [E] Engine set up failed
```
After applying this MR, the error above is gone. |
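An illustrative sketch of the kind of change this PR makes (not the actual diff): build boolean masks with `.bool()` instead of `.byte()`, since TensorRT rejects uint8 intermediate tensors.
```python
import torch

attention_mask = torch.ones(1, 8, dtype=torch.int64)
mask_byte = attention_mask.byte()   # dtype torch.uint8 -> rejected by the TensorRT ONNX parser
mask_bool = attention_mask.bool()   # dtype torch.bool  -> TensorRT-friendly
print(mask_byte.dtype, mask_bool.dtype)
```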
transformers | 23,682 | closed | Fix SAM | # What does this PR do?
Some failures are introduced in #23656. I only checked the TF tests but not the PT tests. Sorry.
cc @Rocketknight1 : it's likely a hardware difference. Always nice if we can get the results from GCP VM, but I understand it makes your workflow a bit difficult. | 05-23-2023 10:04:58 | 05-23-2023 10:04:58 | Forgot to mention: kudos to @younesbelkada for spotting this <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The tolerance values changed in this PR are not ones that were changed in #23656, so there is nothing to revert back. |
transformers | 23,681 | closed | Fix sagemaker DP/MP | # What does this PR do?
Fixes the broken sagemaker tests, proved it works.
Solves https://github.com/huggingface/transformers/issues/23390
Needs to be coordinated with the Accelerate pr as well | 05-23-2023 09:47:30 | 05-23-2023 09:47:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello, devs. I encountered this error on Google Colab (Python 3.10) and transformers 4.30.1.
```
❱ 3386 │ │ │ self.args.distributed_state is None and self.local_rank !
AttributeError: 'CLTrainer' object has no attribute 'local_rank'
```
It looks like the line was added by this change.
By looking through the code, I suspect `self.local_rank` should be `self.args.local_rank`. I'm fairly new to this library, so apologies if my guess is wrong.<|||||>Indeed. Would you like to open a PR with the fix?<|||||>Sure, I can open a PR in a few days. But I'm actually pretty new to this repo, so please feel free to make a quick fix for that.<|||||>cc @muellerzr might be worth a quick fix.<|||||>Hi. I just opened #24297. |
transformers | 23,679 | closed | How to check word ids for BartTokenizer? | one single words is getting splitted to multiple tokens, and i tried to check word_ids by using **t.word_ids()**
i got an Error
**ValueError: word_ids() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).** | 05-23-2023 08:41:30 | 05-23-2023 08:41:30 | As the error indicates, you need to use `BartTokenizerFast` to use this method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,676 | closed | About Tokenizer | ### System Info
### Who can help?
@ArthurZucker @younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
tokenizer = BertTokenizer.from_pretrained(gpt2_path)
### Expected behavior
I see the following code:
tokenizer = BertTokenizer.from_pretrained(gpt2_path)
This code uses BertTokenizer to read GPT2 related files. What is the difference between the above code and the following two codes?
tokenizer = BertTokenizer.from_pretrained(bert_path)
tokenizer = AutoTokenizer.from_pretrained(gpt2_path) | 05-23-2023 07:56:25 | 05-23-2023 07:56:25 | Please use the [forums](https://discuss.huggingface.co/) for questions around the library as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,675 | open | [i18n-ms, ISO 639-1] Translating docs to Malay | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Malay-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ms` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ms/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## internal
## main_classes
## model_doc
## tasks
- [ ] asr.mdx
- [ ] audio_classification.mdx
- [ ] document_question_answering.mdx
- [ ] image_captioning.mdx
- [ ] image_classification.mdx
- [ ] language_modeling.mdx
- [ ] masked_language_modeling.mdx
- [ ] monocular_depth_estimation.mdx
- [ ] multiple_choice.mdx
- [ ] object_detection.mdx
- [ ] question_answering.mdx
- [ ] semantic_segmentation.mdx
- [ ] sequence_classification.mdx
- [ ] summarization.mdx
- [ ] text-to-speech.mdx
- [ ] token_classification.mdx
- [ ] translation.mdx
- [ ] video_classification.mdx
- [ ] zero_shot_image_classification.mdx
- [ ] zero_shot_object_detection.mdx
- [ ] _config.py
- [ ] _toctree.yml
- [ ] accelerate.mdx
- [ ] add_new_model.mdx
- [ ] add_new_pipeline.mdx
- [ ] add_tensorflow_model.mdx
- [ ] attention.mdx
- [ ] autoclass_tutorial.mdx
- [ ] benchmarks.mdx
- [ ] bertology.mdx
- [ ] big_models.mdx
- [ ] community.mdx
- [ ] contributing.md
- [ ] create_a_model.mdx
- [ ] custom_models.mdx
- [ ] custom_tools.mdx
- [ ] debugging.mdx
- [ ] fast_tokenizers.mdx
- [ ] generation_strategies.mdx
- [ ] glossary.mdx
- [ ] hpo_train.mdx
- [ ] index.mdx
- [ ] installation.mdx
- [ ] model_sharing.mdx
- [ ] model_summary.mdx
- [ ] multilingual.mdx
- [ ] notebooks.md
- [ ] pad_truncation.mdx
- [ ] perf_hardware.mdx
- [ ] perf_infer_cpu.mdx
- [ ] perf_infer_gpu_many.mdx
- [ ] perf_infer_gpu_one.mdx
- [ ] perf_infer_special.mdx
- [ ] perf_train_tpu.mdx
- [ ] perf_train_tpu_tf.mdx
- [ ] performance.mdx
- [ ] perplexity.mdx
- [ ] philosophy.mdx
- [ ] pipeline_tutorial.mdx
- [ ] pipeline_webserver.mdx
- [ ] pr_checks.mdx
- [ ] preprocessing.mdx
- [ ] quicktour.mdx
- [ ] run_scripts.mdx
- [ ] sagemaker.mdx
- [ ] serialization.mdx
- [ ] task_summary.mdx
- [ ] tasks_explained.mdx
- [ ] testing.mdx
- [ ] tf_xla.mdx
- [ ] tokenizer_summary.mdx
- [ ] torchscript.mdx
- [ ] training.mdx
- [ ] transformers_agents.mdx
- [ ] troubleshooting.mdx
<!--
Keep on adding more as you go 🔥
-->
| 05-23-2023 06:52:06 | 05-23-2023 06:52:06 | I would like to work on the 'Get Started' and 'Tutorial' sections<|||||>Could you finish editing the template in the first comment on your issue?<|||||>hi, i have edited the template. Can I clarify, must I translate a few documents first before submitting a pull request? or do I submit a pull request when registering the new language?<|||||>No you can submit a pull request with just one new translated doc, thanks! |
transformers | 23,674 | open | custom stopping_critriea function doesn't receive logits scores (receives None instead) | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Reproduction Steps:
1. Initialize a BART model & its tokenizer (in my case it is facebook/bart-large)
2. Create a custom stopping_criteria function and add it to a StoppingCriteriaList object
3. Run model.generate() with your stopping criteria list as an argument
The scores argument is always None.
Example code:
```python
import torch
from transformers import StoppingCriteriaList, BartForConditionalGeneration, BartTokenizer
def custom_stopping_criteria(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
    print("Scores:", scores)
    return False
stopping_criteria = StoppingCriteriaList([custom_stopping_criteria])
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
model.generate(batch["input_ids"], stopping_criteria=stopping_criteria)
```
The above code uses a stopping criterion that just prints the scores value when called (which prints None).
### Expected behavior
The expected behavior should be to have Scores logits populated with values instead of being None (values before or after softmax don't matter) | 05-23-2023 04:32:09 | 05-23-2023 04:32:09 | cc @gante <|||||>Hey @Gandalf098 (the white, I hope ;) )
By default, the scores are not initialized and are kept as `None` (see [here](https://github.com/huggingface/transformers/blob/5fa0a1b23b6a79eb646636dbf9a22cb34ff48a74/src/transformers/generation/utils.py#L2308)). To enable score-keeping, you must pass `return_dict_in_generate=True, output_scores=True` to your `.generate()` call.
____________________________________________
```py
import torch
from transformers import StoppingCriteriaList, BartForConditionalGeneration, BartTokenizer
def custom_stopping_criteria(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
    print("Scores:", scores)
    return False
stopping_criteria = StoppingCriteriaList([custom_stopping_criteria])
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
model.generate(batch["input_ids"], stopping_criteria=stopping_criteria, return_dict_in_generate=True, output_scores=True)
```<|||||>Hi @gante and @Gandalf098,
According to the `StoppingCriteria.__call__` [signature](https://huggingface.co/docs/transformers/v4.30.0/en/internal/generation_utils#transformers.StoppingCriteria) and to its docstring, `scores` is supposed to be a `torch.FloatTensor`.
> scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head.
It makes sense to think of it as the **last** prediction scores of the language modeling head, meaning that the score-keeping here refers not to `score` (optional history of the prediction scores) but to `next_token_scores` (always available last prediction scores - at least for [greedy decoding](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/generation/utils.py#L2400-L2401), we should verify for other decoding strategies).
In that sense, I do think we should correct this point. What do you think @gante?
<|||||>We might want to build some stopping criteria based on a sequence of tokens/sequence of scores, so this API is more general 🤗
We do need better docs and/or input validation, though, to detect these issues in advance. It is my priority for this month (and I'm keeping this issue open so I don't forget to address this case) |
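To illustrate the more general use the comment above hints at, a sketch of a criterion driven purely by the generated tokens (so no extra score-keeping flags are needed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopOnToken(StoppingCriteria):
    """Stop generation as soon as every sequence ends with a chosen token id."""
    def __init__(self, stop_token_id: int):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return bool((input_ids[:, -1] == self.stop_token_id).all())

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("Hello, my name is", return_tensors="pt")
stop_id = tok.convert_tokens_to_ids(".")  # example: stop at the first "." token
out = model.generate(**inputs, max_new_tokens=40,
                     stopping_criteria=StoppingCriteriaList([StopOnToken(stop_id)]))
print(tok.decode(out[0]))
```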
transformers | 23,673 | closed | Bump requests from 2.27.1 to 2.31.0 in /examples/research_projects/decision_transformer | Bumps [requests](https://github.com/psf/requests) from 2.27.1 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.27.1...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 05-23-2023 03:27:44 | 05-23-2023 03:27:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,672 | closed | Audio related Transformer | I'm now trying to use audio-related Transformer, like Conformer, Audio Spectrogram Transformer, or Whisper, to process audio information. However, our input is ark (with certain dimensions of audio features) and scp files instead of wav form. I tried to use your library but it seems to have errors while processing ark/scp files. Are there any functions to process ark/scp directly, and are there any examples to show their usages? Thanks a lot | 05-23-2023 02:19:54 | 05-23-2023 02:19:54 | cc @sanchit-gandhi <|||||>Hey @chlorane - did you get the `ark`/`scp` files using the Kaldi library? I think in this case you're better off just passing the raw `.wav` files directly to the `transformers` models, rather than first through the Kaldi pre-processing to get `ark`/`scp` and then trying to force them through the `transformers` models. The only information I can find regarding converting `ark`/`scp` to `wav` is this thread: https://groups.google.com/g/kaldi-help/c/t6Ra3uHiDJQ/m/R6e01pF5CwAJ For questions such as this one, you might have more luck posting on the Hugging Face forum, where others in the community can pitch-in based on their experiences: https://discuss.huggingface.co
Note that all audio models in the `transformers` library are designed to work directly with audio inputs, as per their respective papers. The `ark`/`scp` file formats first convert the raw audio inputs to either MFCC features or some other feature extracted form, thus these aren't compatible with models that expect raw audio inputs.<|||||>> Hey @chlorane - did you get the `ark`/`scp` files using the Kaldi library? I think in this case you're better off just passing the raw `.wav` files directly to the `transformers` models, rather than first through the Kaldi pre-processing to get `ark`/`scp` and then trying to force them through the `transformers` models. The only information I can find regarding converting `ark`/`scp` to `wav` is this thread: https://groups.google.com/g/kaldi-help/c/t6Ra3uHiDJQ/m/R6e01pF5CwAJ For questions such as this one, you might have more luck posting on the Hugging Face forum, where others in the community can pitch-in based on their experiences: https://discuss.huggingface.co
>
> Note that all audio models in the `transformers` library are designed to work directly with audio inputs, as per their respective papers. The `ark`/`scp` file formats first convert the raw audio inputs to either MFCC features or some other feature extracted form, thus these aren't compatible with models that expect raw audio inputs.
Because our dataset has only (full) ark and corresponding scp files. Some wav files in the dataset are not available<|||||>How did you obtain the `ark`/`scp` files - is this a reversible process? I think in this case converting the files to `.wav` is your best bet<|||||>> How did you obtain the `ark`/`scp` files - is this a reversible process? I think in this case converting the files to `.wav` is your best bet
I think this is not very reversible. They are retrieved using Kaldi, but we don't have all the original voice in the dataset<|||||>Then I'm afraid I don't think it's possible to use these file formats. Have you asked on the Hugging Face forum? You could also check on the Kaldi repo to see if there's any advice there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving this closed since the audio file issue is related to a file format derived from the Kaldi repository (where I still think is the best place to ask for help!)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,671 | closed | AutoTokenizer Encode Error | ### System Info
For the LlamaTokenizer, I can get correct encoding result when directly loading from LlamaTokenizer. But the results are incorrect when using AutoTokenizer. Another issue is loading the AutoTokenizer much slower than directly loading the LlamaTokenizer. It take around 4 mins to load the tokenizer from the path when using AutoTokenizer, while it only takes one second if directly using the LlamaTokenizer.
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Python version: 3.8.16
transformers version: 4.28.1
Follow the given example:
```
from transformers import LlamaTokenizer, AutoTokenizer
model_path = 'openlm-research/open_llama_7b_700bt_preview'
str = ' is embarassed, because Samantha made snide comments about the shirt Rebecca was wearing.'
tokenizer1 = LlamaTokenizer.from_pretrained(model_path)
tokenizer2 = AutoTokenizer.from_pretrained(model_path)
ret1 = tokenizer1.encode(str, add_special_tokens=False)
ret2 = tokenizer2.encode(str, add_special_tokens=False)
print(ret1)
print(ret2)
```
### Expected behavior
ret1: [322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret2: [31822, 322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret1 is the expected output and ret2 is an error result from AutoTokenizer. AutoTokenizer add an additional token, 31822 (which is a space token), to the encoding results.
| 05-22-2023 23:51:07 | 05-22-2023 23:51:07 | Hey! You are using an old version of the tokenizer. You should be using the one available [here](https://huggingface.co/huggyllama/llama-7b). This issue was already fixed.
AutoTokenizer has to convert the slow tokenizer to a fast one, which takes of course a lot of time since the model was not saved on the shared repo. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,670 | closed | Bump requests from 2.22.0 to 2.31.0 in /examples/research_projects/visual_bert | Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.22.0...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 05-22-2023 23:13:09 | 05-22-2023 23:13:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23670). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,669 | closed | Expose default_to_square parameter in CLIPImageProcessor | I'm looking to train an image model without cropping the input's sides (either horizontally or vertically). But I noticed that in this [`CLIPImageProcessor` class](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clip/image_processing_clip.py#LL51C9-L51C9), the `default_to_square` parameter is hard-coded to `False`. Is there any way I can still modify this so that my input in not cropped as a result of the resize and center_crop combination of transforms? | 05-22-2023 23:05:51 | 05-22-2023 23:05:51 | cc @amyeroberts <|||||>Hi @shubhamgoel27,
`default_to_square` is used in `get_size_dict` in order to control the behaviour when converting old configuration values (int, tuples or lists) to the expected dictionary format for the `size` parameter. As such, it's tied to the image processor class and isn't meant to be modified.
If I've understood correctly, you'd like to use the CLIPImageProcessor, but not perform resizing or cropping of the images. For all image processors, all transformations can be turned on / off with the `do_xxx` flags either during instantiation or calling. To not resize or crop the input images:
```python
from transformers import CLIPImageProcessor
image_processor = CLIPImageProcessor("openai/clip-vit-base-patch32")
inputs = image_processor(images=images, do_resize=False, do_center_crop=False)
```
Note: if `do_resize=False` and `do_center_crop=False`, then all the input images but be of the same (height, width) dimensions in order to create a batch. <|||||>Hey @amyeroberts ,
Thanks for the swift response.
My use-case is to not crop the image during the resize step, but still resize it to a smaller size (e.g. 224x244). So if the original image is 576x1024, the resize method would stretch/squeeze whichever dimension necessary and return a 224x224 image. But since the `default_to_square` parameter is hard-coded to `False`, I couldn't find a way to do so using the CLIPImageProcessor.
P.S. The context around this is that I don't want to crop useful information out from either sides (horizontal or vertical) during the pre-processing stage, as it might have a lot of value for the domain I'm interested in. <|||||>@shubhamgoel27 Is there a reason that you specifically want to use CLIP's image processor? All of the image processors are implemented to be aligned with the processing in the model paper, so it's not always possible to adapt it to every need. For your use case, the simplest approach would be to use another model's image processor, specifically ViT's. This image processor does three simple transformations:
* Resizes the images to 224x224
* Rescales the pixel values to be between 0-1
* Normalizes the pixel values with a given image mean and std
If it's necessary to have the same normalization constants as those used in CLIP, these ca be passed in when instantiating the class e.g.:
```python
from transformers import ViTImageProcessor
from transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD
image_processor = ViTImageProcessor(image_mean=OPENAI_CLIP_MEAN, image_std=OPENAI_CLIP_STD)
```<|||||>@amyeroberts I'm finetuning the VIT component of a CLIP model, so was trying to use `CLIPImageProcessor`. But it looks like the `ViTImageProcessor` is allowing for both height and width in the resize method without using the `default_to_square=False`. So that should most likely be enough for my use-case. Thanks for pointing it out :)
|
transformers | 23,668 | closed | Bump requests from 2.22.0 to 2.31.0 in /examples/research_projects/lxmert | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [requests](https://github.com/psf/requests) from 2.22.0 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>v2.30.0</h2>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>v2.29.0</h2>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/blob/main/HISTORY.md">requests's changelog</a>.</em></p>
<blockquote>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<li>
<p>Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of <code>Proxy-Authorization</code> headers to destination servers when
following HTTPS redirects.</p>
<p>When proxies are defined with user info (<a href="https://user:pass@proxy:8080">https://user:pass@proxy:8080</a>), Requests
will construct a <code>Proxy-Authorization</code> header that is attached to the request to
authenticate with the proxy.</p>
<p>In cases where Requests receives a redirect response, it previously reattached
the <code>Proxy-Authorization</code> header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are <em>strongly</em> encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.</p>
<p>Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.</p>
<p>Full details can be read in our <a href="https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q">Github Security Advisory</a>
and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-32681">CVE-2023-32681</a>.</p>
</li>
</ul>
<h2>2.30.0 (2023-05-03)</h2>
<p><strong>Dependencies</strong></p>
<ul>
<li>
<p>⚠️ Added support for urllib3 2.0. ⚠️</p>
<p>This may contain minor breaking changes so we advise careful testing and
reviewing <a href="https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html">https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html</a>
prior to upgrading.</p>
<p>Users who wish to stay on urllib3 1.x can pin to <code>urllib3<2</code>.</p>
</li>
</ul>
<h2>2.29.0 (2023-04-26)</h2>
<p><strong>Improvements</strong></p>
<ul>
<li>Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (<a href="https://redirect.github.com/psf/requests/issues/6226">#6226</a>)</li>
<li>Requests relaxes header component requirements to support bytes/str subclasses. (<a href="https://redirect.github.com/psf/requests/issues/6356">#6356</a>)</li>
</ul>
<h2>2.28.2 (2023-01-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/psf/requests/commit/147c8511ddbfa5e8f71bbf5c18ede0c4ceb3bba4"><code>147c851</code></a> v2.31.0</li>
<li><a href="https://github.com/psf/requests/commit/74ea7cf7a6a27a4eeb2ae24e162bcc942a6706d5"><code>74ea7cf</code></a> Merge pull request from GHSA-j8r2-6x86-q33q</li>
<li><a href="https://github.com/psf/requests/commit/302225334678490ec66b3614a9dddb8a02c5f4fe"><code>3022253</code></a> test on pypy 3.8 and pypy 3.9 on windows and macos (<a href="https://redirect.github.com/psf/requests/issues/6424">#6424</a>)</li>
<li><a href="https://github.com/psf/requests/commit/b639e66c816514e40604d46f0088fbceec1a5149"><code>b639e66</code></a> test on py3.12 (<a href="https://redirect.github.com/psf/requests/issues/6448">#6448</a>)</li>
<li><a href="https://github.com/psf/requests/commit/d3d504436ef0c2ac7ec8af13738b04dcc8c694be"><code>d3d5044</code></a> Fixed a small typo (<a href="https://redirect.github.com/psf/requests/issues/6452">#6452</a>)</li>
<li><a href="https://github.com/psf/requests/commit/2ad18e0e10e7d7ecd5384c378f25ec8821a10a29"><code>2ad18e0</code></a> v2.30.0</li>
<li><a href="https://github.com/psf/requests/commit/f2629e9e3c7ce3c3c8c025bcd8db551101cbc773"><code>f2629e9</code></a> Remove strict parameter (<a href="https://redirect.github.com/psf/requests/issues/6434">#6434</a>)</li>
<li><a href="https://github.com/psf/requests/commit/87d63de8739263bbe17034fba2285c79780da7e8"><code>87d63de</code></a> v2.29.0</li>
<li><a href="https://github.com/psf/requests/commit/51716c4ef390136b0d4b800ec7665dd5503e64fc"><code>51716c4</code></a> enable the warnings plugin (<a href="https://redirect.github.com/psf/requests/issues/6416">#6416</a>)</li>
<li><a href="https://github.com/psf/requests/commit/a7da1ab3498b10ec3a3582244c94b2845f8a8e71"><code>a7da1ab</code></a> try on ubuntu 22.04 (<a href="https://redirect.github.com/psf/requests/issues/6418">#6418</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/psf/requests/compare/v2.22.0...v2.31.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 05-22-2023 22:52:58 | 05-22-2023 22:52:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23668). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,667 | closed | Imports in multiline try blocks are not properly ignored when determining the necessary packages for a modeling file | ### System Info
- `transformers` version: 4.29.2
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help?
@sgugger
### Reproduction
The minimal repro is to just call https://github.com/huggingface/transformers/blob/v4.29.2/src/transformers/dynamic_module_utils.py#L118-L134 on a file that has a multiline try import, e.g.
```
try:
from package import function
from package2 import function
except:
pass
```
and run the `get_imports` function on it. The output will be `['package', 'package2']`, when it should be `[]`
### Expected behavior
Imports in multiline try blocks should be ignored when determining what packages a modeling file requires. I believe https://github.com/huggingface/transformers/blob/ba7054533fa455e8b2dd35feb077e0c7aae646b3/src/transformers/dynamic_module_utils.py#L126 just needs to be modified to include `flags=re.MULTILINE | re.DOTALL`. | 05-22-2023 22:09:32 | 05-22-2023 22:09:32 | I'd make the change myself, but I can't tell where this codepath is tested (implicitly everywhere?), so will leave to a dev more familiar with `transformers` dev process and tests.<|||||>This is the right fix indeed. I don't think that function is tested yet, but you can add a new test file in `tests/utils/` named `test_dynamic_module_utils` with your failing test if you want to go the extra mile. |
transformers | 23,666 | closed | Splitting the transformers dependencies | ### Feature request
Right now this Python packages has a lot of dependencies including major DL frameworks like PyTorch, TensorFlow and Jax. This causes some complexity in the downstream packages that use `transformers` e.g. CI/CD environments get gigantic and it likely creates dependency issues e.g.

and

This is a CI exception of a Torch code using enformer-pytorch, which depends on transformers. Although there is nothing using Jax either in the Torch code or in the enformer-pytorch, we have to solve this Jax-related issue now.
I was wondering if you can somehow split the dependencies into groups e.g. `'pip install transformers[jax]'` or `'pip install transformers[pytorch]'`. Let me know what you think.
### Motivation
I think splitting the dependencies into reasonable groups would improve installation or testing of the downstream packages using transformers.
### Your contribution
I am not able to contribute, but I think my suggestions is relatively simple to implement. | 05-22-2023 21:53:05 | 05-22-2023 21:53:05 | This is already the case.<|||||>Oh I didn't know! So `'pip install transformers[torch]'` doesn't install jax or tensorflow?<|||||>No. You may have it in your environment from other installs, but `pip install transformers[torch]` will only install Transformers and its core dependencies (very light) and torch.<|||||>Thanks so much! |
transformers | 23,665 | closed | Metas MMS speech recognition | ### Model description
In their Blogpost they are writing it's 1B sized wav2vec 2 models, so only a new converter script should be needed?
A nice alternative to compare to whisper
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://github.com/facebookresearch/fairseq/tree/main/examples/mms
@sanchit-gandhi
@patrickvonplaten FYI | 05-22-2023 20:33:34 | 05-22-2023 20:33:34 | Interested Model!! Thank @flozi00 to open request.<|||||>👀 https://huggingface.co/models?other=mms<|||||>> :eyes: https://huggingface.co/models?other=mms
Nice! This is the pretrained model.
I am looking to also convert the ASR models; any pointers?
The scripts I am looking at (such as [this very useful boi](https://huggingface.co/HfSpeechUtils/convert_wav2vec2_to_hf/blob/main/run_convert.sh)) require dict and HF config but maybe it is the same configuration as `facebook/wav2vec2-large` :thinking: <|||||>Leeez go - working on the conversion as we speak :-) <|||||>I also made a request in _#23811_. Looking forward to it!<|||||>PR merged.
Also see:
- https://huggingface.co/docs/transformers/main/en/model_doc/mms
- https://github.com/huggingface/transformers/pull/23813
- https://huggingface.co/facebook/mms-1b-all |
transformers | 23,664 | closed | Update all no_trainer with skip_first_batches | # What does this PR do?
This PR updates all `no_trainer` examples to use `skip_first_batches` properly from the `Accelerator`/Accelerate when resuming from a checkpoint
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-22-2023 18:41:42 | 05-22-2023 18:41:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23664). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,663 | closed | TF version compatibility fixes | This PR makes several changes to core methods to support the upcoming 2.13 release of TF and generally futureproof against any other upcoming changes that might happen.
The core source of all of these problems is that Keras has been very mobile inside TensorFlow: Initially we used `tf.python.keras` until they told us this was a deprecated copy of Keras that shouldn't be there, and it got removed. We then switched to `tf.keras`, but now Keras has fully moved into its own library and namespace again. Although this is still mirrored at `tf.keras`, for the newest version of TF we'll just `import keras`.
There are several other related problems where our code assumed that parts of Keras that weren't really part of the public API would stay where they were. Not all of these have caused problems yet (that I know of) but they look very risky to me, and so I made some general fixes. This might surface some hidden bugs!
Fixes #23352 | 05-22-2023 18:37:54 | 05-22-2023 18:37:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I've added a proper framework inference function, put it in `utils/generic` and moved everything over to that. That should catch any edge cases in future, unless one of the frameworks gets renamed entirely :sweat_smile: |
transformers | 23,662 | closed | Enable prompts on the Hub | # What does this PR do?
This PR enables users to share prompt templates for the Agent on the Hub by supporting a repo_id instead of a string for prompts. In terms of API I'm still hesitating between
1. Let the user pass a the prompt template or a repo for both `run_prompt_template` and `chat_prompt_template` which has the cons of a bit of a brittle check (to determine if we have a repo ID or a real prompt) and the repetition of the repo twice for a repo that implements both a `run_prompt_template` and a `chat_prompt_template`
2. Add a new argument `prompts_repo_id` which has the cons of being a new argument in a public API and to add some checks if the repo only implements one of the prompts but not both.
I went with 1 in this PR but curious to have your advice. | 05-22-2023 17:19:20 | 05-22-2023 17:19:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,660 | closed | TransformerEngine FP8 inference | ### Feature request
Hi!
Could anyone please help me with using HuggingFace models (LLaMa [or if LLaMa is difficult, MPT-7b]) with the TransformerEngine TE FP8 inference? We really need the speedup
https://github.com/NVIDIA/TransformerEngine/issues/199
This is a somewhat related issue to this topic.
### Motivation
Faster inference and more specialized tensor operations means less cost and less latency.
### Your contribution
I would really love to test suggestions out as I have temporary access to a H100 cloud GPU.
I am not sufficient in porting the models myself which is why I created this issue.
I really appreciate any help, thank you very much. | 05-22-2023 16:00:47 | 05-22-2023 16:00:47 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,659 | closed | Add PerSAM [bis] | # What does this PR do?
This PR adds support for [PerSAM](https://arxiv.org/abs/2305.03048). Simplification of #23652.
2 optional arguments are introduced:
- attention_similarity
- target_embedding.
Those are used by PerSAM, a method which enables SAM to be quickly adapted to new concepts. | 05-22-2023 15:43:56 | 05-22-2023 15:43:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,658 | closed | Update workflow files | # What does this PR do?
Same as #23465 but for daily CI and push CI. | 05-22-2023 15:25:36 | 05-22-2023 15:25:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23658). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,657 | closed | Muellerzr fix deepspeed | # What does this PR do?
This PR fixes the slow tests by avoiding a recursion issue with `self.n_gpus`. It has `self.n_gpu` first check to see if we've assigned/spawned `self._n_gpu`, and if not then call setup_devices. This let's us maintain the clean `ParallelMode.DISTRIBUTED` check, without needing the complex check block in `setup_devices`.
I've confirmed DeepSpeed tests pass.
If we'd rather not do this, then the logic at https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1732-L1735 would need to be repeated (which can be done, just a bit messy)
Fixes # (issue)
Failing deepspeed tests on nightly
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 05-22-2023 14:13:03 | 05-22-2023 14:13:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23657). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,656 | closed | Fix SAM tests and use smaller checkpoints | This PR moves all the SAM tests for both PT and TF to the `sam-vit-base` instead of the `sam-vit-huge` (!) checkpoint they were using before. The huge checkpoint made the tests quite slow and caused OOM issues in TensorFlow.
It also fixes an issue with the `check_pt_tf_equivalence` test in the PyTorch tests, which should now pass correctly. | 05-22-2023 13:40:41 | 05-22-2023 13:40:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Ah! That test was also sneakily loading `sam-vit-huge`. I've fixed it, it should work fine now.<|||||>All pass now 🚀 |
transformers | 23,655 | closed | Add EnCodec model | # What does this PR do?
Adds the EnCodec neural codec from the [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) paper.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-22-2023 13:16:23 | 05-22-2023 13:16:23 | What still needs to be done at this point:
- remove `assert`s
- remove `einops` stuff: `rearrange`, `repeat`
- `QuantizedResult` -> `QuantizerOutput`
- I don't like the names in the `EncodecOutput` etc classes, so left some TODO items for renaming them
- rename arguments in config file (see TODO items in that file)
- add doc comments for all arguments in config file
- clean up `_linear_overlap_add`, the padding functions, and `_kmeans`
- improve variable names in the modeling code
- get rid of `activation_params` from config
- fix issues with padding on the 48khz model, since the last (incomplete) frame is different than with the original model
- remove all training code; I suggest we throw an exception from all `forward` methods
- doc strings in modeling code
- MDX file
- and probably more stuff
Most of these are small items.<|||||>TODO list:
**Done**
- <s> remove asserts </s>
- <s> remove einops stuff: rearrange, repeat </s>
- <s> QuantizedResult -> QuantizerOutput </s>
- <s> I don't like the names in the EncodecOutput etc classes, so left some TODO items for renaming them </s>
- <s> clean up _linear_overlap_add, the padding functions, and _kmeans </s>
- <s> remove all training code; I suggest we throw an exception from all forward methods </s>
- <s> improve variable names in the modeling code </s>
- <s> get rid of activation_params from config </s>
- <s> rename arguments in config file (see TODO items in that file) </s>
- <s> add doc comments for all arguments in config file </s>
**Final TODO**:
- doc strings in modeling code
- MDX file
- fix issues with padding on the 48khz model, since the last (incomplete) frame is different than with the original model<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review. Before asking for a final review:
- [x] Finish designing the modelling integration tests.
- [x] Finish testing all edge cases for the feature extractor.
Working on this! <|||||>Ok, addressed everything! feel free to merge if it's good with you @amyeroberts |
transformers | 23,654 | closed | QuestionAnsweringPipeline is never able to truncate the question | ### System Info
I was trying out feeding some in-context examples on the question side of the pipeline, and based on the design of the QuestionAnsweringPipeline it's basically impossible to have the truncation happen on the question side rather than the context.
`question_first = bool(self.tokenizer.padding_side == "right")` in the [docs](https://huggingface.co/transformers/v4.6.0/_modules/transformers/pipelines/question_answering.html) makes sure that, whatever you try, it's not possible to get actual question truncation (unless you basically write out a whole new pipeline).
I think this should be easy to fix if it were possible to "force" truncation=longest.
tagging @Narsil as it's pipeline related.
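For reference, a rough workaround until the pipeline supports this, using the tokenizer and model directly so the question side can be truncated (the model name, lengths and example strings are illustrative):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/roberta-base-squad2"  # any extractive QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Very long question with in-context examples ..."
context = "The context we never want to truncate."

inputs = tokenizer(
    question,
    context,
    truncation="only_first",  # truncate the first sequence (the question), not the context
    max_length=384,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```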
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Setup whatever QAPipeline
1. Send a long question together with whatever context that would require truncation.
2. See: Exception: Truncation error: Sequence to truncate too short to respect the provided max_length
### Expected behavior
It should be possible to pass very long questions, at least when specifying truncation=longest (this just gets overridden now). | 05-22-2023 12:34:47 | 05-22-2023 12:34:47 | Are you sure it's OK to truncate questions? For the actual results of the model?
We can definitely add more control over the truncation process, currently it's quite hardcoded because of all the Squad specific controls: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L409
We could switch to `tokenizer_kwargs` to allow any parameters to be passed.
@sgugger for confirmation, is it a good idea?<|||||>I don't think it is a good idea. This seems like a specific use-case for which you can use the tokenizer and model directly, instead of using the `pipeline`.<|||||>Understandable 👍 |
transformers | 23,653 | closed | RWKV - loss.backward() failed | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Below is the code from the official example: https://huggingface.co/docs/transformers/main/en/model_doc/rwkv#transformers.RwkvForCausalLM
```
import torch
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
```
2. I only added this line `loss.backward()` to run but it failed:
```
import torch
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
loss.backward()
```
3. Error messages:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-7-ffc1d58be8b3>](https://localhost:8080/#) in <cell line: 10>()
8 outputs = model(**inputs, labels=inputs["input_ids"])
9 loss = outputs.loss
---> 10 loss.backward()
1 frames
[/usr/local/lib/python3.10/dist-packages/torch/_tensor.py](https://localhost:8080/#) in backward(self, gradient, retain_graph, create_graph, inputs)
485 inputs=inputs,
486 )
--> 487 torch.autograd.backward(
488 self, gradient, retain_graph, create_graph, inputs=inputs
489 )
[/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py](https://localhost:8080/#) in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
198 # some Python versions print out the first line of a multi-line function
199 # calls in the traceback and some print out the last line
--> 200 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
201 tensors, grad_tensors_, retain_graph, create_graph, inputs,
202 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 768]], which is output 0 of AsStridedBackward0, is at version 12; expected version 11 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
### Expected behavior
loss.backward() should work out. | 05-22-2023 12:21:20 | 05-22-2023 12:21:20 | +1 on this issue. Any update?<|||||>Check out [this reply](https://github.com/huggingface/transformers/pull/22797#issuecomment-1546740612). I guess it's the same issue, though I haven't looked into it.<|||||>> Check out [this reply](https://github.com/huggingface/transformers/pull/22797#issuecomment-1546740612). I guess it's the same issue, though I haven't looked into it.
Thanks for the help. I still have the same bug as before.
```
File "modeling_rwkv.py", line 783, in forward
hidden_states, state, attentions = block(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "modeling_rwkv.py", line 510, in forward
attention, state = self.attention(self.ln1(hidden), state=state, use_cache=use_cache)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "modeling_rwkv.py", line 436, in forward
rwkv, layer_state = rwkv_linear_attention(
File "modeling_rwkv.py", line 377, in rwkv_linear_attention
return rwkv_linear_attention_cpu(time_decay, time_first, key, value, state=state, return_state=return_state)
File "modeling_rwkv.py", line 361, in rwkv_linear_attention_cpu
den_state = e1 * den_state + e2
```
Any idea about this?
<|||||>I'm experiencing loss.backward() failure when using custom cuda kernel. In other words, whenever the setup branches towards the else path below:
```
if rwkv_cuda_kernel is None or no_cuda or one_token:
return rwkv_linear_attention_cpu(time_decay, time_first, key, value, state=state, return_state=return_state)
else:
return RwkvLinearAttention.apply(time_decay, time_first, key, value, state, return_state)
```
loss.backward() throws an error: "TypeError: backward() takes 2 positional arguments but 3 were given".
When rwkv_linear_attention_cpu is called instead, things work out fine.
Any ideas on what might contribute to this?<|||||>Pinging both @sgugger and @younesbelkada as they ported the model <|||||>I can confirm the backward fails both on CPU (first error) and on GPU (last error). Diving into this.<|||||>On CPU a simple workaround is to set `model.train()` (which you would need to do for real training anyway 😅 ); the bug comes from gradients of the state. I'll try to dig more, but it doesn't sound super urgent.
For GPU the fix should be in a PR later today/tomorrow morning.<|||||>GPU fix was merged in #23774 <|||||>Thanks! I have verified that it is working, and the fine-tuning process is also functioning properly after this issue has been fixed. |
transformers | 23,652 | closed | Add PerSAM | # What does this PR do?
This PR adds the PerSAM model.
Question: when you do:
```
from transformers import PerSamModel
model = PerSamModel.from_pretrained("facebook/sam-vit-huge")
```
you get this warning:
```
You are using a model of type sam to instantiate a model of type persam. This is not supported for all configurations of models and can yield errors.
```
I was wondering whether we could suppress this warning. PerSAM uses the exact same weights as the original SAM model; it just modifies the forward pass with 2 additional arguments. Currently the model_type is set to "persam" in `PerSamConfig`.
| 05-22-2023 12:13:42 | 05-22-2023 12:13:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, will close this PR in favor of modifying `modeling_sam.py`. |
transformers | 23,651 | closed | How to use FSDP or DDP with Seq2SeqTrainer? | ### System Info
```shell
python = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
transformer version = '4.28.1'
torch version = '2.0.0+cu117'
GPUs = 2 * GTX 1080 Ti, each one 11G RAM
cuda information = Cuda compilation tools, release 11.5, V11.5.119, Build cuda_11.5.r11.5/compiler.30672275_0
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have 2 GTX 1080 Ti GPUs (11G RAM each) and I want to fine-tune the openai/whisper-small model, which is one of the Hugging Face Transformers models. Also, I want to use Fully Sharded Data Parallel (FSDP) via Seq2SeqTrainer, but I got an error.
**Here is my code related to data:**
1.
```
def prepare_dataset(batch):
batch["input_features"] = feature_extractor(batch["audio"], sampling_rate=16000).input_features[0]
batch["labels"] = tokenizer(batch["text"]).input_ids
batch["input_features"] = torch.tensor(batch["input_features"])
batch["labels"] = torch.tensor(batch["labels"])
return batch
```
2.
```
train_ds = train_ds.map(prepare_dataset, remove_columns=train_ds.column_names)
val_ds = val_ds.map(prepare_dataset, remove_columns=val_ds.column_names)
```
**This is how I build the model and its parameters:**
1.
```
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small",
activation_dropout=0.1,
attention_dropout=0.1,
dropout=0.1)
```
2.
```
os.environ['RANK'] = '0'
os.environ['WORLD_SIZE'] = '2'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
```
3.
```
training_args = Seq2SeqTrainingArguments(
output_dir="/home/whisper_small_16_2_outputs/",
per_device_train_batch_size=8,
gradient_accumulation_steps=2,
learning_rate=1e-5,
warmup_steps=936,
fp16=True,
local_rank=0,
save_strategy='steps',
evaluation_strategy="steps",
gradient_checkpointing=True,
predict_with_generate=True,
generation_max_length=210,
save_steps=600,
eval_steps=300,
logging_steps=300,
num_train_epochs=30,
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
save_total_limit=5,
fsdp='full_shard',
fsdp_config='/home/fsdp_config.json'
)
```
4.
```
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=train_ds,
eval_dataset=val_ds,
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
```
5. **the fsdp_config json file:**
```json
{
    "fsdp_config": {
        "fsdp_backward_prefetch_policy": "backward_pre",
        "fsdp_forward_prefetch": false,
        "limit_all_gathers": true,
        "xla": false
    }
}
```
**And this is the error I've got:**
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 training_args = Seq2SeqTrainingArguments(
2 output_dir="/home/whisper_small_16_2_outputs/",
3 per_device_train_batch_size=8,
4 gradient_accumulation_steps=2,
5 learning_rate=1e-5,
6 warmup_steps=936,
7 fp16=True,
8 local_rank=0,
9 save_strategy='steps',
10 evaluation_strategy="steps",
11 gradient_checkpointing=True,
12 predict_with_generate=True,
13 generation_max_length=210,
14 save_steps=600,
15 eval_steps=300,
16 logging_steps=300,
17 num_train_epochs=30,
18 load_best_model_at_end=True,
19 metric_for_best_model="wer",
20 greater_is_better=False,
21 save_total_limit=5,
22 fsdp='full_shard',
23 fsdp_config='/home/fsdp_config.json'
24 )
File <string>:115, in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, xpu_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, dataloader_pin_memory, skip_memory_metrics, use_legacy_prediction_loop, push_to_hub, resume_from_checkpoint, hub_model_id, hub_strategy, hub_token, hub_private_repo, gradient_checkpointing, include_inputs_for_metrics, fp16_backend, push_to_hub_model_id, push_to_hub_organization, push_to_hub_token, mp_parameters, auto_find_batch_size, full_determinism, torchdynamo, ray_scope, ddp_timeout, torch_compile, torch_compile_backend, torch_compile_mode, sortish_sampler, predict_with_generate, generation_max_length, generation_num_beams, generation_config)
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1259, in TrainingArguments.__post_init__(self)
1253 if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
1254 raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
1256 if (
1257 self.framework == "pt"
1258 and is_torch_available()
-> 1259 and (self.device.type != "cuda")
1260 and (get_xla_device_type(self.device) != "GPU")
1261 and (self.fp16 or self.fp16_full_eval)
1262 ):
1263 raise ValueError(
1264 "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
1265 " (`--fp16_full_eval`) can only be used on CUDA devices."
1266 )
1268 if (
1269 self.framework == "pt"
1270 and is_torch_available()
(...)
1275 and (self.bf16 or self.bf16_full_eval)
1276 ):
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1694, in TrainingArguments.device(self)
1690 """
1691 The device used by this process.
1692 """
1693 requires_backends(self, ["torch"])
-> 1694 return self._setup_devices
File ~/.local/lib/python3.10/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
52 cached = getattr(obj, attr, None)
53 if cached is None:
---> 54 cached = self.fget(obj)
55 setattr(obj, attr, cached)
56 return cached
File ~/.local/lib/python3.10/site-packages/transformers/training_args.py:1679, in TrainingArguments._setup_devices(self)
1677 torch.distributed.init_process_group(backend=self.xpu_backend, timeout=self.ddp_timeout_delta)
1678 else:
-> 1679 torch.distributed.init_process_group(backend="nccl", timeout=self.ddp_timeout_delta)
1680 device = torch.device("cuda", self.local_rank)
1681 self._n_gpu = 1
File ~/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:920, in init_process_group(backend, init_method, timeout, world_size, rank, store, group_name, pg_options)
916 barrier()
917 else:
918 # Use store based barrier here since barrier() used a bunch of
919 # default devices and messes up NCCL internal state.
--> 920 _store_based_barrier(rank, store, timeout)
File ~/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:459, in _store_based_barrier(rank, store, timeout)
456 log_time = time.time()
458 if timedelta(seconds=(time.time() - start)) > timeout:
--> 459 raise RuntimeError(
460 "Timed out initializing process group in store based barrier on "
461 "rank: {}, for key: {} (world_size={}, worker_count={}, timeout={})".format(
462 rank, store_key, world_size, worker_count, timeout
463 )
464 )
466 logger.info(
467 f"Rank {rank}: Completed store-based barrier for key:{store_key} with {world_size} nodes."
468 )
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=2, worker_count=1, timeout=0:30:00)
```
### Expected behavior
```shell
I just want to be able to use a bigger batch size for fine-tuning the Whisper-small model with the GPUs I've mentioned. After a little research, I found that I had to use FSDP or DDP, but I just got errors! Can anyone help me?
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine. | 05-22-2023 12:11:38 | 05-22-2023 12:11:38 | You cannot set the `local_rank` variable yourself in the training arguments. This is done when you launch your script in a distributed fashion with `torchrun`.<|||||>I removed `local_rank` from the training arguments and launched the training script with `torchrun train_script.py`, but I got that error again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,650 | closed | Fix accelerate logger bug | # What does this PR do?
Fixes a slow test that requires accelerate that is [currently failing](https://github.com/huggingface/transformers/actions/runs/5035387610/jobs/9030918234) with the following error:
```bash
RuntimeError: You must initialize the accelerate state by calling either `PartialState()` or `Accelerator()` before using the logging utility.
```
I suspect this comes from https://github.com/huggingface/accelerate/pull/1446
The fix seems to be to first initialize a dummy accelerator. However, I couldn't reproduce the issue with a simpler snippet. It seems to appear only when a model is created together with CPU offloading + multi GPU.
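For context, a minimal sketch of what initializing the state before logging could look like (this only illustrates the accelerate logging requirement, it is not the fix applied in this PR):
```python
from accelerate import PartialState
from accelerate.logging import get_logger

PartialState()  # or Accelerator(); either one initializes the shared accelerate state
logger = get_logger(__name__)
logger.info("The accelerate state is initialized, so the logging utility can be used.")
```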
I tried this snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": 0,
"transformer.h.0": "cpu",
"transformer.h.1": "cpu",
"transformer.h.2": 0,
"transformer.h.3": 0,
"transformer.h.4": 0,
"transformer.h.5": 0,
"transformer.h.6": 0,
"transformer.h.7": 0,
"transformer.h.8": 0,
"transformer.h.9": 1,
"transformer.h.10": 0,
"transformer.h.11": 1,
"transformer.h.12": 0,
"transformer.h.13": 0,
"transformer.h.14": 1,
"transformer.h.15": 0,
"transformer.h.16": 0,
"transformer.h.17": 1,
"transformer.h.18": 1,
"transformer.h.19": 0,
"transformer.h.20": 1,
"transformer.h.21": 1,
"transformer.h.22": 0,
"transformer.h.23": 0,
"transformer.ln_f": 1,
}
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "hello"
encoded_input = tokenizer(input_text, return_tensors="pt")
# Check the exactness of the results
output_parallel = model.generate(input_ids=encoded_input["input_ids"].to(0), max_new_tokens=10)
```
But it didn't raise any error; strangely, the error is only raised when the test is run.
I would appreciate any insight @muellerzr @sgugger 🙏 | 05-22-2023 11:43:48 | 05-22-2023 11:43:48 | I'll try looking into it more; however, also note that for logging you should use the `PartialState`, not the accelerator :)<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers | 23,648 | open | Unexpected padding behaviour of `ClapFeatureExtractor` | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-4.15.0-204-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Adding `padding=True` argument to `ClapFeatureExtractor` changes the padding strategy from `repeatpad`, which is the default, to constant padding ([code](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/feature_extraction_clap.py#L248)).
Code adapted from: https://huggingface.co/docs/transformers/model_doc/clap#transformers.ClapModel.forward.example
```python
from datasets import load_dataset
from transformers import AutoProcessor
# load data
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
# load data processor
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
# pre-process data
inputs1 = processor.feature_extractor(audio_sample, return_tensors="pt")
inputs2 = processor.feature_extractor(audio_sample, return_tensors="pt", padding=True)
print((inputs1["input_features"] - inputs2["input_features"]).max())
# Output: tensor(119.4260)
```
This becomes a problem, for instance, when using `ClapProcessor`. `ClapProcessor` shares `kwargs` between the `tokenizer` and the `feature_extractor` ([code](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/processing_clap.py#LL87C9-L87C9)). When using text inputs of different length, you need to pass `padding=True` argument to the `tokenizer`, but doing so changes the behaviour of the `feature_extractor`.
### Expected behavior
1. Either don't allow the `padding=True` argument. Assert its value to be one of the allowed values - `repeatpad`, `repeat`, and `pad` in the case of `ClapFeatureExtractor` (a rough sketch of this is shown right after this list).
2. Or, use the default padding strategy if `padding=True`.
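A hypothetical sketch of option 1 (this is illustrative only, not the actual `ClapFeatureExtractor` code): reject unknown values instead of silently falling back to constant padding.
```python
def _check_clap_padding(padding):
    # Hypothetical helper: only the strategies the feature extractor actually implements are allowed.
    allowed = ("repeat", "repeatpad", "pad")
    if padding not in allowed:
        raise ValueError(f"`padding` must be one of {allowed}, got {padding!r}")
```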
As for sharing `kwargs`, I don't think that's a good idea. Would having two arguments, `tokenizer_kwargs` and `feature_extractor_kwargs` be better? | 05-22-2023 09:32:42 | 05-22-2023 09:32:42 | cc @sanchit-gandhi <|||||>Thanks for the clean write-up @anmol-wiai!
I think splitting arguments makes sense for the processor classes (send one set of args to the feature extractor, another set of args to the tokeniser). Previously, I overwrote the most common feature extractor arg to fall out of the shared arg logic: https://github.com/huggingface/transformers/pull/20022 But clearly we need something robust to handle the different sets of possible input args for the feature extractor and tokeniser respectively
Probably what we need to do here to prevent breaking changes is to have three input arguments (a rough sketch follows the list):
1. `feature_extractor_kwargs` -> get sent to the feature extractor
2. `tokenizer_kwargs` -> get sent to the tokeniser
3. `kwargs` -> get sent to both (needed to prevent a breaking change)
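A rough sketch of how the three arguments could be combined (argument and function names are illustrative, not a final API):
```python
def call_processor(tokenizer, feature_extractor, text=None, audios=None,
                   tokenizer_kwargs=None, feature_extractor_kwargs=None, **kwargs):
    # Shared kwargs keep backwards compatibility; the dedicated dicts take precedence.
    tokenizer_kwargs = {**kwargs, **(tokenizer_kwargs or {})}
    feature_extractor_kwargs = {**kwargs, **(feature_extractor_kwargs or {})}
    encoding = tokenizer(text, **tokenizer_kwargs) if text is not None else None
    audio_features = feature_extractor(audios, **feature_extractor_kwargs) if audios is not None else None
    return encoding, audio_features
```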
WDYT about this design @amyeroberts @anmol-wiai?<|||||>Hi @sanchit-gandhi,
> Probably what we need to do here to prevent breaking changes is have three input arguments:
>
> feature_extractor_kwargs -> get sent to the feature extractor
> tokenizer_kwargs -> get sent to the tokeniser
> kwargs -> get sent to both (needed to prevent a breaking change)
>
I think this is good. One small thing is that `kwargs` can be a little ambiguous.
If I have two functions like this:
```python
# version 1
def run_func1_and_func2_v1(func1_arg, func2_arg, **kwargs):
...
# version 2
def run_func1_and_func2_v2(func1_arg, func2_arg, func1_kwargs, func2_kwargs, **kwargs):
...
```
While it's expected that in version 1, `kwargs` could be passed on to subsequent function calls inside `run_func1_and_func2_v1`, is it as obvious in version 2 as well, given that `run_func1_and_func2_v2` already has `func1_kwargs` and `func2_kwargs` arguments?
Would renaming `kwargs` to something like `shared_kwargs`, or `common_kwargs` be more clear? In any case, you should consider deprecating `kwargs`.
---
Another issue that I flagged is about what should `ClapFeatureExtractor` do when it is passed `padding=True` argument. (Sorry for raising two things in a single issue. I got a little confused. I can open a separate issue if that helps.)
If you look at how `padding` argument works for `tokenizer` (code around [this](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/tokenization_utils_base.py#LL2366C1-L2366C1) line), it looks something like:
```python
if padding is not False:
if padding is True:
# use default strategy - LONGEST
...
elif not isinstance(padding, PaddingStrategy):
# note that this will raise an error if you pass an non-allowed padding - like "PAD_WITH_HUNDRED"
padding_strategy = PaddingStrategy(padding)
elif isinstance(padding, PaddingStrategy):
# do something
...
else:
# do not pad
...
```
In contrast to that, the `ClapFeatureExtractor`'s `padding` argument is used like this (code around [this](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/clap/feature_extraction_clap.py#LL242C20-L242C27) line):
```python
if padding == "repeat":
# do something
...
if padding == "repeatpad": # default value of padding = "repeatpad"
# do something
...
# zero pad otherwise - whatever the value of padding is!
waveform = np.pad(waveform, (0, max_length - waveform.shape[0]), mode="constant", constant_values=0)
```
Try this code snippet for example:
```python
from datasets import load_dataset
from transformers import AutoProcessor
# load data
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
# load data processor
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
# pre-process data
inputs1 = processor.feature_extractor(audio_sample, return_tensors="pt", padding=True)
inputs2 = processor.feature_extractor(audio_sample, return_tensors="pt", padding="PAD_WITH_HUNDRED")
print((inputs1["input_features"] != inputs2["input_features"]).sum())
# Output: tensor(0)
```
I think this behaviour is unexpected and this should be made consistent with `tokenizer`.<|||||>> Would renaming `kwargs` to something like `shared_kwargs`, or `common_kwargs` be more clear?
I think `kwargs` would be fine if we document the function properly, but don't feel too strongly about keeping the name so fine with switching to `shared_kwargs` if we think it adds clarity over a proper docstring
> I think this behaviour is unexpected and this should be made consistent with `tokenizer`
Thanks for the clear explanation - the code snippets you've provided are very clean! I think so too - the CLAP processor functionality is a bit unexpected here. Shall we tackle this in a separate PR to more closely follow the `tokenizer` logic?<|||||>> fine with switching to `shared_kwargs` if we think it adds clarity over a proper docstring
I think it does but I am okay with whatever you prefer.
> Shall we tackle this in a separate PR to more closely follow the tokenizer logic?
Yes, these two things are independent and can be dealt with in separate PRs.<|||||>Awesome @anmol-wiai! Would you like to open a PR to fix one or both of these issues? Happy to guide you through the integration process, sounds like you have a good idea of what needs to be fixed with getting the CLAP processor consistent with the tokenizer 👍<|||||>Hi @sanchit-gandhi, I'm currently a little busy for the next two weeks. However, I can work on this afterwards if that timeframe suits you.<|||||>Sounds perfect - feel free to tag me in a PR whenever you get the chance to look at this and I'll get you a review! More than happy to answer any questions / queries here or on the PR, so just ping me if you get stuck 🤗<|||||>Still think this would be a nice way of clarifying the CLAP feature extractor kwargs! It's all yours @anmol-wiai if you're still up for it!<|||||>Hey @sanchit-gandhi! Sorry, I have been a little busy lately. I was planning to work on it this weekend. Let me create a PR and then I will reach out to you. |
transformers | 23,647 | open | Wandb sweeps integraition: custom objective | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.14.0-162.6.1.el9_1.0.1.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu117 (True) -- using torch in my experiments
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.10 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed [this guide](https://huggingface.co/docs/transformers/hpo_train) to use wandb sweeps with the trainer by modifying the summarization scripts slightly. Below are the hp_space, model_init, and hyperparameter_search commands that I use.
Most notably, the objective is to maximize the sum of sari and rougeLsum.
```python
def wandb_hp_space(trial):
return {
"method": "bayes",
"metric": {
"name": "objective",
"goal": "minimize" if hyperopt_args.hparam_optimize_for_loss else "maximize"
},
"parameters": {
"num_train_epochs": {"min": hyperopt_args.hparam_epoch_min, "max": hyperopt_args.hparam_epoch_max}
},
"run_cap": hyperopt_args.hparam_max_trials
}
def model_init(trial):
return AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
def hparam_objective(metrics: Dict[str, float]) -> float:
metrics = copy.deepcopy(metrics)
if hyperopt_args.hparam_optimize_for_loss:
return metrics["eval_loss"]
return metrics["eval_rougeLsum"] + metrics["eval_sari"]
best_trial = trainer.hyperparameter_search(
compute_objective=hparam_objective,
backend="wandb",
# I think that this is only used to set the column in the sweep chart but does not mean that we use
# this metric only for optimization. That is what the hparam_objective is for?
metric="eval/sari",
hp_space=wandb_hp_space,
n_trials=hyperopt_args.hparam_max_trials,
direction="minimize" if hyperopt_args.hparam_optimize_for_loss else "maximize",
)
```
The problem is that when I look at the wandb interface at the generated sweep config, it looks like this:
```yaml
method: bayes
metric:
goal: maximize
name: eval/sari
parameters:
num_train_epochs:
distribution: int_uniform
max: 30
min: 2
run_cap: 16
```
So the generated sweep config includes `eval/sari` as the metric name because I passed it in `hyperparameter_search`. But as you can read in the comment, I thought this was only for the wandb visualization, but now I am not so sure. When I leave out the `metric` keyword (as in the example), wandb seems to fall back to `eval/loss`.
```yaml
method: bayes
metric:
goal: maximize
name: eval/loss
parameters:
num_train_epochs:
distribution: int_uniform
max: 30
min: 2
run_cap: 16
```
My worry here is the disconnect between the generated sweep config and the custom objective function. What is wandb optimizing now? My custom objective function (sari+rougeLsum), or the `metric` that is passed to `hyperparameter_search` together with `direction`?
### Expected behavior
More clarity about the relationship between a wandb sweep configuration and how the trainer uses wandb as a backend, and the importance of the `metric` and `direction` arguments vs. `compute_objective`. | 05-22-2023 09:11:55 | 05-22-2023 09:11:55 | We are not maintaining the Wandb integration, so you should ping the Wandb team here :-) <|||||>Hey @BramVanroy, thanks for raising this issue. Here's my response off the top of my head:
Scenario 1:
> the objective is to maximize the sum of sari and rougeLsum
Are you logging this sum to wandb? Say you are logging it to wandb under the name `eval/sari_rougelsum`; then passing `metric: eval/sari_rougelsum` should work.
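For instance, a hypothetical sketch of exposing the sum as its own logged metric (`compute_sari` and `compute_rougelsum` are stand-ins for however the original script computes them, not real helpers from this thread):
```python
def compute_metrics(eval_preds):
    sari = compute_sari(eval_preds)             # assumed helper
    rouge_lsum = compute_rougelsum(eval_preds)  # assumed helper
    # The Trainer prefixes eval metrics with `eval_` and the wandb integration logs them as `eval/...`,
    # so this shows up as `eval/sari_rougelsum` in the sweep dashboard.
    return {"sari": sari, "rougeLsum": rouge_lsum, "sari_rougelsum": sari + rouge_lsum}
```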
Scenario 2:
You are logging the two metrics separately. Sweeps doesn't support multi-objective optimization today. If you are using grid or random search, you can use the parallel coordinate plot generated in the sweeps dashboard to find a set of hyperparameters that can optimize two objectives:
In this proxy example, say `eval/val_accuracy` and `eval_loss` are the two objectives. I can then use the parallel coordinate plot to find a range of hparams that maximise `val_accuracy` and minimise `eval_loss`.
<img width="1355" alt="image" src="https://github.com/huggingface/transformers/assets/31141479/1b8b62a4-4be0-4540-b331-f08801708e67">
> More clarity about the relationship between a wandb sweep configuration and how the trainer uses wandb as a backend, and the importance of metric and direction arguments vs. compute_objective.
I have shared this request internally. Thanks.<|||||>Hi @ayulockin, thanks for the quick reply!
> Are you logging this sum to wandb? Say you are logging it to wandb with name eval/sari_rougelsum then passing metric: eval/sari_rougelsum should work.
Can you tell me how to log an extra metric in the trainer or directly in this training script (I'm using a modified version of this): https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py
> Sweeps doesn't support multi-objective optimization today.
Does that mean that the custom objective function that I pass to the huggingface trainer is NOT used by `wandb`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ayulockin Any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Kind rmeinde reminder :) Bump @ayulockin |
transformers | 23,646 | closed | Remove erroneous `img` closing tag | # What does this PR do?
Removes erroneous closing tag for the "support" image tag for the Portuguese documentation (https://huggingface.co/docs/transformers/v4.29.1/pt/index)
Currently, there is an extra "</img>" after the image, which is causing issues with doc-builder

<!-- Remove if not applicable -->
Fixes issue in https://github.com/huggingface/transformers/pull/23625. Doc builder logs:
```bash
[vite-plugin-svelte] /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241 <img> is a void element and cannot have children, or a closing tag
file: /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241
91 |
92 | <a target="_blank" href="https://huggingface.co/support">
93 | <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"></img>
^
94 | </a>
95 | <h2 class="relative group">
> /tmp/tmpem9ikhqv/kit/src/routes/index.mdx:93:241 <img> is a void element and cannot have children, or a closing tag
at error (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17691:19)
at Parser$1.error (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17767:9)
at tag (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:16765:20)
at new Parser$1 (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17726:21)
at parse$3 (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:17858:20)
at compile (file:///tmp/tmpem9ikhqv/kit/node_modules/svelte/compiler.mjs:31871:17)
at compileSvelte2 (file:///tmp/tmpem9ikhqv/kit/node_modules/@sveltejs/vite-plugin-svelte/dist/index.js:319:20)
at async Object.transform (file:///tmp/tmpem9ikhqv/kit/node_modules/@sveltejs/vite-plugin-svelte/dist/index.js:1602:23)
at async transform (/tmp/tmpem9ikhqv/kit/node_modules/rollup/dist/shared/rollup.js:21965:16)
at async ModuleLoader.addModuleSource (/tmp/tmpem9ikhqv/kit/node_modules/rollup/dist/shared/rollup.js:22191:30)
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @stevhliu, @MKhalusova | 05-22-2023 06:34:25 | 05-22-2023 06:34:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,645 | closed | Add support for non-rust implemented tokenization for `__getitem__` m… | …ethod.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-22-2023 04:53:43 | 05-22-2023 04:53:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23645). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @jacklanda, thanks for opening this PR!
Could you add some more detail to the PR description about the issue this is addressing or feature this is adding?
cc @younesbelkada @ArthurZucker <|||||>> Hi @jacklanda, thanks for opening this PR!
>
> Could you add some more detail to the PR description about the issue this is addressing or feature this is adding?
>
> cc @younesbelkada @ArthurZucker
Hi @amyeroberts, this PR is going to add support for the usage scenario of "getting a slice from the batch-tokenized sequences".
Without this PR, it seems to raise `KeyError` with the following message `KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'`
P.S. The above scenario can be reproduced by using some newly uploaded models that do not support Rust-implemented tokenization, such as `fnlp/moss-moon-003-sft`. We can also run an example script to reproduce this issue:
```python3
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)
tok.add_special_tokens({"pad_token": "[PAD]"})
texts = ["Today is a good day!", "It's a good idea!", "How's going?"]
batch_tok = tok(texts, padding=True)
print(batch_tok[0:3]) # report `KeyError` here
```
All in all, I think it seems useful to implement the `__getitem__` method on the Python side as well :)<|||||>Why does this auto pipeline always fail? <|||||>@jacklanda By 'auto pipeline' are you referring to the CI test suite? If so, it seems there are two main issues causing failing CI runs:
* Quality style check. Running `make style` and pushing the changes should resolve these.
* Unexpected change in behaviour in one of our custom tokenizers. It seems that some of the tests for `JumanppTokenizer` are now failing as a result of the changes in this PR.
For the CI runs, you should be able to click on `Details` for each of the CI runs to see the full output including error messages / failed test reports. Let us know if this isn't working for you.
<|||||>>
Yes, I mean `CI test suite`.
- Where to run `make style`?
- How to fix this? Should I reopen another PR for this commit?
Thanks!<|||||>@jacklanda
> Where to run make style?
In the top level of your local `transformers` fork. This and more information about how to contribute to the library can be found in the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests).
> How to fix this? Should I reopen another PR for this commit?
The changes should be part of this PR and pushed to this branch. However, I can see that this PR has been opened on the `main` branch of your fork, and not a new feature branch. For this PR we can still merge the changes, however if you wish to contribute again to transformers using this fork it may become tricky to manage conflicts. I would suggest deleting this branch once this PR is merged, fetching `main` from the parent repository and then working from feature branches. Again, see the contributor guidelines for details on typical workflows for PRs. |
transformers | 23,644 | closed | save all custom files in the meanwhile | ### Feature request
Many models in your huggingface hub have their custom files, like [mpt](https://huggingface.co/mosaicml/mpt-7b-instruct). However, your `save_pretrained` func can't save them all. Is it possible to save all of them at the same time?
### Motivation
It's pretty useful when you fine-tune the model and want to save it in a new directory.
### Your contribution
no | 05-22-2023 04:43:28 | 05-22-2023 04:43:28 | Hi @JaheimLee, thanks for raising this issue.
Which files specifically are being referred to when you say "custom files"? <|||||>> Hi @JaheimLee, thanks for raising this issue.
>
> Which files specifically are being referred to when you say "custom files"?
Like the model files which are not merged into transformers. For example:
https://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/configuration_mpt.py
https://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/adapt_tokenizer.py
https://huggingface.co/mosaicml/mpt-7b-instruct/blob/main/modeling_mpt.py
...<|||||>The `save_pretrained` function indicates a reference to the original repo so that you get the updates they push there automatically. You can still use your model with `from_pretrained` and push it to the Hub, and it will always work and keep the latest code version from their side.
If you really want the modeling files to make changes, you can find them all in the cache: `~/.cache/huggingface/hub/models--mosaicml--mpt-7b-instruct`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,643 | closed | [wip: testing doc-builder] | # DO NOT MERGE
Testing https://github.com/huggingface/doc-builder/pull/373
| 05-22-2023 04:10:52 | 05-22-2023 04:10:52 | Unfortunately trying to use a custom commit hash just skips the doc-building (since it can't find it). Will wait for @mishig25 to update his branch :) <|||||>Testing continued at https://github.com/huggingface/transformers/pull/23625 |
transformers | 23,642 | closed | Prediction code snippet for graphormer on graph classification tasks. | ### System Info
Kindly share the code snippet of prediction for graphormer model.
### Who can help?
cc @clefourrier
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1) Train the model for ogl data.
2) Using trainer.predict to get the prediction dataset results.
3) Unable to write common prediction code for validation dataset without trainer.predict
4) This requires trainer.train to get completed every time.
5) Kindly share the snippet similar to normal text classification and other tasks.
### Expected behavior
Prediction code snippet for graphormer on graph classification tasks. | 05-22-2023 02:07:02 | 05-22-2023 02:07:02 | Hi, could you share the logs that you got at step 3?<|||||>> Hi, could you share the logs that you got at step 3?
Now I'm able to write out prediction code independently.
Code Snippet :
```python
# For validation, select only one example from the dataset.
val_ds1 = val_ds.select(list(range(0, 1)))

# Trainer prediction
preds = trainer.predict(val_ds1)
# Output: PredictionOutput(predictions=(array([[ 3.615392, -2.313876]], dtype=float32),

# Independent prediction code
inputs = dict()
dataset_processed['train'][0] = val_ds1[0]

import torch
inputs['input_nodes'] = torch.tensor([dataset_processed['train'][0]['input_nodes']])
inputs['input_edges'] = torch.tensor([dataset_processed['train'][0]['input_edges']])
inputs['attn_bias'] = torch.tensor([dataset_processed['train'][0]['attn_bias']])
inputs['in_degree'] = torch.tensor([dataset_processed['train'][0]['in_degree']])
inputs['out_degree'] = torch.tensor([dataset_processed['train'][0]['out_degree']])
inputs['spatial_pos'] = torch.tensor([dataset_processed['train'][0]['spatial_pos']])
inputs['attn_edge_type'] = torch.tensor([dataset_processed['train'][0]['attn_edge_type']])

from transformers import GraphormerForGraphClassification

model_checkpoint = "/content/graph-classification/checkpoint-1029"  # pre-trained model from which to fine-tune
model = GraphormerForGraphClassification.from_pretrained(
    model_checkpoint,
    num_classes=2,
    ignore_mismatched_sizes=True,  # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
predicted_class_id, logits
# Output: (0, tensor([[ 3.6154, -2.3139]]))
```
Works as expected. Hence closing this issue. Thanks a lot. |
transformers | 23,641 | closed | fix: TextIteratorStreamer cannot work with pipeline | Deepcopying the TextIteratorStreamer object causes the exception.
# What does this PR do?
Fixes #23552
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-22-2023 01:20:23 | 05-22-2023 01:20:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger , pls help review, thx.<|||||>@sywangyi <|||||>@gante Please help to review.<|||||>I ran into the same issue and also had another issue about a week ago (which I don't remember now) that was also related to the deepcopy operation. If the sole purpose of the deep copy is to allow calls to `pop` without modifying the dictionary, then shouldn't a shallow copy do?<|||||>@gante We used the transformers pipeline with TextIteratorStreamer for chatbot, but it raised the deepcopy exception. Can you help to check the issue and review the PR?
Thanks a lot.<|||||>@gante is on vacation, please be patient until he comes back :-)
Also cc @Narsil <|||||>> @gante is on vacation, please be patient until he comes back :-) Also cc @Narsil
Got it. Thanks.<|||||>Do I need to fix these test failures which are not related with this patch? @gante <|||||>@yuanwu2017 to get rid of the CI errors, please rebase the PR. It is a hard requirement -- LMK if you need instructions to do so :) <|||||>@amyeroberts see the [issue here](https://github.com/huggingface/transformers/issues/23785), which explains why this copy is not needed :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23641). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,640 | open | Use python generator instead of streamer for generation | ### Feature request
Add an option for receiving tokens (or similar) as they are generated via a [python generator](https://wiki.python.org/moin/Generators) as an alternative to needing a streamer object.
### Motivation
There is a new feature [streamers](https://huggingface.co/docs/transformers/generation_strategies#streaming) for accessing the tokens being generated during generation. Usage of this object requires you to run some code in parallel while the model.generate function blocks its current thread. You instead need to have your processing code defined as a callback within the streamer object you are using.
A much simpler interface that solves this same problem is to yield the token sequences as they are generated with a [python generator](https://wiki.python.org/moin/Generators). Below is example usage for either case...
## Proposed Generator Implementation
```
for token in model.generate(**inputs, max_new_tokens=20, yield_tokens=True):
print(f"The next token is {token}")
```
## Current Streamer Implementation
```
from transformers import AutoModelForCausalLM, TextStreamer

class MyStreamer:
    def __init__(self):
        pass

    def put(self, token):
        print(f"The next token is {token}")

    def end(self):
        pass

streamer = MyStreamer()
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```
Not only does the generator implementation save on lines of code and simplify syntax, but python generators return iterables, which makes it easy to use all sorts of existing python tools without modification. For example, you can:
### Enumerate
```
for idx, token in enumerate(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
print(f"The {idx}'th token is {token}")
```
### Progress bar with TQDM
Progress bar appears in CLI or jupyter notebook, updating in real time
```
for token in tqdm(model.generate(**inputs, max_new_tokens=20, yield_tokens=True)):
my_endpoint.post(token)
```
And there are many, many more tools that would easily integrate!
In this case I proposed tokens because it's easier to think about that way, and it matches the current streamer implementation, but it may be easier to implement yielding a list of lists of tokens, since for beam search and similar strategies multiple beams (multiple sequences) are being considered at any given time. This would enable more features on the developer side, especially in the case where you may want to generate multiple sequences in one call. But this is more of a sidenote, and either this or the base implementation would be really awesome.
### Your contribution
I'm not planning to put in a PR anytime soon, but I did have a look through the code before finding the new streamer WIP feature. It seems like it would be fairly easy to implement a version of what I am describing. You just need to add a flag to optionally
```
yield new_token
```
inside each of beam_search, beam_sample, greedy_search, etc., and then update the model.generate wrapper to also optionally yield the results from each of these.
In this case I proposed tokens because it's easier to think about that way, and it matches the current streamer implementation, but it may be easier to implement yielding a list of lists of tokens, since for beam search and such multiple beams are being considered at any given time. | 05-22-2023 00:20:42 | 05-22-2023 00:20:42 | cc @gante <|||||>This has been mentioned in PR: [2249](https://github.com/huggingface/transformers/pull/22449)
We need this feature. @gante @oobabooga, can you provide a short script showing how to try this out when calling `model.generate`, i.e. making the call behave as a python generator object?<|||||>@JamesDConley I found this https://huggingface.co/spaces/joaogante/transformers_streaming. I think this could be a great start for your problem.<|||||>@ambiSk This is on @gante's roadmap, but note he is on vacation for two weeks, so you will have to be a bit patient :-)<|||||>Hey @JamesDConley @ambiSk -- I agree the generator structure is superior, and that is why you see a warning in the docs saying the existing API is temporary (e.g. [here](https://huggingface.co/docs/transformers/v4.30.0/en/internal/generation_utils#transformers.TextIteratorStreamer)).
Back when I was exploring the MVP of the feature, I managed to get an iterator going. However, it required significant changes in `.generate`, adding `yield from` statements in a few places and restructuring a few bits so that the tokens could be piped out correctly. The branch is in a very incomplete state (see [here](https://github.com/huggingface/transformers/compare/main...gante:transformers:streamer_yield)), and I don't expect to be able to pick it up in the next ~2 months -- if anyone would like to get their hands dirty, feel free to pick this feature up 🙌
(just let me know if you decide to work on it :) ) |
transformers | 23,639 | closed | How to generate one token after the other when no past_key_values is returned? | ### System Info
Python: 3.11
transformers==4.29.2
torch==2.0.1
### Who can help?
@ArthurZucker @younesbelkada @gante @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')
model = AutoModelForCausalLM.from_pretrained('allenai/scibert_scivocab_uncased').to(device)
input_sequence = "Hello, I'm a language model,"
inputs = torch.as_tensor(tokenizer.encode(input_sequence)).unsqueeze(0).to(device)
attention_mask = torch.as_tensor(tokenizer(input_sequence).attention_mask).unsqueeze(0).to(device)
past_key_values = None
count = 0
complete_token = []
with torch.no_grad():
    while count < 10:
        count += 1
        print("Iteration no.: " + str(count))
        if count > 1:
            inputs = input_token
        print(inputs.to(device))
        print(attention_mask)
        print(past_key_values[0][0].shape if past_key_values else None)
        model_out = model(input_ids=inputs.to(device), attention_mask=attention_mask, past_key_values=past_key_values)
        logits = model_out.logits[:, -1, :]
        past_key_values = model_out.past_key_values
        topk_values, topk_indices = torch.topk(logits, 5)
        log_probs = F.softmax(topk_values, dim=-1)
        inputs_in_topk = torch.multinomial(log_probs, num_samples=1, replacement=True)
        input_token = torch.gather(topk_indices, 1, inputs_in_topk)
        attention_mask = torch.concat((attention_mask, torch.tensor([[1]]).to(attention_mask.device)), dim=1)
        complete_token.append(input_token)
```
### Expected behavior
I would like to use Scibert for iterated token generation. Above is my code.
However, we have past_key_values = None all the time. I tried this approach with other models and past_key_values is not None there. How can I make the iteration work here, such that we keep the knowledge of the previous iteration? | 05-21-2023 18:36:31 | 05-21-2023 18:36:31 | This is an encoder model, so you can't use it as a decoder model for generation like this.<|||||>Hi @sgugger, many thanks for getting back to me.
So is there no possibility to use that model for generative purposes?
Also, could we have some slight influence on the selection of each individual token?<|||||>Hi @junoriosity -- only model architectures with decoder blocks can be used for generative purposes, see [this page from our NLP course](https://huggingface.co/learn/nlp-course/chapter1/4?fw=pt#general-architecture). If these concepts are new to you, I'd recommend going through the course, which you can complete in a single day!
`allenai/scibert_scivocab_uncased` is a BERT model, so it doesn't have a decoder block :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,638 | closed | Image column name missing for CLIP | # What does this PR do?
The patch makes sure that the image features do not get removed. Otherwise **transform_images(examples)** will have a **KeyError** for the image_column when trying to transform the image.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-21-2023 16:46:28 | 05-21-2023 16:46:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23638). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @TJKlein, thanks for opening this PR.
Could you provide a bit more information about the issue this is resolving? There isn't a `transform_images` method in the `run_clip.py` script or in the [dataset preparation script](https://huggingface.co/datasets/ydshieh/coco_dataset_script/blob/main/coco_dataset_script.py)<|||||>Well, there is:
https://github.com/huggingface/transformers/blob/28f589f0a46cced297fba46014ee73b862fa247b/examples/pytorch/contrastive-image-text/run_clip.py#L396
https://github.com/huggingface/transformers/blob/28f589f0a46cced297fba46014ee73b862fa247b/examples/pytorch/contrastive-image-text/run_clip.py#L410
Since the image_column is not a standard feature, it gets pruned by the Trainer. Therefore it has to be specified to the Trainer explicitly. That's what I did.
```
trainer._signature_columns=["input_ids", "attention_mask", data_args.image_column]
```<|||||>@TJKlein Apologies, I completely missed the function, sorry!
Could you expand on the issue this tackles - perhaps with a code snippet to reproduce? I'm able to run the example script with the following command:
```python
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ../clip-roberta \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_train --do_eval \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir
```
`_signature_columns` is a private attribute, and not something we want to modify directly like this. Understanding a bit more about how the issue arises, hopefully we'll be able to find another approach. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,637 | closed | Fix typo in a parameter name for open llama model | # What does this PR do?
Renames a parameter `use_memorry_efficient_attention` to `use_memory_efficient_attention`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 05-21-2023 14:43:10 | 05-21-2023 14:43:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @aaalexlit, thanks for opening this PR!
As the model was part of the most recent release, we'll need to ensure that these changes are backwards compatible with the previous configurations. What I would suggest is popping the previous argument `use_memorry_efficient_attention` from the config kwargs during initialization, and using that value if passed in, otherwise defaulting to `use_memory_efficient_attention`. <|||||>Thanks for pointing that out, @amyeroberts! Makes all the senses in the world.
I hope I interpreted your recommendation correctly |
transformers | 23,630 | closed | causalLM Text generation with OPT models give weird results | ### System Info
* Platform: NVIDIA Jetson Orin 32GB development kit
* CPU `ARMv8 Processor rev 1 (v8l)`
* Linux `5.10.104-tegra #1 SMP PREEMPT Sun Mar 19 07:55:28 PDT 2023 aarch64 aarch64 aarch64 GNU/Linux`
* Python 3.8.10
* transformers 4.29.2
* torch `2.0.0a0+fe05266f.nv23.4` (compiled package for Jetson platform provided by Nvidia)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In my first attempts to run causalLM text generation with huggingface transformer models, I get weird and most surely wrong results with the OPT model family.
I run the official example from the huggingface OPT documentation:
https://huggingface.co/docs/transformers/model_doc/opt#transformers.OPTForCausalLM.forward.example
```
~$ python3
Python 3.8.10 (default, Mar 13 2023, 10:26:41)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer, OPTForCausalLM
>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
Downloading (…)lve/main/config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 644/644 [00:00<00:00, 239kB/s]
Downloading pytorch_model.bin: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 663M/663M [00:51<00:00, 12.8MB/s]
Downloading (…)neration_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 137/137 [00:00<00:00, 58.7kB/s]
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
Downloading (…)okenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 685/685 [00:00<00:00, 600kB/s]
Downloading (…)olve/main/vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 899k/899k [00:00<00:00, 2.33MB/s]
Downloading (…)olve/main/merges.txt: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 1.61MB/s]
Downloading (…)cial_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 441/441 [00:00<00:00, 324kB/s]
>>> prompt = "Hey, are you consciours? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'Hey, are you consciours? Can you talk to me?<s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>'
```
For OPT-1.3b I get the following, repeatedly:
```
>>> model = OPTForCausalLM.from_pretrained("facebook/opt-1.3b")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'Hey, are you consciours? Can you talk to me? it it it it it it it it it it it it it it it'
```
Any other prompts give the same result always, just `<s>` tokens or repeated `it`.
### Expected behavior
OPT model causalLM text generation should return text that does not consist of only `<s>` tokens or repeated `it`, like in the given documentation example:
```
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
``` | 05-21-2023 13:10:17 | 05-21-2023 13:10:17 | cc @younesbelkada @gante <|||||>hi @SteffenBauer
thanks for the issue!
I just checked our daily CI tests, which also check the generations on OPT, and there seems to be no issue on our side. What I can see is that you are using a version of torch that sounds a bit exotic: `2.0.0a0+fe05266f.nv23.4`
Does this behavior happen with OPT only or with other models as well?<|||||>Hi @younesbelkada
thanks for checking this!
The torch version is a special package provided by Nvidia, compiled with platform-dependent patches for the Jetson platform. (https://developer.nvidia.com/embedded/downloads)
Nvidia released yesterday an update to torch-2.0.0 for Jetson, but the problem still persists unchanged.
As it works on your environment, it strongly hints that the problem is Jetson-platform specific. I will investigate this further.
Could also be connected with issue #23413, where an issue in torch might be the problem.
I will close this now, as it doesn't seem to be an issue with the transformers library.
<|||||>Thank you very much @SteffenBauer , please let us know how it goes<|||||>Hey @SteffenBauer 👋
I can confirm that things seem to be fine on our end, using the latest release (v4.30). Have a look at [this notebook](https://colab.research.google.com/drive/11iSgaa6j0S9jbIMg2C9iqUTfhHEtFebq?usp=sharing) :)<|||||>Hi @gante ,
thanks for confirming, your colab notebook also works here fine with expected result.
As I copied the example code verbatim to a python shell on the Jetson Orin, the problem now is confirmed to be on the side of the platform.
I haven't had time to investigate this further; no clue so far whether it is caused by the ARM64 architecture or the Nvidia-provided PyTorch library. As the Nvidia Jetson platform is an important tool for DL practitioners, this should indeed be looked into more deeply.
Will update here once I have results.<|||||>@SteffenBauer can you set the device with PyTorch's `.to()`, using Jetson? If so, you could try running the model on CPU and GPU to determine whether the problem is exclusive to a particular device or at a software level.
Opening an issue with Nvidia may help too :)<|||||>Hi @gante ,
you might have found something! There is indeed a difference between CPU and GPU.
(by the way, I just upgraded the transformer library to 4.30.2 before doing this test)
I use this code, directly copied from the notebook you supplied:
```python
from transformers import OPTForCausalLM, AutoTokenizer
import torch
device = 'cpu' # 'cuda'
model = OPTForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
prompt = "Hey, are you consciours? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids.to(device), max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
`device = 'cpu'`:
"Hey, are you consciours? Can you talk to me? it it it it it it it it it it it it it it it"
`device = 'cuda'`:
"Hey, are you consciours? Can you talk to me?\nI'm not a conscioure, but I can talk to"
<|||||>@SteffenBauer On my end, using an x86 CPU, I get the same outputs as your CUDA output above. This seems to point to a PyTorch+ARM64 issue 🤔 |
transformers | 23,625 | closed | [wip: testing doc-builder] | testing https://github.com/huggingface/doc-builder/pull/373 | 05-21-2023 09:15:15 | 05-21-2023 09:15:15 | @mishig25 Should be fixed by [this](https://github.com/huggingface/doc-builder/pull/373/commits/4895cfbf332994c5c5d73e83f990fe017184ecc5)<|||||>Rerunning the doc-build https://github.com/huggingface/transformers/actions/runs/5036878423 <|||||>Re running the doc-build: https://github.com/huggingface/transformers/actions/runs/5036878423<|||||>Latest error message is:
```
<img> is a void element and cannot have children, or a closing tag
```
which appears to be an error by the person who made the docs? In the original version, did you attempt to fix those for the user?<|||||>Confirmed (https://huggingface.co/docs/transformers/v4.29.1/pt/index) - see trailing "</img>" after image:

<|||||>I made a PR to fix the closing tag https://github.com/huggingface/transformers/pull/23646<|||||>Thanks a lot for [this PR](https://github.com/huggingface/transformers/pull/23625#issuecomment-1556623634).
Even though the img tag was wrong before, previous doc-builds were not failing. Does it mean that a change in https://github.com/huggingface/doc-builder/pull/373 is making the closing img tag fail?<|||||>Well yes, since `<img></img>` is invalid, svelte complains.
Previously, the error was not being detected because `</img>` was being HTML-escaped (encoded as `&lt;/img&gt;`)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23625). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,621 | closed | 🌐 [i18n-KO] Translated `tasks/monocular_depth_estimation.mdx` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/monocular_depth_estimation.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This gets recorded in the main issue! If you are practicing with the PseudoLab repo, please remove this line. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only reveal the review request below to the PseudoLab team members once all of the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the review request below to the Hugging Face staff once the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 05-21-2023 05:31:02 | 05-21-2023 05:31:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,593 | closed | `run_mlm.py` doesn't log perplexity to `wandb` | ### System Info
When I run `run_mlm.py`, all of the metrics that are created with names such as `train_loss` or `eval_loss` are logged to `wandb` reformatted as `train/loss` or `eval/loss`. However, the eval perplexity is simply logged as `perplexity` and is not included in the wandb metrics.
https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/examples/pytorch/language-modeling/run_mlm.py#LL620C1-L635C46
The following is the output from `run_mlm.py`; you can observe that only metrics with an `eval_` prefix are included. I couldn't find where the wandb logging integration lives in the code, or I'd dig further.
``` console
***** eval metrics *****
epoch = 10.0
eval_accuracy = 0.9946
eval_loss = 0.0248
eval_runtime = 0:10:43.84
eval_samples = 62679
eval_samples_per_second = 97.351
eval_steps_per_second = 3.043
perplexity = 1.0251
wandb: Waiting for W&B process to finish... (success).
wandb: / 0.130 MB of 0.130 MB uploaded (0.000 MB deduped)
wandb: Run history:
wandb: eval/accuracy ▁
wandb: eval/loss ▁
wandb: eval/runtime ▁
wandb: eval/samples_per_second ▁
wandb: eval/steps_per_second ▁
wandb: train/epoch ▁▂▂▃▃▄▄▅▅▆▆▇▇███
wandb: train/global_step ▁▂▂▃▃▄▄▅▅▆▆▇▇███
wandb: train/learning_rate █▇▇▆▆▅▅▄▄▃▃▂▂▁
wandb: train/loss █▇▆▅▅▄▃▃▂▂▂▂▁▁
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/accuracy 0.99464
wandb: eval/loss 0.02479
wandb: eval/runtime 643.8475
wandb: eval/samples_per_second 97.351
wandb: eval/steps_per_second 3.043
wandb: train/epoch 10.0
wandb: train/global_step 16540
wandb: train/learning_rate 0.0
wandb: train/loss 0.0568
wandb: train/total_flos 3.677755155323775e+18
wandb: train/train_loss 0.02581
wandb: train/train_runtime 87644.5711
wandb: train/train_samples_per_second 144.98
wandb: train/train_steps_per_second 0.189
```
I'm also not getting per-epoch eval metrics logged to wandb but I may have omitted a cmd line parameter.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run train_mlm with wandb enabled
### Expected behavior
Log `eval/perpleixity` to `wandb` | 05-21-2023 01:39:07 | 05-21-2023 01:39:07 | Hi @david-waterworth, thanks for raising this issue.
There isn't an `eval_` prefix in front of the perplexity metric because it's not part of the key [when added to the metrics dictionary](https://github.com/huggingface/transformers/blob/2faa09530bc5d29756bddfec12037c066cc85a02/examples/pytorch/language-modeling/run_mlm.py#LL632C19-L632C19). Would you like to open a PR to update this?
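Concretely, something along these lines in `run_mlm.py` should do it (untested sketch; the only real change is the key name):
```python
import math

metrics = {"eval_loss": 0.0248}  # stand-in for what trainer.evaluate() returns
try:
    perplexity = math.exp(metrics["eval_loss"])
except OverflowError:
    perplexity = float("inf")
metrics["eval_perplexity"] = perplexity  # the "eval_" prefix is what the logger integrations key on
```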
Regarding wandb logging, the integration logic can be [found here](https://github.com/huggingface/transformers/blob/867316670a909dd1a60ad69cdb0c962bdc6f0cd4/src/transformers/integrations.py#L663). These are community contributed and not actively maintained by hugging face. Feel free to open a PR if there's an issue you've spotted in the logging or ping the original contributors of the callback. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,552 | closed | TextIteratorStreamer cannot be used with TextGenerationPipeline | ### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.8.13
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue occurs because the `TextIteratorStreamer` class contains a `Queue` field which cannot be pickled, and the text generation pipeline runs a deepcopy.
https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/src/transformers/pipelines/text_generation.py#L245
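The underlying limitation can be seen without transformers at all -- a bare `queue.Queue` already cannot be deep-copied:
```python
import copy
import queue

# Queue wraps threading locks internally, and locks cannot be pickled/deep-copied
copy.deepcopy(queue.Queue())  # raises TypeError: cannot pickle '_thread.lock' object
```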
Code to reproduce issue:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, pipeline
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
streamer = TextIteratorStreamer(tokenizer)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, streamer=streamer
)
pipe("test")
```
Trace:
```python
Traceback (most recent call last):
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 201, in __call__
return super().__call__(text_inputs, **kwargs)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1119, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1126, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1025, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/raf/.pyenv/versions/develop/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 245, in _forward
generate_kwargs = copy.deepcopy(generate_kwargs)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/raf/.pyenv/versions/3.8.13/lib/python3.8/copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle '_thread.lock' object
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
```
### Expected behavior
Pipeline should run normally | 05-20-2023 22:31:26 | 05-20-2023 22:31:26 | I encountered the same problem, I submitted a PR for it.
<|||||>I am having the same issue and hope that it will be resolved soon. Thanks!<|||||>@gante comes back wednesday.
I kind of agree a shallow copy should be enough and not a deepcopy, but since the code was deliberate there might be reasons for it. If it's ok let's wait for joao to come back.<|||||>Hey everyone! Apologies for the long delay 🤗
I agree with @Narsil, we can make it a shallow copy (I err toward deep copies, as I've been bitten by unexpected side-effects in the past). I believe the only modifications the object sees are the ones [here](https://github.com/huggingface/transformers/blob/535542d38d7f19c6347ad684347737a38107f148/src/transformers/pipelines/text_generation.py#L246), for which a shallow copy is enough. |
transformers | 23,541 | open | Add type hints for PyTorch BERT. | # What does this PR do?
Add type hints for PyTorch BERT.
Fixes #16059 for PyTorch BERT
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada
- type hints: @Rocketknight1
| 05-20-2023 18:31:58 | 05-20-2023 18:31:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23541). All of your documentation changes will be reflected on that endpoint.<|||||>Looks good! The last thing you'll need to do is `pip install transformers[quality]` followed by `make style` in the `transformers` directory. This runs our code formatting tools to make sure everything follows our style guidelines. Once you do that and commit any changes, the tests should pass!
If you run `make style` and you get an error, it may indicate some part of your code that has issues that our code style tools can't correct - if that happens, take a look and try to see what's wrong, and reply here if you can't figure it out!<|||||>It looks like I've got one last error that I cannot figure out, the relevant line of code appears several times in many files without issue otherwise. Could you help with this? <|||||>@Rocketknight1 <|||||>Ah, thanks for the ping! Investigating now<|||||>I've investigated and there's an issue in our copy checking code, specifically the `is_copy_consistent` function. This isn't your fault, and I'll need to file a PR to fix it!
(For internal `transformers` reference): The issue is triggered when a function is copied from with a single-line header, and there is a change that causes its header to now be multi-line (e.g. adding type hints and causing `black` to wrap the line). The `is_copy_consistent` function [builds a replacement function](https://github.com/huggingface/transformers/blob/main/utils/check_copies.py#L238) from the first line of the target function followed by the subsequent lines of the original function, which creates a mangled header if the original header has changed to multi-line but the target has not:
```python
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
```
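(For contrast, the intended header after the type-hint change is simply the black-wrapped signature -- illustrative fragment only, with a placeholder class name:)
```python
from typing import List, Optional

class SomeTokenizer:
    def build_inputs_with_special_tokens(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        ...
```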
@sgugger do you want me to make a PR to fix it?<|||||>Lol no, fixing this is really not a priority. The copies can be manually updated.
The only situation this can appear is this one, and it's rare enough that we can deal with it I think.<|||||>Understood! @coledie you might have to do some manual copying to make this work, in that case. Search the repository for the string `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast.build_inputs_with_special_tokens`. This will locate functions that are copied from the BERT function and that our repository tools keep in sync with it. If you manually copy your new `build_inputs_with_special_tokens` header over the existing headers in those functions and then rerun `make fixup` or `make fix-copies`, everything should work and the CI should pass.
If you have any issues, let me know and I can make the changes for you!<|||||>@Rocketknight1 Looks like it is working since test failures are unrelated?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,538 | closed | Fix tensor device while attention_mask is not None | # What does this PR do?
This PR fixes the device of tensors created while `attention_mask` is not None, such as `torch.tensor(torch.finfo(attn_weights.dtype).min)`.
1. I don't use `os.environ["CUDA_VISIBLE_DEVICES"]` because I want to load other models in the same script.
2. I use `torch.cuda.set_device(6)` because the other GPU devices are already occupied.
3. The model is loaded on device `cuda:6`, but the new tensor is created on `cuda:0` when `attention_mask` is not None.
```
{
'input_ids': tensor([[ 1, 29871, 30919]], device='cuda:6'),
'attention_mask': tensor([[1, 1, 1]], device='cuda:6')
}
input_ids device: cuda:6
model device: cuda:6
File "/data1/env/miniconda3/envs/llmdev/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 231, inforward
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0!
```
```
print('attn_weights device', torch.tensor(torch.finfo(attn_weights.dtype).min).device)
attn_weights device cuda:0
```
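The change itself is essentially a one-liner; a minimal sketch of the idea (the tensor below is just a stand-in for the real attention scores):
```python
import torch

attn_weights = torch.randn(1, 8, 3, 3)  # pretend this lives on the model's device, e.g. cuda:6
# create the clamp value on the same device as attn_weights instead of the default device
min_value = torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
attn_weights = torch.max(attn_weights, min_value)  # both operands now share a device
```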
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada and @sgugger | 05-20-2023 16:56:46 | 05-20-2023 16:56:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,535 | closed | Bugfix: LLaMA layer norm incorrectly changes input type and consumes lots of memory | # What does this PR do?
It is a common setup to run LLaMA in `bfloat16` or `float16` while the `RMSNorm` layers are in `float32`. The current implementation converts the input to `float32` but never converts it back to the original input type. This means that in the first layer the input type is `bfloat16`, but for all remaining layers it is `float32`, which increases the overall memory footprint of finetuning considerably.
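Roughly, the forward pass should cast back to the incoming dtype; a minimal sketch of the intended behaviour (not the exact diff in this PR):
```python
import torch
from torch import nn

class RMSNormSketch(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype            # e.g. bfloat16
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return (self.weight * hidden_states).to(input_dtype)  # cast back so later layers stay in bf16/fp16
```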
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada @sgugger
Please also see: https://github.com/huggingface/transformers/pull/23479
| 05-20-2023 16:41:32 | 05-20-2023 16:41:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> but for all remaining its `float32` this increases the overall memory-footprint of finetuning considerably.
...how bad was this? like 30% worse?<|||||>Please merge ^_^<|||||>I am assuming this does not impact 8bit training as I noticed no change in memory.<|||||>@official-elinas did an experiment with transformers==4.28.1 vs the code added here, and was unable to reproduce any speed/memory gains
https://wandb.ai/officialelinas/tests?workspace=user-officialelinas


@TimDettmers what do you think we did wrong? are there any experiments you can share regarding this?
<|||||>I think unless you've manually set RMSNorm layer's parameters to fp32 while the rest of your model is in fp16/bf16 you won't see any change.
I'm curious if as a design we should instead allow to specify `dtype` in the layer arguments? `F.softmax` has a dtype argument that allows one to control such a thing more finely? https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html |
transformers | 23,534 | closed | Fix tensor device while attention_mask is not None | # What does this PR do?
This PR fixes the device of tensors created while `attention_mask` is not None, such as `torch.tensor(torch.finfo(attn_weights.dtype).min)`.
1. I don't use `os.environ["CUDA_VISIBLE_DEVICES"]` because I want to load other models in the same script.
2. I use `torch.cuda.set_device(6)` because the other GPU devices are already occupied.
3. The model is loaded on device `cuda:6`, but the new tensor is created on `cuda:0` when `attention_mask` is not None.
```
{
'input_ids': tensor([[...]], device='cuda:6'),
'attention_mask': tensor([[...]], device='cuda:6')
}
input_ids device: cuda:6
model device: cuda:6
print('attn_weights device', torch.tensor(torch.finfo(attn_weights.dtype).min).device)
attn_weights device cuda:0
```
```
File "/data1/env/miniconda3/envs/llmdev/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 231, inforward
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0!
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada | 05-20-2023 16:40:25 | 05-20-2023 16:40:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23534). All of your documentation changes will be reflected on that endpoint. |