repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 24,603 | closed | Make warning disappear for remote code in pipelines | # What does this PR do?
Currently, loading a pipeline with a model that has its code on the Hub will result in a warning that the model is not in the right auto class. This PR adds the custom class in the auto mapping so that this warning is not triggered.
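For reference, the general mechanism for exposing a custom class through the auto mapping looks roughly like this (the `MyCustom*` names are hypothetical, and this is not necessarily the exact change made in this PR):
```python
from transformers import AutoConfig, AutoModelForCausalLM

# Hypothetical custom classes living in a user package / Hub repo with remote code.
from my_custom_package import MyCustomConfig, MyCustomForCausalLM

# Register the config under a model type, then map the config to the model class
# so the auto classes (and the pipeline check) recognise it.
AutoConfig.register("my-custom-model", MyCustomConfig)
AutoModelForCausalLM.register(MyCustomConfig, MyCustomForCausalLM)
```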
Related to #24598 | 06-30-2023 19:45:53 | 06-30-2023 19:45:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,602 | open | Support gradient checkpointing for ESM models | Would you please add `gradient_checkpointing_enable()` feature for ESM models?
These models currently are the best available pre-trained protein language models for researchers.
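For reference, a minimal sketch of the requested usage (the checkpoint name is just an example ESM model; this call is currently not supported for ESM, which is the point of this request):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("facebook/esm2_t12_35M_UR50D")

# Trades extra compute for lower activation memory during training.
# Today this raises for ESM because the model does not declare checkpointing support.
model.gradient_checkpointing_enable()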
Many thanks. | 06-30-2023 18:48:09 | 06-30-2023 18:48:09 | cc @Rocketknight1 |
transformers | 24,601 | closed | add link to accelerate doc | # What does this PR do?
This PR modifies the quantization doc to include a link to accelerate documentation if the user wants to quantize their own pytorch model. | 06-30-2023 16:00:09 | 06-30-2023 16:00:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,600 | closed | IndexError: index -1 is out of bounds for dimension 1 with size 0 | ### System Info
PC: M2
transformers== 4.31.0.dev0
refer: https://github.com/openai/whisper/discussions/1478
I get the following error:
```
in <module>:9

     6 prompt_ids = processor.get_prompt_ids(prompt)
     7
     8 forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
 ❱   9 predicted_ids = model.generate(input_features, prompt_ids=prompt_ids, forced_decoder_ids
    10                                max_new_tokens=3000)
    11 transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
    12 print("耗时:", time.time() - start_time, transcription)

/Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py:1664 in generate

  1661         if generation_config.return_timestamps:
  1662             logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)]
  1663
❱ 1664         return super().generate(
  1665             inputs,
  1666             generation_config,
  1667             logits_processor,

/Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py:115 in decorate_context

   112     @functools.wraps(func)
   113     def decorate_context(*args, **kwargs):
   114         with ctx_factory():
❱  115             return func(*args, **kwargs)
   116
   117     return decorate_context
   118

/Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/generation/utils.py:1522 in generate

  1519                 )
  1520
  1521             # 11. run greedy search
❱ 1522             return self.greedy_search(
  1523                 input_ids,
  1524                 logits_processor=logits_processor,
  1525                 stopping_criteria=stopping_criteria,

/Users/diaojunxian/anaconda3/envs/3.9/lib/python3.9/site-packages/transformers/generation/utils.py:2349 in greedy_search

  2346             if synced_gpus and this_peer_finished:
  2347                 continue  # don't waste resources running the code we don't need
  2348
❱ 2349             next_token_logits = outputs.logits[:, -1, :]
  2350
  2351             # pre-process distribution
  2352             next_tokens_scores = logits_processor(input_ids, next_token_logits)
```
Both of the following code snippets produce the error.
```
from transformers import WhisperForConditionalGeneration, WhisperProcessor
import librosa
import soundfile
import torchaudio
base_model = "/Users/ddd/Documents/github/whisper-large-v2"
processor = WhisperProcessor.from_pretrained(base_model,
language="zh",
task="transcribe",
local_files_only="True")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
# Load the model
model = WhisperForConditionalGeneration.from_pretrained(base_model,
device_map="auto",
local_files_only=True).half()
model.eval()
audio_file = "/Users/ddd/Documents/gitlab/llm-train/yuyin/simple.m4a"
src_signal, sample_rate = librosa.load(audio_file, sr=16000)
start = 23196064
end = 23364576
src_signal_demo = src_signal[start:end]
input_features = processor(src_signal_demo, sampling_rate=sample_rate, return_tensors="pt").input_features.half().to("mps")
prompt = '以下是普通话的句子'  # "The following is a sentence in Mandarin"
prompt_ids = processor.get_prompt_ids(prompt)
forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
predicted_ids = model.generate(input_features, prompt_ids=prompt_ids, forced_decoder_ids=forced_decoder_ids,
max_new_tokens=3000)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```
```
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model="openai/whisper-large-v2",
device="mps",
chunk_length_s=30,  # if not specified, only generates up to `max_new_tokens`
generate_kwargs = {"num_beams": 5} # same as setting as "openai whisper" default
)
audio_file = "/Users/ddd/Documents/gitlab/llm-train/yuyin/simple.m4a"
src_signal, sample_rate = librosa.load(audio_file, sr=16000)
start = 23196064
end = 23364576
src_signal_demo = src_signal[start:end]
prompt = '以下是普通话的句子'  # "The following is a sentence in Mandarin"
prompt_ids = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt")
result = pipe(src_signal_demo, generate_kwargs={"language": "zh", "task": "transcribe", "prompt_ids": prompt_ids})
print(result["text"])
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. load the audio
2. slice the audio
3. add the prompt
4. transcribe the sliced audio; the error then occurs.
### Expected behavior
The audio should be transcribed to text. | 06-30-2023 15:24:04 | 06-30-2023 15:24:04 | cc @gante @sanchit-gandhi <|||||>Hey @diaojunxian
Your reproducer contains private data, which means we can't easily reproduce on our end -- would you be able to share the audio file with us OR rewrite the reproducer from public data?
At a first glance, because of the thrown exception (`IndexError: index -1 is out of bounds for dimension 1 with size 0` in `next_token_logits = outputs.logits[:, -1, :]`), I'd bet something went wrong at preprocessing time :D bad model input shapes -> bad model output shapes<|||||>> Hey @diaojunxian ๐
>
> Your reproducer contains private data, which means we can't easily reproduce on our end -- would you be able to share the audio file with us OR rewrite the reproducer from public data?
>
> At a first glance, because of the thrown exception (`IndexError: index -1 is out of bounds for dimension 1 with size 0` in `next_token_logits = outputs.logits[:, -1, :]`), I'd bet something went wrong at preprocessing time :D bad model input shapes -> bad model output shapes
I can send it to you privately, but it cannot be published on the Internet. Only you can personally verify this bug. Can you see it?
<|||||>@diaojunxian yeah, that would be helpful. You can send it to the email attached to my GH account ([[email protected]](mailto:[email protected]))
You are using an unmodified `openai/whisper-large-v2`, correct?<|||||>> start = 23196064
> end = 23364576
yes, unmodified whisper-large-v2, and I have sent the audio to the Gmail address.<|||||>Hey @diaojunxian
In both snippets, the problem is the same: as soon as the model tries to generate beyond its [maximum length](https://huggingface.co/openai/whisper-large-v2/blob/1f66457e6e36eeb6d89078882a39003e55c330b8/config.json#L42), the output sequence dimension becomes 0, causing the exception.
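For anyone curious, a tiny illustration of why that slice fails once the sequence dimension collapses (the vocab size here is just an assumed value):
```python
import torch

# Once generation has hit the model's maximum length, the new logits have an empty sequence axis:
logits = torch.empty(1, 0, 51865)  # (batch, sequence, vocab) with sequence == 0
logits[:, -1, :]                   # IndexError: index -1 is out of bounds for dimension 1 with size 0
```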
I've found the issue and will open a PR to fix it. The second example you provided works perfectly after the fix. The first one probably will fail because of `max_new_tokens=3000` (Whisper's maximum length is 448 and we default generation to its maximum length, you probably shouldn't set `max_new_tokens` at all :) )<|||||>After the PR linked above gets merged, you can install from `main` and it should work :) |
transformers | 24,599 | closed | Use protobuf 4 | # What does this PR do?
I moved forward to generate the new `src/transformers/utils/sentencepiece_model_pb2.py` using the protocol buffer compiler. No test failures, but a quality check fails. I can probably remove the undefined part `_TRAINERSPEC`, which doesn't seem to be used. | 06-30-2023 15:09:30 | 06-30-2023 15:09:30 | I have checked to make sure `protobuf==4.23.3` is installed and used in the CI.<|||||>@Narsil mentioned we should keep the old/new version files, and determine which one to use (by the protobuf version numbers?).
Let me know more details about this, please.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Will merge once the CI for both protobuf 4 and protobuf 3 is green.
Don't hesitate to leave comments later if you have any, @Narsil.<|||||>Great PR. I hope this doesn't have any unforeseen consequences (I don't know what the breaking changes between protobuf 3 and 4 are)<|||||>> I don't know what are the breaking changes between protobuf 3 and 4
Yeah, me neither. I just rely on our CI.<|||||>@ydshieh I'm now getting
```
File "/home/fxmarty/hf_internship/transformers/src/transformers/utils/sentencepiece_model_pb2_new.py", line 16, in <module>
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
TypeError: Couldn't build proto file into descriptor pool: Invalid default '0.9995' for field sentencepiece.TrainerSpec.character_coverage of type 2
```
with protobuf=4.23.4 & transformers main when doing `from transformers.utils import sentencepiece_model_pb2_new`. Any idea what's wrong? protobuf 3.20.3 works well for me.<|||||>Hi @fxmarty !
Hmm, I remember I got `4.23.3` when I made this PR. Not sure if it's the reason. Let me check.<|||||>Hi again
~~@fxmarty `4.23.4` works for me.~~
```
(py39) ฮป pip show protobuf
Name: protobuf
Version: 4.23.4
Python 3.9.13 (main, Oct 13 2022, 21:23:06) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers.utils import sentencepiece_model_pb2
>>> from transformers.utils import sentencepiece_model_pb2_new
>>>
```<|||||>Hm, there is something wrong and I am looking deeper.<|||||>OK, PR #24622 makes this fail.
cc @ArthurZucker
<|||||>Comes from #24690. More details:
This does not fix the version error, but fixes the issue with 3.20.3, when we cannot use seqio or anything importing protobuf:
```python
! pip install protobuf==3.20.3
from transformers import AutoTokenizer
from seqio import SentencePieceVocabulary
TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "sentencepiece_model.proto":
sentencepiece_model.proto: A file with this name is already in the pool.
```<|||||>@fxmarty We are not able to reproduce the situation you have. Could you specify your transformers commit that is installed? Thanks.
(The above discussion is another story)<|||||>@ydshieh It appears I can't reproduce anymore the issue I had. Must have messed something up in my install that I fixed since then. Maybe the `pip uninstall protobuf && pip install --no-binary protobuf protobuf==3.20.3` that I used in the meanwhile helped to fix things.<|||||>@ydshieh Oh, actually I have this issue only running in my scripting IDE pyzo, but not in a terminal. The same python env is used so quite weird.<|||||>@ArthurZucker @ydshieh How does `sentencepiece_model_pb2_new.py` define `TrainerSpec.character_coverage`?
Specifically, how is https://github.com/huggingface/transformers/blob/33aafc26ee68df65c7d9457259fc3d59f79eef4f/src/transformers/utils/sentencepiece_model_pb2_new.py#L17 generated? If I use `decode()` on it python complains that `'utf-8' codec can't decode byte 0x80 in position 43`.<|||||>Hi @fxmarty
That file is generated by the protobuf compiler. I don't have the courage to read it ...
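For reference, regenerating such a `*_pb2.py` file typically looks roughly like this (the proto path and tool choice here are assumptions, not necessarily how our CI does it):
```python
# grpcio-tools ships the protoc compiler as a Python module: pip install grpcio-tools
from grpc_tools import protoc

# Compiles sentencepiece_model.proto into sentencepiece_model_pb2.py in the current directory.
protoc.main(["protoc", "-I.", "--python_out=.", "sentencepiece_model.proto"])
```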
When I enabled support for protobuf v4, I ran the whole CI (non-slow) and there were no failed tests.
Could you show us the usage (related to protobuf) that produces the failures - maybe I can help? |
transformers | 24,598 | open | Falcon-40b-instruct on Runpod | ### System Info
2 x A100 80GB
32 vCPU 251 GB RAM
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"What does a raindrop feel when it hits the sea?:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
### Expected behavior
Expected to run smoothly and give an output.
Error:
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Setting pad_token_id to eos_token_id:11 for open-end generation. | 06-30-2023 13:58:19 | 06-30-2023 13:58:19 | Hi @Mrin7, thanks for raising this issue.
Indeed, this is arising because of [this check](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/pipelines/text_generation.py#L65) in the pipeline code, and the falcon model isn't registered in `MODEL_FOR_CAUSAL_LM_MAPPING`.
I'm able to get things working if I explicitly add it e.g.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
# Explicitly add the mapping here
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES["RefinedWebModel"] = "RWForCausalLM"
checkpoint = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
generator = pipeline(
"text-generation",
model=checkpoint,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = generator(
"What does a raindrop feel when it hits the sea?:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
@sgugger For models on the hub, what's the standard way for enabling models to be loaded in a pipeline?
<|||||>Loading the model outside of the pipeline, or the workaround you mention. The check should be ignored when `trust_remote_code=True` but that's a bit more work on our side.<|||||>[amyeroberts](https://github.com/amyeroberts)
Hi Amy - I tried adding it as you suggested.
import torch
import transformers
from accelerate import init_empty_weights
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
# Explicitly add the mapping here
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES["RefinedWebModel"] = "RWForCausalLM"
checkpoint = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
generator = pipeline(
"text-generation",
model=checkpoint,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
#low_cpu_mem_usage=True,
#device_map="auto",
)
sequences = generator(
"Tell me everything about abortion bans in USA:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
Still getting the same error:
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MusicgenForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1264: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation.<|||||>[sgugger](https://github.com/sgugger) - can you please show me how to load the model outside the pipeline?<|||||>@Mrin7 I'm looking more into this, but this is not an error, just a warning. I'll make it disappear but you can already use the pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,597 | open | Test with Pydantic V2 | ### Feature request
Pydantic V2 is about to be released. There is a Pydantic 2.0b3 pre-release version available already: https://pypi.org/project/pydantic/2.0b3/
Please, test transformers with Pydantic V2.
There is a special tool that could help with migrating the code base to Pydantic V2 https://github.com/pydantic/bump-pydantic/
### Motivation
Pydantic V2 is known to break things and deprecates a lot of APIs; see https://errors.pydantic.dev/v2.0/migration.
Why upgrade? Pydantic V2 is reported to be 5-50x faster than Pydantic V1 according to https://docs.pydantic.dev/latest/blog/pydantic-v2-alpha/. This alone looks really beneficial for `transformers`. Apart from that, Pydantic V2 brings a lot of new features; see the link above.
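For a concrete sense of the migration, here is a small illustration of the kind of renames the V2 migration guide covers (the `Item` model is just an example):
```python
from pydantic import BaseModel

class Item(BaseModel):
    name: str

item = Item(name="example")

# Pydantic V1 style (deprecated in V2): item.dict(), Item.parse_obj({...})
# Pydantic V2 replacements:
item.model_dump()                        # serialize the model to a dict
Item.model_validate({"name": "example"}) # validate/construct from raw data
```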
### Your contribution
Please, don't hesitate asking for help in [Pydantic Discussions](https://github.com/pydantic/pydantic/discussions) section and/or [report any issues](https://github.com/pydantic/pydantic/issues) encountered in the process. | 06-30-2023 13:45:00 | 06-30-2023 13:45:00 | Hi @lig
Thanks again for the PR #24596. Really appreciated.
Regarding using Pydantic V2, I am afraid that the involved places are not directly in `transformers` codebase.
For example, in
https://github.com/huggingface/transformers/pull/24596#issuecomment-1615176591
it shows
```bash
2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c "from deepspeed.launcher.runner import main":
2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig
2023-06-30T20:07:31.9884613Z 1.621 File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 76, in <module>
2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):
2023-06-30T20:07:31.9885814Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 171, in __new__
2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)
2023-06-30T20:07:31.9886812Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 361, in set_model_fields
2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)
2023-06-30T20:07:31.9888039Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py", line 112, in collect_model_fields
2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field "{ann_name}" has conflict with protected namespace "{protected_namespace}"')
2023-06-30T20:07:31.9889546Z 1.621 NameError: Field "model_persistence_threshold" has conflict with protected namespace "
```
which indicates `/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py` using `pydantic`.
It's the 3rd-party libraries using pydantic that have to do something in order to run with pydantic V2. Right now, `transformers` can only pin v1 and wait.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,596 | closed | Limit Pydantic to V1 in dependencies | Pydantic is about to publish its V2 release, which will break a lot of things. This change prevents `transformers` from being used with Pydantic V2 to avoid breakage.
Also, see #24597
| 06-30-2023 13:25:48 | 06-30-2023 13:25:48 | Hi @lig, thanks for opening this PR!
Could you provide some more information about the kind of issues / breakages expected? I can only see `pydantic` used in one place in the library [here](https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/commands/serving.py#L26), so thankfully impact is limited.
For the quality checks, you'll need to run `make style` at the top level of the repo and push any changes made.
cc @ydshieh <|||||>This issue about `pydantic` is real. We get errors when trying to build docker image in the push CI triggered by commit [299aafe](https://github.com/huggingface/transformers/commit/299aafe55ff03c565c059682c6fd312e4b89bc2f).
@lig Thank you for this PR, it helps us a lot with the issue! I also added one more change quickly (for our CI).
@amyeroberts I am going to merge once @sgugger approves.
```bash
2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c "from deepspeed.launcher.runner import main":
2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig
2023-06-30T20:07:31.9884613Z 1.621 File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 76, in <module>
2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):
2023-06-30T20:07:31.9885814Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 171, in __new__
2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)
2023-06-30T20:07:31.9886812Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 361, in set_model_fields
2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)
2023-06-30T20:07:31.9888039Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py", line 112, in collect_model_fields
2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field "{ann_name}" has conflict with protected namespace "{protected_namespace}"')
2023-06-30T20:07:31.9889546Z 1.621 NameError: Field "model_persistence_threshold" has conflict with protected namespace "model_"
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts answering your question. I've had a quick look and I can say that this https://github.com/huggingface/transformers/blob/78a2b19fc84ed55c65f4bf20a901edb7ceb73c5f/src/transformers/commands/serving.py#L73C1-L73C36 will break.
Instead of
```py
tokens_ids: Optional[List[int]]
```
it should read
```py
tokens_ids: Optional[List[int]] = None
```
There is no implicit default None in Pydantic V2 here.
Thankfully, `bump-pydantic` helps with that https://github.com/pydantic/bump-pydantic/#bp001-add-default-none-to-optionalt-uniont-none-and-any-fields |
transformers | 24,595 | closed | Speed up TF tests by reducing hidden layer counts | A lot of our slow TF tests are caused by TF compilation. TF compilation isn't really affected by layer width at all - the main thing is just the number of operations it has to build a graph for. By reducing the number of hidden layers, compilation gets much faster, (hopefully) without interfering with test coverage at all. | 06-30-2023 13:21:51 | 06-30-2023 13:21:51 | Would be nice if you can show the timing for one model (before v.s. after) ๐ . Thanks.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh testing locally BERT went from 510 seconds -> 220 seconds<|||||>I don't know if it was @sgugger either - a lot of this code is really old! I see `tf.tuple()` in there, and even I had to look up the TF 1.x docs to remember what that was supposed to do, lol<|||||>I know he is probably not the one to decide use `5`, but he might know the history :-) |
transformers | 24,594 | closed | Fix loading dataset docs link in run_translation.py example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes broken link #24579
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-30-2023 12:43:48 | 06-30-2023 12:43:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the merge @amyeroberts ! |
transformers | 24,593 | open | Add forward methods to quantizer that also computes commitment loss | # What does this PR do?
This PR adds the code to compute commitment loss for the quantizer. The loss is only computed in the newly added forward() methods.
This is a small part of the bigger #24295
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #24295
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| 06-30-2023 10:45:26 | 06-30-2023 10:45:26 | Much of this was adapted from https://github.com/facebookresearch/encodec/blob/main/encodec/quantization/core_vq.py
For reference, a part of the conversation is in one of the specific changes as well:
https://github.com/huggingface/transformers/commit/4f697be0b62c4f3b0401ccbd00d1d46aac81906d<|||||>@ArthurZucker
Created a PR for this bit first, lemme know what you think. Thanks!<|||||>cc @sanchit-gandhi <|||||>Thanks for the review!
As I mentioned in https://github.com/huggingface/transformers/issues/24295#issuecomment-1614100206, I am busy in July so will be slow to iterate on this, but should have more time in August to keep iterating.<|||||>Awesome @hackyon! I believe the Meta AI authors are planning on releasing more fine-tuning code throughout July, so we'll be in a good position to finish this PR on your return ๐ค<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24593). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,592 | closed | Make (TF) CI faster (test only a random subset of model classes) | # What does this PR do?
Daily CI is currently running in 22h30m. @Rocketknight1 might have a way to bring it back to 19-20 hours.
For some tests, let's test only a (random) subset of the model classes ๐ .
Here is the timing of some very slow tests currently:
```
398.44s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_xla_fit
275.59s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation_extended
217.84s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model
106.25s call tests/models/bert/test_tokenization_bert_tf.py::BertTokenizationTest::test_saved_model
77.69s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_onnx_runtime_optimize
```
and
```
352.31s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_saved_model_creation_extended
272.56s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_compile_tf_model
270.84s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_xla_fit
132.59s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_onnx_runtime_optimize
``` | 06-30-2023 10:29:11 | 06-30-2023 10:29:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Some of the very slow tests (like `test_saved_model_creation_extended` and `test_xla_fit`) only apply to a few models anyway - they're in `test_modeling_tf_core.py`, so they shouldn't have a big effect on the total test runtime. I might have a couple of ideas for speeding up `test_compile_tf_model`, though!<|||||>> Let's not take a random subset but the first two then. To test the base model and a model with head.
Would it be ok to take the first one (base model) + a random other one with head?<|||||>Also, I looked a bit closer at this PR and I'm actually a bit scared of some of the changes - in particular, `test_pt_tf_model_equivalence` is one of the most important tests and picks up lots of implementation problems in TF ports, so I don't want to reduce its coverage!<|||||>@Rocketknight1
But that test is not changed, i.e. it doesn't use `get_random_model_classes` introduced here. Nothing to fear ๐ <|||||>> Would it be ok to take the first one (base model) + a random other one with head?
I don't like randomness in tests as it makes them flaky. <|||||>Well, in this situation, I do prefer to keep a random head model.
- We are reducing the number of model classes being tested due to the slow runtime. If we keep the fix model classes, we are likely to **miss failures in certain model heads**. (and for the involved tests in this PR, they all pass currently for their all model classes - if not, probably just one or two.)
- ~~Only slow tests are involved~~ --> **no flakyness shown on CircleCI.**
- Sorry, I am wrong in this. But I can change it to **only for slow tests**.
WDYT if I make changes only to slow tests?<|||||>I very much doubt we will have a failure on a model with head and not the others. With the randomness in the test, you won't be able to reproduce easily (and I don't see the test even printing the model class that failed) so I'd keep things reproducible. This is also on TensorFlow which has very low usage, so I don't think it's worth spending too much time over-engineering something.<|||||>OKOK<|||||>@Rocketknight1 OK for you? |
transformers | 24,591 | closed | DeepSpeed/FSDP ckpt saving utils fixes and FSDP training args fixes | # What does this PR do?
1. `_save` function saves `tokenizer` and `training_args.bin` in addition to model.
2. This PR rearranges logic for saving model for DS and FSDP such that the above 2 things are also saved in addition to model ckpt.
3. FIxes https://github.com/huggingface/transformers/issues/24641 | 06-30-2023 09:43:01 | 06-30-2023 09:43:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,590 | closed | Udate link to RunHouse hardware setup documentation. | # What does this PR do?
The [hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup) link gives a 404 error. Replaced with a [link](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup) that points to the latest version of the RunHouse documentation.
## Before submitting
- [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Documentation: @dongreenberg, @sgugger, @stevhliu and @MKhalusova
| 06-30-2023 09:22:24 | 06-30-2023 09:22:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks!! I'll update the one in Accelerate too. |
transformers | 24,589 | open | stuck in the evaluation_loop of trainer.py when training | ### System Info
- Ubuntu: 20.04.5
- GPU: A800(80G) x 8
- CUDA: 11.7
- NCCL: 2.14.3
- python: 3.9.16
- deepspeed: 0.9.5
- transformers: 4.30.2
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Problem Description:
I'm doing ***LLM pre-training***. The dataset is ready; the dataset class is my own implementation.
The base model is from the configuration file: LLaMA 7B (Hugging Face).
And other parameters are:
* batch_size: 2
* gradient_accumulation_steps: 2
* eval_batch_size: 1
* eval_steps: 12
* save_steps: 12
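As a rough sketch, the settings above would map onto `TrainingArguments` like this (names not listed above, such as `output_dir`, are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-pretrain",      # assumed name
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    eval_steps=12,
    save_steps=12,
    deepspeed="ds_config.json",       # the config shown below
)
```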
Additionally, ds_config is:
```
"zero_optimization": {
"stage": 1,
"offload_param": {
"device": "cpu"
},
"offload_optimizer": {
"device": "cpu"
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"contiguous_gradients": true,
"reduce_bucket_size": 5e8,
"overlap_comm": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 1e-05,
"betas": "auto",
"eps": 1e-08
}
},
```
### Question:
***When I start training from the beginning up to step 12, the first evaluation should be triggered according to my settings. Up to that point everything is normal: training works and the evaluation is triggered.
But the evaluation never ends.***
I added logging in my dataset class and saw that the first 12 training steps finished normally. The first batch of validation data was returned from my \_\_getitem\_\_ to the Hugging Face trainer, and then it gets stuck: no return, no further output, and no further call into the validation dataset (I implemented it, and no subsequent \_\_getitem\_\_ is called).
### Running stack
At this point, my cpu and gpu are fully loaded.
NVIDIA-SMI 515.86.01 | Driver Version: 515.86.01 | CUDA Version: 11.7

| GPU | Name | Persistence-M | Fan | Temp | Perf | Pwr:Usage/Cap | Memory-Usage | GPU-Util | Compute M. | MIG M. |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | NVIDIA A800 80G... | On | N/A | 47C | P0 | 76W / 300W | 58951MiB / 81920MiB | 100% | Default | Disabled |
| 1 | NVIDIA A800 80G... | On | N/A | 48C | P0 | 74W / 300W | 59001MiB / 81920MiB | 100% | Default | Disabled |
| 2 | NVIDIA A800 80G... | On | N/A | 47C | P0 | 72W / 300W | 58999MiB / 81920MiB | 100% | Default | Disabled |
| 3 | NVIDIA A800 80G... | On | N/A | 43C | P0 | 69W / 300W | 58953MiB / 81920MiB | 100% | Default | Disabled |
| 4 | NVIDIA A800 80G... | On | N/A | 43C | P0 | 71W / 300W | 58953MiB / 81920MiB | 100% | Default | Disabled |
| 5 | NVIDIA A800 80G... | On | N/A | 44C | P0 | 70W / 300W | 58999MiB / 81920MiB | 100% | Default | Disabled |
```
MiB Mem : 257598.2 total, 144592.4 free, 65631.8 used, 47374.0 buff/cache
MiB Swap: 532480.0 total, 531717.9 free, 762.0 used. 144353.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2335083 root 20 0 111.9g 12.9g 5.5g R 200.0 5.1 179:15.41 python3.9
2335101 root 20 0 111.7g 13.3g 5.9g R 199.7 5.3 178:57.67 python3.9
2335097 root 20 0 111.7g 13.3g 5.9g R 100.3 5.3 94:33.70 python3.9
2335099 root 20 0 111.7g 13.1g 5.7g R 100.3 5.2 95:05.42 python3.9
2335091 root 20 0 111.7g 13.1g 5.8g R 100.0 5.2 94:48.45 python3.9
2335095 root 20 0 111.7g 13.1g 5.7g R 100.0 5.2 95:00.15 python3.9
2335098 root 20 0 111.6g 13.1g 5.7g R 100.0 5.2 94:45.88 python3.9
2335096 root 20 0 111.7g 13.2g 5.8g R 99.7 5.2 94:39.61 python3.9
```
I figured out a way to print the stacks of all active threads:
```
Printing stack
-- Thread ID: 140306341001024 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
Printing stack
-- Thread ID: 140281920169792 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
-- Thread ID: 140275561256704 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139870071547712 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
-- Thread ID: 140452146153280 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139800833808192 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
-- Thread ID: 139794038388480 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/DebugStack.py", line 17, in _ThreadPrintStack
traceback.print_stack(frame)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tensorboard/summary/writer/event_file_writer.py", line 244, in run
self._run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tensorboard/summary/writer/event_file_writer.py", line 269, in _ru
data = self._queue.get(True, queue_wait_duration)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/queue.py", line 180, in get
self.not_empty.wait(remaining)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 316, in wait
gotit = waiter.acquire(True, timeout)
-- Thread ID: 139793899173632 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/tqdm/_monitor.py", line 60, in run
self.was_killed.wait(self.sleep_interval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 581, in wait
signaled = self._cond.wait(timeout)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/threading.py", line 316, in wait
gotit = waiter.acquire(True, timeout)
Printing stack
-- Thread ID: 140320438421312 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 140180603557696 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
-- Thread ID: 140174133536512 ---
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
Printing stack
-- Thread ID: 139808714364736 ---
File "/data/work/ChatGPT/trlx_rlhf/sft_fullpretrain/./train_gptj_summarize.py", line 281, in <module>
trainer.train()
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2020, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 2321, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3053, in evaluate
output = eval_loop(
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3266, in evaluation_loop
logits = self._pad_across_processes(logits)
File "/root/mambaforge/envs/trlx_env/lib/python3.9/site-packages/transformers/trainer.py", line 3410, in _pad_across_processes
sizes = self._nested_gather(size).cpu()
```
### What I have tried
Also, I tried:
1. Change model (change to gpt-j 6b)
2. Change deepspeed stage (1, 2, 3)
3. Change batch_size (1, 2, 4)
None of them worked; they all get stuck in the evaluation described above. I need help.
### Expected behavior
I want the evaluation to run properly and not get stuck. | 06-30-2023 09:13:16 | 06-30-2023 09:13:16 | We can't really help without knowing the code you are running.<|||||>@sgugger
I have located the problematic statement:
```
trainer.py
def _pad_across_processes(self, tensor, pad_index=-100):
....
# Gather all sizes
size = torch.tensor(tensor.shape, device=tensor.device)[None]
sizes = self._nested_gather(size).cpu()
....
```
Stuck at .cpu()
I broke `self._nested_gather(size).cpu()` apart and found that it is stuck in `.cpu()`.
In the above statement, tensor.shape==[1, 4608, 106078], tensor.device == 'cuda:0' .
Why is it stuck there?<|||||>You might have tensors of different sizes on your GPUs. This usually causes a hang when gathering.
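To illustrate, `_nested_gather` ultimately boils down to a collective call along these lines (a rough sketch, not the actual Trainer code):
```python
import torch
import torch.distributed as dist

def gather_sizes(size: torch.Tensor) -> torch.Tensor:
    # Collective call: every rank must reach this line with a matching tensor.
    # If one rank never arrives (or is stuck elsewhere), the ranks that did arrive
    # spin at 100% GPU/CPU utilisation, which matches the stacks above.
    buffers = [torch.zeros_like(size) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, size)
    return torch.cat(buffers, dim=0)
```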
Again, it's hard to know for sure when we don't know the code you are running :-)<|||||>Hi, I met similar problems with the latest Transformers, where training gets stuck in the evaluation_loop of trainer.py. @clinton81 Have you found any solution?<|||||>Downgrading Transformers to v4.19.4 works (same code, same data, same command).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,588 | closed | [i18n-KO] Translated `tasks/document_question_answering.md` to Korean | <!-- Please title the PR "[i18n-KO] Translated `<your_file>.mdx` to Korean" -->
# What does this PR do?
Translated the `tasks/document_question_answering.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record stays on the main issue! Please remove this when practicing with the PseudoLab repo. Thank you! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please only reveal the comment below, which asks the PseudoLab team members for a review, after all of the checks above are complete! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the comment below, which asks the Hugging Face staff for a review, after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 06-30-2023 08:21:19 | 06-30-2023 08:21:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo |
transformers | 24,587 | open | [WIP] Add Llama Flax Implementation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This is a work-in-progress port of Llama to Flax, leaving it as a draft PR for now.
The implementation is based heavily off the GPT-Neo and GPT-J Flax implementations.
Currently, the submodules are ready, I just need to assemble into a full model, check weight loading, add tests, and update the documentation.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [mentioned in this issue comment](https://github.com/huggingface/transformers/issues/22647#issuecomment-1579154174)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-30-2023 07:44:12 | 06-30-2023 07:44:12 | Very cool @vvvm23! Scanned through the PR and it looks very nice already - happy to do a full review when it's close to completion. Just drop me a line and I'll have a look! ๐ Likewise if you have any questions or queries, I'm on hand to help :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24587). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @vvvm23 and @sanchit-gandhi, do you guys have a timeline for this effort? Asking because I would love to import FlaxLlama from Hugging Face, but if it is going to take a while, I will probably build my own pipeline to import the model.
Not sure if this helps at all, but [here](https://github.com/young-geng/EasyLM/blob/main/EasyLM/models/llama/llama_model.py) you find an implementation of Llama in Flax (plus some other library-specific methods that you probably won't need).<|||||>Hi @gianlucadetommaso, I haven't had the time to work on this since this draft PR went live, but I am blocking time out this weekend to continue.<|||||>Cool to see community interest around running Flax Llama! Feel free to ping me here when you need a review @vvvm23!<|||||>Thanks @sanchit-gandhi I found a little time to continue today.
One issue I am noticing is that the tolerance when comparing the ground truth PyTorch implementation (in `modeling_llama.py`) and my own implementation, is a lot higher than I'd like. For three hidden layers in the decoder stack, I have to raise it to `atol=1e-2, rtol=1e-2`, with one hidden layer being at `atol=1e-3, rtol=1e-3` in order to pass. You can see the scratch test I am using at the bottom of `modeling_flax_llama.py`
I think some numerical differences are expected, but not sure to what degree. I am also testing with `float32` so that made me even more suspicious. Would you expect the results to be identical? This is my first time porting a PyTorch model to Flax. Thanks~<|||||>Update: I now have a full model working. I haven't checked if the pretrained weight loading wrappers (provided by the Flax GPTNeo implementation) work yet, but once they are it will be ready for review. I'll simultaneously clean it up and add some missing features whilst it is being reviewed.<|||||>Hey! Thanks for the progress update here @vvvm23 and great questions regarding numerical equivalence between models.
Generally, for any model less than 1B params we should be able to get equivalence to within 1e-5 between Flax and PyTorch. It's quite likely that you won't get this equivalence running the matmuls in bfloat16 on TPU. But you should be able to when running the matmuls in float32, see https://github.com/huggingface/transformers/issues/15754 and https://github.com/google/jax/issues/10413#issue-1212211265 for details
Here's a script that I used previously for checking PT / Flax equivalence for BLOOM: https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb You can ignore the bits about JIT'ing the forward pass for the time being. You can also uncomment the check to run it on CPU to force the highest precision, or use the decorator as provided
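Something along these lines works as a quick check (a rough sketch rather than the notebook's exact contents; the model and variable names are placeholders):
```python
import numpy as np
import torch

def check_pt_flax_equivalence(pt_model, flax_model, input_ids, atol=1e-5):
    # PyTorch forward pass in float32 on CPU
    pt_model = pt_model.to("cpu").eval()
    with torch.no_grad():
        pt_logits = pt_model(torch.tensor(input_ids)).logits.numpy()

    # Flax forward pass on the same inputs
    flax_logits = np.asarray(flax_model(np.asarray(input_ids)).logits)

    # Report the worst-case difference, then assert the tolerance
    print("max abs diff:", np.max(np.abs(pt_logits - flax_logits)))
    np.testing.assert_allclose(pt_logits, flax_logits, atol=atol)
```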
If we don't get 1e-5 precision, it's usually an indicator that we have a divergence in our model. Here, going through layer-by-layer and checking the hidden-states might be required to pinpoint it<|||||>Okay, thanks for the guidance and helper scripts ๐ฅ I expected that this lack of precision was not normal ๐
I'll get the pretrained wrappers working first and then focus on debugging the numerical divergence.
I'm aiming for end of this week to fix those numerical issues, but my responsibilities elsewhere are pulling me a lot, so fingers crossed ๐ค <|||||>I've begun my hunt for numerical bugs ๐
The first I squashed was rather strange. It seems `torch.rsqrt` and `jax.lax.rsqrt` do not match. This is used in the RMSNorm layers. Simple test to reproduce:
```
In [19]: a = np.asarray(a, dtype=np.float32)
In [20]: a
Out[20]:
array([1.16661310, 1.46686172, 0.13794081, 1.22346771, 1.17509305],
dtype=float32)
In [21]: torch.rsqrt(torch.from_numpy(a))
Out[21]: tensor([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471])
In [22]: jax.lax.rsqrt(a)
Out[22]: Array([0.92584133, 0.82566792, 2.69248700, 0.90407354, 0.92249471], dtype=float32)
In [23]: 1 / torch.sqrt(torch.from_numpy(a))
Out[23]: tensor([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471])
In [24]: 1 / jax.numpy.sqrt(a)
Out[24]: Array([0.92584139, 0.82566792, 2.69248700, 0.90407354, 0.92249471], dtype=float32)
```
So the fix there was just to replace the `jax.lax.rsqrt` calls with `1 / jax.numpy.sqrt(...)`
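For reference, the fixed norm ends up looking roughly like this (a minimal sketch with illustrative names, not the PR's actual code):
```python
import flax.linen as nn
import jax.numpy as jnp

class FlaxLlamaRMSNorm(nn.Module):
    hidden_size: int
    eps: float = 1e-6

    def setup(self):
        self.weight = self.param("weight", lambda _, shape: jnp.ones(shape), (self.hidden_size,))

    def __call__(self, hidden_states):
        variance = jnp.power(hidden_states.astype(jnp.float32), 2).mean(-1, keepdims=True)
        # 1 / sqrt tracks torch.rsqrt much more closely than jax.lax.rsqrt does
        hidden_states = hidden_states * (1 / jnp.sqrt(variance + self.eps))
        return self.weight * hidden_states
```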
The models still mismatch, so I'll keep digging.<|||||>@sanchit-gandhi The model now numerically matches in fp32 on CPU. The issue was that my backend had changed from CPU to GPU since fixing the `rsqrt` issue. I don't think we can expect a perfect match on GPU as the two models use fundamentally different backends. If there is anything you know of that could help remedy this, let me know.
What are the next steps to take? I am guessing some model tests, as well as trying it out on a real model checkpoint rather than random weights. However, my dev machine goes OOM when attempting to load the checkpoint on CPU.<|||||>Hey @vvvm23! Excellent work on pinpointing the difference between torch and jax.lax `rsqrt` and glad to hear we're within numerical precision using fp32 on CPU - we can be pretty confident we have an accurate Flax implementation based on these results. For GPU, there will be differences between PyTorch and JAX. This is expected since JAX fundamentally works differently to PyTorch with how it computes the matmuls, and is OK since the JAX model will typically generate predictions that are 'as good' as the PyTorch one.
Adding some tests and updating the docs would be the most sensible next steps! Again, you can refer to the Flax GPT Neo model to see the relevant tests to add: https://github.com/huggingface/transformers/blob/main/tests/models/gpt_neo/test_modeling_flax_gpt_neo.py
> However, my dev machine goes OOM when attempting to load the checkpoint on CPU.
That's interesting - are we loading the weights on GPU by accident? There shouldn't be any GPU OOM if running on CPU. We might see our RAM get full if loading extremely large weights, but the GPU memory shouldn't be affected. What model size are you loading? We can try the smallest 7b checkpoint: https://huggingface.co/meta-llama/Llama-2-7b<|||||>Awesome thanks, tests and docs it is! I am currently on leave so won't be progressing on this until the 31st.
> That's interesting - are we loading the weights on GPU by accident?
Actually, in the end no. By OOM on my dev machine, I meant out of CPU memory. Switching to a GPU backend meant I could load the model without running out of memory. So, nothing to worry about ๐
<|||||>Awesome - thanks for the update @vvvm23. Looking forward to doing a full review of the PR on your return! |
transformers | 24,586 | closed | tokenizer = AutoTokenizer.from_pretrained('distilroberta-base') report error | ### System Info
OSError
Can't load tokenizer for 'distilroberta-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'distilroberta-base' is the correct path to a directory containing all relevant files for a RobertaTokenizerFast tokenizer.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run a Python script with the following code:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
when the program runs at that line, it will report an OS ERR:
OSError
Can't load tokenizer for 'distilroberta-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'distilroberta-base' is the correct path to a directory containing all relevant files for a RobertaTokenizerFast tokenizer.
actually the model 'distilroberta-base' is an official model instead of a model from a hf user, its link is 'https://huggingface.co/distilroberta-base' without 'models' path
how can we deal with this problem?
### Expected behavior
The program should be able to run the following line correctly:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
and ideally also work with a cache_dir, as follows:
tokenizer = AutoTokenizer.from_pretrained('distilroberta-base', cache_dir='./')
so that the model is automatically downloaded into my project path without any error.
It is really inconvenient to use right now: with the default download mode (from_pretrained without args), it downloads to the C drive on Windows,
if we assign a cache_dir, it will also report error. Even if we try to override the from_pretrained func to use tqdm to download the model into assigned path, it will still report error at request line like: ('Connection aborted.', ConnectionResetError(10054, | 06-30-2023 06:57:13 | 06-30-2023 06:57:13 | Please follow the template when reporting issue: share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.<|||||>However, I have tried `tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')` and it works perfectly (for me).
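In the meantime, when the connection to the Hub is unreliable, one common workaround (just a sketch; it assumes the files can be downloaded once from a machine or network that does reach hf.co) is to save everything locally and then load from disk:
```python
from transformers import AutoTokenizer

# one-time download on a machine/network that can reach the Hub
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
tokenizer.save_pretrained("./distilroberta-base-local")

# afterwards, load from the local folder without touching the Hub
tokenizer = AutoTokenizer.from_pretrained("./distilroberta-base-local")
```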
Could you also check with other model checkpoints on the Hub?<|||||>I have tried it and it also works for me just fine<|||||>> Please follow the template when reporting issue: share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.
Traceback (most recent call last):
File "***\.conda\envs\tfui\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "***\.conda\envs\tfui\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "***\.conda\envs\tfui\Scripts\transformers-cli.exe\__main__.py", line 4, in <module>
File "***\.conda\envs\tfui\lib\site-packages\transformers\commands\transformers_cli.py", line 26, in <module>
from .user import UserCommands
File "***\.conda\envs\tfui\lib\site-packages\transformers\commands\user.py", line 20, in <module>
from huggingface_hub.hf_api import HfFolder, create_repo, login, logout, whoami
ImportError: cannot import name 'login' from 'huggingface_hub.hf_api' (***\.conda\envs\tfui\lib\site-packages\huggingface_hub\hf_api.py)<|||||>ValueError
Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
I guess it is mostly because of the GFW<|||||>Maybe first try to upgrade `huggingface_hub` version<|||||>> Maybe first try to upgrade `huggingface_hub` version
OSError
We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like distilroberta-base is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.
I have upgraded it to 0.15.1, but it still has error<|||||>Could you check if this is only for `distilroberta-base` or same error for other checkpoint like `gpt2` or `bert-base-uncased`.
Could you re-run `transformers-cli env`?<|||||>> Could you check if this is only for `distilroberta-base` or same error for other checkpoint like `gpt2` or `bert-base-uncased`.
>
> Could you re-run `transformers-cli env`?
OSError
We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like gpt2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.
//=================================================
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.30.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1 but is ignored because of PyTorch version too old.
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in><|||||>Can't really know what happens in your environment. Probably try with a clean (new) virtual python environment, and install `transformers` as `pip install transformers[dev]`. If still not working, there is nothing we can't help: it's likely your env. connection issue.<|||||>I created a totally new env and installed transformers 4.30.2 (latest ver). When I run the code below:
tokenizer = AutoTokenizer.from_pretrained(xxx)
it returns following error:
Failed to import transformers.convert_graph_to_onnx because of the following error (look up to see its traceback):
DLL load failed while importing _imaging: |
transformers | 24,585 | closed | [several models] improve readability | Honestly I had no idea what `torch.ones([]) * self.config.logit_scale_init_value` would return - it's not documented either.
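For reference, a quick check of what the two spellings actually produce (the value 2.6592 is just an example, mirroring the usual CLIP-style default):
```python
import torch

logit_scale_init_value = 2.6592  # example value only

a = torch.ones([]) * logit_scale_init_value  # 0-dim (scalar) tensor
b = torch.tensor(logit_scale_init_value)     # the same scalar tensor, but the intent is obvious

print(a.shape, b.shape)      # torch.Size([]) torch.Size([])
print(torch.allclose(a, b))  # True
```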
~Proposing to change it to a very clear `torch.tensor(1.0)` which leaves no doubt to what it does.~
Proposing to change it to a very clear `torch.tensor(self.config.logit_scale_init_value)` which leaves no doubt to what it does. | 06-30-2023 04:22:36 | 06-30-2023 04:22:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> size ([int] โ a sequence of integers **defining the shape** of the output tensor.
It's actually mentioned, but I agree it's a bit less explicit than `torch.tensor(1)`.<|||||>Gentlemen, sleeping more on it, actually it just came to me that the cleanest most readable solution is just:
```
self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))
```
`torch.ones` isn't even needed ;)
Do you agree?
and yes, I can then fix other places.<|||||>ok, I have used:
```
self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))
```
pattern everywhere I found `torch.ones([])`, plus in a few places I used `from_numpy` where the input was numpy, so I touched on other models as well.
Please have a look.
<|||||>`from_numpy` didn't quite work in 2 models where I tried it, so going to use the normal `torch.tensor` constructor like everywhere else in this PR.<|||||>Nice finding :-) |
transformers | 24,584 | open | fsdp support bool type in trainArgs, use len(args.fsdp) would evoke TypeError when set fsdp=True | https://github.com/huggingface/transformers/blob/2dc5e1a120176594ed2dcb7d2f02a5dd62266232/src/transformers/trainer.py#L433 | 06-30-2023 01:55:12 | 06-30-2023 01:55:12 | No, because it has been converted [here](https://github.com/huggingface/transformers/blob/2dc5e1a120176594ed2dcb7d2f02a5dd62266232/src/transformers/training_args.py#L1461).
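Roughly, the conversion there does something along these lines (a paraphrased sketch, not the exact source; see the linked line in `training_args.py` for the real code):
```python
# inside TrainingArguments.__post_init__ (paraphrased)
if isinstance(self.fsdp, bool):
    self.fsdp = "full_shard" if self.fsdp else ""
if isinstance(self.fsdp, str):
    self.fsdp = self.fsdp.split()  # a list by the time the Trainer calls len(self.fsdp)
```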
Please do not open issue without a code reproducer of the problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,583 | closed | use tokenizer model max length | # What does this PR do?
This PR changes the `block_size` to `tokenizer.model_max_length`, instead of using the default `1024`.
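Concretely, the intent is roughly the following (an illustrative sketch of the example script's logic, not the exact diff):
```python
if data_args.block_size is None:
    # previously this fell back to a hard-coded 1024; now follow the tokenizer
    block_size = tokenizer.model_max_length
else:
    block_size = min(data_args.block_size, tokenizer.model_max_length)
```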
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-29-2023 17:30:24 | 06-29-2023 17:30:24 | |
transformers | 24,582 | closed | Fix annotations | # What does this PR do?
This PR is minor. Just fixed wrong annotations.
All models were tested and confirmed against the PyTorch model.
I think all the code I modified was generated based on the template.
Please check if it is okay to modify the annotation in the template code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? | 06-29-2023 16:42:41 | 06-29-2023 16:42:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,581 | open | _keys_to_ignore_on_load_unexpected not working with GPT2 model | ### System Info
- `transformers` version: 4.30.2
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I saved a PyTorch lightning model which contained a RoGPT2-large model from version 4.27.4. When trying to load the model in version 4.30.2, I get the following error:
Traceback (most recent call last):
File "/Users/alexandrudima/home/Research/Outlook Add-ins/Optimize/backend.py", line 23, in <module>
generation_model = GenerationModel.load_from_checkpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/module.py", line 1520, in load_from_checkpoint
loaded = _load_from_checkpoint(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/saving.py", line 89, in _load_from_checkpoint
storage = _load_state(cls, checkpoint, strict=strict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/pytorch_lightning/core/saving.py", line 154, in _load_state
keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GenerationModel:
Unexpected key(s) in state_dict: "model.transformer.h.0.attn.bias", "model.transformer.h.0.attn.masked_bias", "model.transformer.h.1.attn.bias", "model.transformer.h.1.attn.masked_bias", "model.transformer.h.2.attn.bias", "model.transformer.h.2.attn.masked_bias", "model.transformer.h.3.attn.bias", "model.transformer.h.3.attn.masked_bias", "model.transformer.h.4.attn.bias", "model.transformer.h.4.attn.masked_bias", "model.transformer.h.5.attn.bias", "model.transformer.h.5.attn.masked_bias", "model.transformer.h.6.attn.bias", "model.transformer.h.6.attn.masked_bias", "model.transformer.h.7.attn.bias", "model.transformer.h.7.attn.masked_bias", "model.transformer.h.8.attn.bias", "model.transformer.h.8.attn.masked_bias", "model.transformer.h.9.attn.bias", "model.transformer.h.9.attn.masked_bias", "model.transformer.h.10.attn.bias", "model.transformer.h.10.attn.masked_bias", "model.transformer.h.11.attn.bias", "model.transformer.h.11.attn.masked_bias", "model.transformer.h.12.attn.bias", "model.transformer.h.12.attn.masked_bias", "model.transformer.h.13.attn.bias", "model.transformer.h.13.attn.masked_bias", "model.transformer.h.14.attn.bias", "model.transformer.h.14.attn.masked_bias", "model.transformer.h.15.attn.bias", "model.transformer.h.15.attn.masked_bias", "model.transformer.h.16.attn.bias", "model.transformer.h.16.attn.masked_bias", "model.transformer.h.17.attn.bias", "model.transformer.h.17.attn.masked_bias", "model.transformer.h.18.attn.bias", "model.transformer.h.18.attn.masked_bias", "model.transformer.h.19.attn.bias", "model.transformer.h.19.attn.masked_bias", "model.transformer.h.20.attn.bias", "model.transformer.h.20.attn.masked_bias", "model.transformer.h.21.attn.bias", "model.transformer.h.21.attn.masked_bias", "model.transformer.h.22.attn.bias", "model.transformer.h.22.attn.masked_bias", "model.transformer.h.23.attn.bias", "model.transformer.h.23.attn.masked_bias", "model.transformer.h.24.attn.bias", "model.transformer.h.24.attn.masked_bias", "model.transformer.h.25.attn.bias", "model.transformer.h.25.attn.masked_bias", "model.transformer.h.26.attn.bias", "model.transformer.h.26.attn.masked_bias", "model.transformer.h.27.attn.bias", "model.transformer.h.27.attn.masked_bias", "model.transformer.h.28.attn.bias", "model.transformer.h.28.attn.masked_bias", "model.transformer.h.29.attn.bias", "model.transformer.h.29.attn.masked_bias", "model.transformer.h.30.attn.bias", "model.transformer.h.30.attn.masked_bias", "model.transformer.h.31.attn.bias", "model.transformer.h.31.attn.masked_bias", "model.transformer.h.32.attn.bias", "model.transformer.h.32.attn.masked_bias", "model.transformer.h.33.attn.bias", "model.transformer.h.33.attn.masked_bias", "model.transformer.h.34.attn.bias", "model.transformer.h.34.attn.masked_bias", "model.transformer.h.35.attn.bias", "model.transformer.h.35.attn.masked_bias".
I suspect that something is not ok with the _keys_to_ignore_on_load_unexpected parameter defined in GPT2LMHeadModel, but I have no idea what could be the problem.
P.S. The model can be loaded without problems when using transformers==4.27.4.
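A possible workaround (added here as a sketch only; the path and module names are placeholders, and this is not necessarily what was done in this case) is to strip those buffer keys from the checkpoint before loading it:
```python
import torch

ckpt = torch.load("generation_model.ckpt", map_location="cpu")  # placeholder path
state_dict = ckpt["state_dict"]

# drop the attention buffers that newer transformers versions no longer expect
cleaned = {
    k: v for k, v in state_dict.items()
    if not k.endswith(("attn.bias", "attn.masked_bias"))
}
# `lightning_module` stands in for your GenerationModel instance
lightning_module.load_state_dict(cleaned, strict=False)
```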
### Expected behavior
The model should also be loadeed without problems in 4.30.2. | 06-29-2023 15:19:49 | 06-29-2023 15:19:49 | The problem stems from PyTorch lightning trying to load the model as far as I can see. Models should be loaded via the `from_pretrained` method we provide. If you are using `.load_state_dict()` it's up to you to clean the state dict you have and make sure it only contains keys the model accepts/<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,580 | closed | 🌐 [i18n-KO] Translated `custom_tools.mdx` to Korean | # What does this PR do?
Translated the `custom_tools.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo
May you please review this PR? | 06-29-2023 15:03:08 | 06-29-2023 15:03:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM๐<|||||>Thanks for the PR, @sim-so!
There's since been an update on main to all of our documentation files. Could you update the extension from `.mdx` to `.md` to match please?<|||||>Thanks for letting me know about the change! @amyeroberts!
I updated the extension to `.md`.
Could you review this PR?
@sgugger, @ArthurZucker, @eunseojo |
transformers | 24,579 | closed | Datasets in run_translation.py | ### System Info
Hello there! ๐
I'm following along the [run_translation.py ](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py) example.
Thanks for making it! It expands nicely on the translation docs [tutorial](https://huggingface.co/docs/transformers/tasks/translation)
### Context
Managed to configure flags for training. When launching in the CLI
```
python train_model.py --model_name_or_path '/Users/.../The-Lord-of-The-Words-The-two-frameworks/src/models/t5-small' --output_dir '/en-ru-model' --dataset_name '/Users/.../The-Lord-of-The-Words-The-two-frameworks/src/data/opus_books' --dataset_config_name en-ru --do_train --source_lang en --target_lang ru --num_train_epochs 1 --overwrite_output_dir
```
the following error appears
```
raise TypeError("Dataset argument should be a datasets.Dataset!")
TypeError: Dataset argument should be a datasets.Dataset!
```
Then I read the [forum recommendation](https://discuss.huggingface.co/t/quick-tour-train-using-tensorflow-gives-dataset-argument-should-be-a-datasets-dataset-error/33657), tried launching the training with the `tf_eval_dataset` creation commented out, and launched training. The **model trained** without the [eval_dataset](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L535).
When I passed the flag `--do_eval` it raised error flagged [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L444)
I downloaded the [opus books dataset](https://huggingface.co/datasets/opus_books) and I saw in the README.md that it doesn't have a validation split (a sketch of a manual split is shown after the config below)
```
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5190880
num_examples: 15496
download_size: 1613419
dataset_size: 5190880
```
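For context, one way to get a validation split out of such a dataset (a sketch using 🤗 Datasets, outside the example script itself):
```python
from datasets import load_dataset

raw_datasets = load_dataset("opus_books", "en-ru")
# opus_books only ships a train split, so carve out a validation set manually
split = raw_datasets["train"].train_test_split(test_size=0.1, seed=42)
raw_datasets["train"] = split["train"]
raw_datasets["validation"] = split["test"]
```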
### Issue 1. Reproducibility coming from tutorial
* Can you please confirm that this **example runs straightforward** with [WMT19](https://huggingface.co/datasets/wmt19) and that I might not have this issue taking this dataset and not the opus books one?
* Would you be willing to accept a PR with a comment in the example either pointing to the readme table or making more explicit that this example comes with a specific dataset with its link around [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/translation/run_translation.py#L311) ? Is there a way you think I could help those users having the path from docs tutorial to script example ?
Am I missing something? I think it's dataset-related, but I'm not sure anymore...
### Issue 2. Broken link
Found a [broken link](https://huggingface.co/docs/datasets/loading_datasets.html.); if you are OK with it, I'll fix it with [this](https://huggingface.co/docs/datasets/loading)
#### Dependencies
```
transformers==4.31.0.dev0
tensorflow-macos==2.10.0
```
#### Tangential and mental model
I'm actually following [this script](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py), which is a copy that came recommended in #24254. Please let me know if something has changed. I'm looking at the [history](https://github.com/huggingface/transformers/commits/main/examples/tensorflow/translation/run_translation.py) and the last commit seems to be from Jun 7 while mine is from Jun 13
I grouped the broken link with dataset in one issue as it might impact 1 PR for Reproducibility, but let me know if you prefer them separately.
Thanks so so much for your help ๐ & thanks for the library!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. run the script
2. download opus books dataset
3. config flags
4. run script with and without eval_dataset logic
### Expected behavior
- Dataset? Either with a link in the README.md or commented in the script?
- Correct link for the datasets loading docs
Tagging @sgugger
| 06-29-2023 14:54:55 | 06-29-2023 14:54:55 | cc @Rocketknight1 since this is a TensorFlow example.<|||||>Hey @Rocketknight1 ๐ I think we crossed in #24341 . Thanks for the Notebook repository discovery .
It was a nice quick fix!
I've given another try to some vector thoughts posted in the issue.
* Regarding **reproducibility**: I read the script again, tried to get some distance, and analyzed it as an isolated example coming from the library. The script is quite _structured_, the _documentation comments are well-suited_, and it _generalizes really well_. Adding the dataset name here wouldn't really work. Besides, if the dataset associated with the example changes, it would require a change. At this point I would maybe add a small sentence with a recommendation to go through the README.md, so the example remains general/scalable across various datasets. But this is minor in retrospect. Does that make sense to you?
* Sent a PR to fix the broken link
Thanks for the script structure and the guidance! ๐
<|||||>### Comments on Issue 1
Currently the run_translation.py script works well with the [wmt16](https://huggingface.co/datasets/wmt16) dataset, as it provides train, test, and validation splits.
I'm closing this issue, as the dataset for running the script has been found, and the broken link was fixed through PR #24594 |
transformers | 24,578 | closed | fix peft ckpts not being pushed to hub | # What does this PR do?
1. Currently, when ckpts are saved every `save_steps` and `push_to_hub=True`, PEFT ckpts aren't being pushed to hub.
Reason:
every `save_steps` , the ckpts are being written to local `checkpoint-xx` folders. Now, in the trainer's function `_push_from_checkpoint` , it will copy the model files from the latest ckpt folder `checkpoint-xx` to the `output_dir` which gets pushed to the Hub. Note that the modeling_files doesn't have `ADAPTER_WEIGHTS_NAME` or `ADAPTER_SAFE_WEIGHTS_NAME` or `ADAPTER_CONFIG_NAME` leading to them not being pushed.
This PR fixes this.
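In code terms, the fix is roughly along these lines (a paraphrase of the idea, not the exact diff; the import paths and the `checkpoint_folder`/`output_dir` variables are assumptions standing in for the Trainer's own locals):
```python
import os
import shutil

from transformers.utils import (
    CONFIG_NAME, WEIGHTS_NAME, SAFE_WEIGHTS_NAME,
    ADAPTER_CONFIG_NAME, ADAPTER_WEIGHTS_NAME, ADAPTER_SAFE_WEIGHTS_NAME,
    is_peft_available,
)

# paraphrased sketch of Trainer._push_from_checkpoint
modeling_files = [CONFIG_NAME, WEIGHTS_NAME, SAFE_WEIGHTS_NAME]
if is_peft_available():
    modeling_files.extend([ADAPTER_CONFIG_NAME, ADAPTER_WEIGHTS_NAME, ADAPTER_SAFE_WEIGHTS_NAME])

for modeling_file in modeling_files:
    src = os.path.join(checkpoint_folder, modeling_file)  # latest checkpoint-xx folder
    if os.path.isfile(src):
        shutil.copy(src, os.path.join(output_dir, modeling_file))
```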
cc @younesbelkada | 06-29-2023 14:44:21 | 06-29-2023 14:44:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,577 | open | Add imageArray | # What does this PR do?
Adds a new class `ImageArray` for use as part of the image processing pipeline. It acts as an array container, which we can use to store information about the image e.g. the data format. This is the recommended way to create 'array-like' numpy objects with persistent attributes: https://numpy.org/doc/stable/user/basics.dispatch.html
The intention is to enable users to explicitly set information about the image e.g. data_format and have that carried through the processing pipeline in a stateful way. This addresses issues where the input image(s) information is repeatedly inferred unnecessarily in functions, or when it's ambiguous e.g. image of shape `(3, 3, 3)`. See:
* #21981
* #21638
* #22577
Defining `__array_ufunc__` and `__array_function__` means `ImageArray` can have numpy operations e.g.
```python
>>> from transformers.image_utils import ImageArray
>>> import numpy as np
>>> x = np.random.randint(0, 256, (2, 2, 3))
>>> img = ImageArray(x)
>>> img
ImageArray([[[ 20 232 120]
[197 244 147]]
[[ 47 241 95]
[ 73 251 140]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Standard array operations - multiplication, addition etc. are possible
>>> img * 2
ImageArray([[[ 40 464 240]
[394 488 294]]
[[ 94 482 190]
[146 502 280]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
>>> img + img
ImageArray([[[ 40 464 240]
[394 488 294]]
[[ 94 482 190]
[146 502 280]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Numpy functions and array methods can be used
>>> np.mean(img, axis=-1)
ImageArray([[124. 196. ]
[127.66666667 154.66666667]], data_format=none, num_channels=0, shape=(2, 2))
>>> img.mean(axis=-1)
ImageArray([[124. 196. ]
[127.66666667 154.66666667]], data_format=none, num_channels=0, shape=(2, 2))
# Supports slicing
>>> img[:, :, 1]
ImageObject([[232 244]
[241 251]], data_format=none, num_channels=0, shape=(2, 2))
# Supports type casting
>>> img.astype(np.float32)
ImageObject([[[ 20. 232. 120.]
[197. 244. 147.]]
[[ 47. 241. 95.]
[ 73. 251. 140.]]], data_format=channels_last, num_channels=3, shape=(2, 2, 3))
# Can be cast back as a numpy array
>>> np.array(img)
array([[[ 20, 232, 120],
[197, 244, 147]],
[[ 47, 241, 95],
[ 73, 251, 140]]])
# Is a numpy array isinstance
>>> isinstance(img, np.ndarray)
True
```
## Tricky bits
Although this enables the ImageArray to be used directly in existing numpy logic, it does create issues when interfacing between other frameworks like `torch` or `PIL`. The following operations fail:
```
PIL.Image.fromarray(img)
torch.from_numpy(img)
```
This is because these libraries directly access the underlying memory using python's buffer protocol. As far as I can tell, there is no direct way of exposing this on the Python side, and it would require writing c code to enable. This seems like overkill to me. The only case I know this to cause an issue, is in the pix2struct image processor which uses some [torch specific logic](https://github.com/huggingface/transformers/blob/6eedfa6dd15dc1e22a55ae036f681914e5a0d9a1/src/transformers/models/pix2struct/image_processing_pix2struct.py#L260) (which ideally would be removed).
As image processors are almost exclusively used with direct calls, i.e. `image_processor(img, return_tensors="pt")`, and the torch.tensor batch conversion still works, I don't expect this to cause many issues.
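For what it's worth, the failing calls above can be worked around by materialising a plain ndarray first (a quick sketch, where `img` is an `ImageArray` as in the examples above):
```python
import numpy as np
import torch
from PIL import Image

plain = np.array(img)                 # shown above to work: back to a regular ndarray
tensor = torch.from_numpy(plain)      # fine now that a real buffer is exposed
pil_image = Image.fromarray(plain.astype(np.uint8))
```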
One way of getting this to work is to return numpy arrays when array methods are called:
* `np.mean(arr)` would return an `ImageArray`, while `image_array.mean(...)` would return a numpy array.
Tbh, I wasn't able to completely figure out the interplay with this functionality, as `torch.from_numpy` seems to just be calling C code.
## Next steps
- [ ] Adapt functionality in `image_transforms` to use the new `ImageArray` class
- [ ] Add some logic for array operations to remove repeatedly finding e.g. `num_channels` when resulting array is created
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-29-2023 14:27:41 | 06-29-2023 14:27:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24577). All of your documentation changes will be reflected on that endpoint.<|||||>@ydshieh Adding you as a first reviewer as you're always able to give good SWE advice :) This is quite a big design decision, so will be following up with requests from reviews from others once we've ironed out details here, if that's OK. <|||||>@amyeroberts Thanks for requesting me for the first review. I would like to hear a bit more from you despite not looking through the changes yet ๐
- So `ImageObject` will be only used inside the methods of image processor/processing, and the arguments/return values will remain as numpy array?
- From the changes in image processor files, I don't see how this new class helps reducing/simplifying the logic of processing images. Am I missing anything ...?<|||||>@ydshieh
> So ImageObject will be only used inside the methods of image processor/processing, and the arguments/return values will remain as numpy array?
Mostly. In most cases, this won't be seen by users because the image processors are called with a framework specified e.g. `return_tensors="pt"`
If the user doesn't specify `return_tensors`, then the returned objects would be a list of `ImageObject`. Currently, `return_tensors=None` returns a list of numpy arrays.
I could make sure to return numpy arrays at the end of processing with something like:
```py
images = [image.numpy() for image in images]
```
before passing to `BatchFeature`.
In the future, once the image transforms have been adapted to use the `ImageObject` attributes, users will see `ImageObject` returned if they call methods directly, either on the image processor or from the transforms library:
```py
from transformers.image_transforms import resize
# Returned resized_image is an ImageObject object
resized_image = image_processor.resize(image, size={"height": h, "width": w})
resized_image = resize(image, size={"height": h, "width": w})
```
> From the changes in image processor files, I don't see how this new class helps reducing/simplifying the logic of processing images. Am I missing thing ...?
It doesn't yet. This is just introducing the object and replacing the current numpy arrays to ensure everything still works as-is. Part of the next steps is updating logic in e.g. `image_transforms` and in some of the array logic to simplify things and reduce repeated calculations.
<|||||>OK Thank you.
In this PR, the `image` argument of `preprocess` remains a numpy array, which is ✅. The return values should be kept as numpy arrays (if not returning tensors), which you will update in this PR ✅.
In the next PR, you will update the file `src/transformers/image_transforms.py` to use `ImageObject` ✅.
The only thing I am a bit worried about is whether the input/output of the methods in that file will be changed: those are still considered public methods (right? We should discuss this with @sgugger anyway.) and we should keep them accepting numpy array input, and returning numpy arrays where they currently do. This might cause the numpy array <--> ImageObject conversion to happen several times, which I am not 100% sure you would love.
However, this is a question for the next PR, not for this PR.
<|||||>Hi @amyeroberts
Could you remind me the reason to remove `ImageObject` and only use `ImageArray`. I just need to refresh my memory, thank you ๐ <|||||>@ydshieh Of course :) Sorry, I should have added some explanatory comments.
I actually just renamed `ImageObject` to `ImageArray` - the class hasn't been removed.
I did remove casting inputs to `ImageObject` / `ImageArray` in the image processors as it make the PR big and required tackling a few parts of the processing logic which I believe is out of scope. <|||||>@sgugger Yes, I understand. Tbh, I'd rather not have this class. Originally I wanted just a wrapper around the array that could be passed along with the image instead of additional arguments everywhere in the processing methods and functions. Unfortunately, it's necessary to wrap the array like this to have the state persist with numpy array operations and not needing tonnes of extra handling code.
I'm going to quickly write up an alternative with passing arguments around and compare the two.
|
transformers | 24,576 | closed | Fix ESM models buffers | # What does this PR do?
Apparently the keys are important to be loaded from the state dict even if we create them at init. | 06-29-2023 14:15:02 | 06-29-2023 14:15:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24576). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,575 | open | ๐ Text Generation docs rework | # What is this?
This is an issue to discuss and track the rework of the docs for text generation. Comments and feedback are appreciated, as always ๐ค
# Current issues
1. Our [main reference for text generation](https://huggingface.co/blog/how-to-generate) is not in the docs and is quite outdated
2. The docs regarding text generation are scattered, and it is not simple to navigate between them -- the reader has to know where to look for them
3. We lack examples beyond the simplest forms of text generation
4. We have undocumented advanced use cases, such as setting a custom stopping criteria
5. We are not clear about what the user can't do
# Proposed plan
EDIT -- incorporated feedback up to [this comment](https://github.com/huggingface/transformers/issues/24575#issuecomment-1617747844) (including)
I'd like to split the plan into three parts:
1. Designing a simpler entry point to text generation, from which all related documentation is discoverable
2. Upgrading the developer guides to cover the full potential of text generation
3. Make our code more self-documenting and other code changes
## 1. Designing a simpler entry point for text generation docs
Tackles issues 1 and 2.
This part is further divided into two actions:
- [x] The [blog post](https://huggingface.co/blog/how-to-generate) is still a solid reference for the background in text generation, but it holds old examples (`tensorflow`!) and focuses a bit too much on `top_p`/`top_k`. Let's retouch it.
- [ ] Create a short tutorial to serve as an entry point to the multiple forms of text generation. Like the other tutorials, it contains references to related docs throughout the text (let's see if it is enough to handle discoverability -- we can create a stand-alone related docs section in the future if needed). It would also cover a few basics like "use left-padding when doing batched generation with decoder-only models" and "double-check your generate kwargs".
Related docs:
1. Tasks
2. Related developer guides
3. API reference
4. Outside `transformers` (e.g. `optimum`, `text-generation-inference`, LLM leaderboard, non-HF libs like `autogptq`?)
## 2. Upgrading the developer guides
Tackles issues 3 and 4.
We currently have [one developer guide](https://huggingface.co/docs/transformers/generation_strategies), which writes about the API and a few basic ways to manipulate text generation. I propose we improve the existing one and add 2 new guides, preferably with examples that cover more modalities and use cases:
- [ ] 1. Improve the existing guide -- Add a section about the impact of logits processors, and another on how stopping conditions operate.
- [ ] 2. "Prompting" -- Some basic "do and don'ts" regarding prompting and how different types of models respond differently to it (encoder-decoder vs decoder, instruction-tuned vs base), the importance of prompting on chat applications
- [ ] 3. Using LLMs, with a focus on the 1st L (large) -- write about variable types, quantization, device mapping, advanced architectures (alibi, rope, MQA/GQA), flash attention
- [ ] 4. Advanced examples (name?) -- Concrete use cases that make use of many features at once, to serve as inspiration: how to control between extractive and abstractive summarization, retrieval-augmented generation, and other modality-specific examples
## 3. Self-documenting code and other code changes
Tackles issues 3 and 5.
- [ ] Let's be honest -- the best user experience is when no docs are needed at all. We can improve our game here, by performing parameterization validation. Currently, our validation step is very superficial, and users are allowed to do things like passing `temperature` with `do_sample=False`, ultimately resulting in GH issues. I'd suggest performing a hard validation and throwing informative exceptions, pointing to the redesigned docs ๐ค
- [ ] In parallel, our logits processors and stopping condition classes are missing docstring examples on how to use them. This should make our API reference much more robust. | 06-29-2023 14:14:22 | 06-29-2023 14:14:22 | Hello there @gante ! ๐
First and foremost, thanks for opening a discussion about this.
Would like to glimpse a couple of vector thoughts about point 1, as point 2 and 3 seem consistent and structured enough
### 1. Designing a "home page" for text generation docs
As a user, the underlying pattern I'm glimpsing from the **Tutorials** section is that it refers to _how to use the library from the engineering perspective, in abstract terms_ (preprocess data, share your model, fine-tune, etc.). The tutorials seem like a library approach for "HOW-TO" do things. In fact, in the Tutorials section, several examples about vision, audio and language are displayed.
I would think about putting a **Text Generation** section directly in Task guides , inside Natural Language processing, at the top , as it is related to a challenge to solve ( Text classification, Token classification ) . This doesnโt entail that one of the main โHOW-TOsโ related to text generation would be included inside Tutorials as a section. From what Iโm taking for the [guide](https://huggingface.co/blog/how-to-generate), there is an insightful section of _Search_ and _Sampling_, that could be added to the Tutorials, and a more detailed clarification added in Tasks and Developer guides.
The thing is that following this schema, at first sight ( abstracting **main challenge** from guide in **Tutorials** and add a **robust example or some "home-page"** references in **Tasks** with link to developer guides ) seems more _coherent with your current structure._
On the other hand, and tangential, why not adding a [LLMs leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) link somewhere (maybe in point 2) so the users can be mindful about the state of the art of the models in terms of perf for **Text generation** tasks ?
Hope I explained myself clearly enough ๐งญ
Thanks again for the open discussion! And for making the library! ๐
<|||||>Big +1 on the 3.) point. I think self-explanatory code with better doc string examples and arg/kwarg checking would take us a long way!
RE: 1.) Yes this makes sense, but I think a single concise page is probably better than a homepage that links to everything we already have as this might be too difficult to keep up to date and might also be too complex. A single concise page under "Tutorials" which is a strong iteration on the blog post "how to generate" (which is still one of our most read blog posts) could be a good starting point. The blog post does a very good job at explaining how LLMs fundamentally work. It is however not up to date anymore and also puts too much focus on things like top_k and top_p. So a strong incremental improvement (more tailored to new LLMs) could serve as the main point of introduction to text-generation and be put under "Tutorials".
RE: 2.) Yes I think these could all go into Developer guides and be a nice iterative improvement to "Customize Text generation strategy"<|||||>Hi @gante - I love the plan.
Here are a couple of quick suggestions:
1. Big +1 on validation with parameters passed to generate, or even just changing the error message to point to [text generation strategies post](https://huggingface.co/docs/transformers/generation_strategies)
2. I agree with @patrickvonplaten - Instead of a bouquet of docs, just one simple and concise doc page would do more wonders than not.
3. I think a good way to structure would be starting from the basics - explaining the default behaviour (greedy search) and work our way up to other strategies. What would be helpful is to provide suggested parameter values along with the strategy as well.
4. Each of the above strategies can be paired with two `toggle`-based snippets: how to generate with `pipeline` and how to generate with `processor + generate` (see the sketch after this list) -> this will help cater to our broad user base.
5. We can end the blog post with all the cool tricks that are not part of the generate `yet`, link it to a gh repo or gist. These are examples like generate with `ggml`, `gptq` integration and so on.
6. [Long term] once we have this page in place we can work our way to update the model cards on text-gen models to add a link to it. I reckon it'll just be a batch PR.
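As a rough illustration of what those paired snippets could look like (using `gpt2` purely as a placeholder checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Option A: pipeline
generator = pipeline("text-generation", model="gpt2")
print(generator("The meaning of life is", max_new_tokens=20)[0]["generated_text"])

# Option B: tokenizer + model.generate
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```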
Cheers!<|||||>Adding here that it would be nice to include a section on batched generation (I made a PR for GPT-2 [here](https://github.com/huggingface/transformers/pull/24432)). This is not that intuitive for people as you need to pad from the left, set the padding token appropriately in the tokenizer, etc.<|||||>Thank you for the feedback folks ๐
I've incorporated your feedback in the plan, which got edited. Main differences:
- Part 1 now consists in updating the existing blog post and creating a short usage tutorial (with references to the blog post and advanced docs over its contents, as opposed to a stand-alone section with links, like the other tutorials)
- Part 2 got condensed to reduce the long-term maintenance burden<|||||>Thanks for taking the time to outline this detailed rework! Big +1 for the additional developer guides and tutorial. โค๏ธ
I would consider leaving the blog post as is also to reduce long-term maintenance and instead keep all the relevant text generation content in the docs. In general, I feel like a blog post is more appropriate for "timely" content like announcements/news or explaining why something was designed the way it was. Content that needs to be maintained and updated is better in the docs I think. As you mentioned, the how-to-generate blog post still contains super useful background info about text generation so I think we should definitely find a way to preserve that info. My suggestions would be to:
- link from the blog post to the docs for the latest changes (could be a simpler banner at the top like [this](https://play.tailwindcss.com/MqrXeJutFi))
- create a doc in the Conceptual Guide section to hold the background info from the how-to-generate blog post |
transformers | 24,574 | closed | Revert "Fix typing annotations for FSDP and DeepSpeed in TrainingArguments" | Reverts huggingface/transformers#24549 | 06-29-2023 12:14:38 | 06-29-2023 12:14:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24574). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,573 | closed | Check all objects are equally in the main `__init__` file | # What does this PR do?
Add one more check in `check_repo.py` to further ensure objects are in the main `__init__`. (using our pytorch objects as the reference)
### Actions to take
(Do you agree @sgugger ?)
- The following should be added to the main `__init__`:
```bash
TFAutoModelForAudioClassification should be defined in the main `__init__` file.
TFAutoModelForMaskGeneration should be defined in the main `__init__` file.
TFAutoModelForMaskedImageModeling should be defined in the main `__init__` file.
```
- A fix is required:
```bash
TF_AutoModelForSemanticSegmentation should be defined in the main `__init__`
file.
```
due to the mistake
```
TF_AutoModelForSemanticSegmentation = auto_class_update(
TFAutoModelForSemanticSegmentation, head_doc="semantic segmentation"
)
```
- The following (the **pytorch** one) should be **removed** from the main `__init__`:
```bash
TFBertLayer should be defined in the main `__init__` file.
FlaxBertLayer should be defined in the main `__init__` file.
FlaxBigBirdLayer should be defined in the main `__init__` file.
TFLxmertEncoder should be defined in the main `__init__` file.
TFLxmertXLayer should be defined in the main `__init__` file.
TFMPNetLayer should be defined in the main `__init__` file.
TFMobileBertLayer should be defined in the main `__init__` file.
FlaxRoFormerLayer should be defined in the main `__init__` file.
TFSegformerLayer should be defined in the main `__init__` file.
TFViTMAELayer should be defined in the main `__init__` file.
```
| 06-29-2023 10:28:56 | 06-29-2023 10:28:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>We cannot remove things from the main init, as it would be a breaking change. The `BertLayer` and the likes should never have been made public like this but it's too late now. And we shouldn't add the corresponding TF/Flax objects either, so this defeats the purpose of this new check.<|||||>I can add exceptional cases list though and not to touch those problematic entries.<|||||>Sure! |
transformers | 24,572 | closed | Docs: 4 bit doc corrections | # What does this PR do?
Some 4-bit references were written with "8" instead of "4" | 06-29-2023 10:23:59 | 06-29-2023 10:23:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,571 | closed | Fix annotations | # What does this PR do?
I found wrong annotations in the `MBart` and `Pegasus` models.
Fixed wrong annotation
- (seq_len, batch, embed_dim) -> (batch, seq_len, embed_dim)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? | 06-29-2023 10:23:02 | 06-29-2023 10:23:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,570 | closed | Removal of deprecated vision methods and specify deprecation versions | # What does this PR do?
* Removes a bunch of properties, methods and logic which had deprecation warnings for a few versions ago.
* Adds some specific version deprecations for methods that didn't have a specified version for deprecation
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-29-2023 09:37:53 | 06-29-2023 09:37:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger Just wanting to double check it's OK to remove these deprecated methods now before I merge in |
transformers | 24,569 | closed | LlamaTokenizer: Slow implementation opts for whitespace-lead token (different from fast) | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @youn
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
Comparing slow and fast `LlamaTokenizer` instances with `huggyllama/llama-7b`.
```
from transformers import AutoTokenizer
model = "huggyllama/llama-7b"
fast = AutoTokenizer.from_pretrained(model)
slow = AutoTokenizer.from_pretrained(model, use_fast=False)
# use tokenize()
print(fast.tokenize("<s>uns"), slow.tokenize("<s>uns"))
# -> (['▁<s>', 'uns'], ['<s>', '▁uns'])
# use __call__
print(fast(f"{fast.bos_token}uns", add_special_tokens=False), slow(f"{slow.bos_token}uns", add_special_tokens=False))
# -> ({'input_ids': [1, 6948], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]},
# {'input_ids': [1, 9644], 'attention_mask': [1, 1]})
# round-tripping
print(fast.convert_tokens_to_string(fast.tokenize("<s>uns")), fast.convert_tokens_to_string(slow.tokenize("<s>uns")))
# -> ('<s>uns', '<s> uns')
```
### Expected behavior
It looks like the slow LlamaTokenizer wrongly tokenises `uns`. I would not expect the additional whitespace when round-tripping or when tokenising in the first place.
Thanks a lot in advance. | 06-29-2023 08:26:24 | 06-29-2023 08:26:24 | Thanks for reporting, will have a look<|||||>Hi @ArthurZucker! Are you currently working on this? If not, I think I could fix it pretty quickly :)<|||||>Sure! Feel free to take it! ๐ I'll have a look soon otherwise
<|||||>@ArthurZucker @lbeurerkellner I have done some debugging and I have a few observations. Firstly I have checked other tokenizers that use `LlamaTokenizer` or `LlamaTokenizerFast` and the results are pretty weird:
1) the issue is not with `uns` but with any word after a special token like `<s>`. Why this is happening is pretty straightforward
```
# <s> is added to Trie so there is a split after its encounter in the text
tokens = self.tokens_trie.split(text) # tokenization_utils.py:517
```
So it seems like it was a deliberate decision to split special tokens like this?
2) because of the above split, all slow tokenizers based on `LlamaTokenizer` return `['<s>', '▁uns']`
3) more interestingly, most of the tokenizers based on `LlamaTokenizerFast` split the text into `['▁<s>', 'uns']` (e.g. `fxmarty/tiny-llama-fast-tokenizer`). But, for example, `openlm-research/open_llama_3b`, which is one of the most downloaded Llama-based models, outputs `['<s>', '▁uns']` even though it has the same tokenizer config as the one from fxmarty.
```LlamaTokenizerFast(name_or_path='openlm-research/open_llama_3b', vocab_size=32000, model_max_length=2048, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False)```<|||||>the fast is working properly! As suspected, this is linked to #24622 and #24565. I am working on a fix for all our spm based models.
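A small sketch of the split-then-encode behaviour described above (`tokens_trie` and `sp_model` are internal attributes of the slow tokenizer, so this is only illustrative and may differ between versions):
```python
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)

# The slow tokenizer first splits the input on registered special tokens ...
print(slow.tokens_trie.split("<s>uns"))

# ... and then runs sentencepiece on each remaining chunk separately, so the text
# after the special token is encoded as if it started a new sentence, which is
# where the extra leading "▁" comes from.
print(slow.sp_model.encode("uns", out_type=str))
```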
For other tokenizers, I wouldnโt refer to them since a lot are outdated/donโt include some fixes<|||||>Actually this is fixed, the output is now `['โ<s>', 'uns'] ['<s>', 'uns']`. The fast just works that way for tokenization, but the output is the same. Use
```python
slow = AutoTokenizer.from_pretrained(model, use_fast=False, legacy = False)
```
|
transformers | 24,568 | closed | Set FSDP `transformer_layer_cls_to_wrap` to `model._no_split_modules` ? | ### Feature request
Currently, when training with FSDP, the Trainer expects to receive an `fsdp_config` argument specifying `fsdp_transformer_layer_cls_to_wrap`.
https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/trainer.py#L1394-L1406
I am wondering if we can set this automatically, when the model has a `_no_split_modules` attribute, e.g.
https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/models/opt/modeling_opt.py#L401
### Motivation
It would be a convenient feature to set this automatically. This argument is model-specific, but it might be nice to define training arguments independently of a specific model type.
### Your contribution
Happy to help make a PR. Would be great if you can confirm whether this would be desirable or if I am misunderstanding something. Thanks! | 06-29-2023 05:52:12 | 06-29-2023 05:52:12 | cc @pacman100 <|||||>Any thoughts about this? Maybe also cc @stas00?<|||||>Unfortunately I don't have experience with FSDP to contribute to this discussion.<|||||>@pacman100 Friendly ping<|||||>Hello @apoorvkh, the code part you highlighted is enabled now only when using FSDP+XLA. For general FSDP, internally everything is handled by Accelerate. It happens here: https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/src/transformers/training_args.py#L1533-L1556
`fsdp_transformer_layer_cls_to_wrap` supports specifying multiple modules, but most of the time it is enough to specify the `_no_split_modules`. So, we can have `_no_split_modules` as a default in case the user doesn't specify it when passing `--fsdp full_shard auto_wrap`.
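Something like the following fallback is what is being described (an illustrative sketch only, not the actual `transformers`/`accelerate` implementation):
```python
def resolve_layer_cls_to_wrap(fsdp_config, model):
    # Prefer the user-supplied value, otherwise fall back to the model's
    # _no_split_modules (e.g. ["OPTDecoderLayer"] for OPT).
    layer_cls = fsdp_config.get("fsdp_transformer_layer_cls_to_wrap")
    if layer_cls is None:
        layer_cls = getattr(model, "_no_split_modules", None)
    if isinstance(layer_cls, str):
        layer_cls = [layer_cls]
    return layer_cls
```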
<|||||>PRs https://github.com/huggingface/accelerate/pull/1753 and https://github.com/huggingface/transformers/pull/24980 should add this capability wherein it will try `model. _no_split_modules ` if `fsdp_transformer_layer_cls_to_wrap ` isn't specified. Can you try it out?<|||||>Very cool, thanks a ton! I will try it out and let you know.<|||||>Just circling back, works on my end -- thanks again! |
transformers | 24,567 | open | MT5 data padding not working | ### System Info
Hello,
I am using the latest version of transformers.
I have run into this issue recently and would like to receive some help on it. I am using the MT5 and "google/base" to finetune to my own dataset, while processing the data, I run into the issue where I keep getting error message of dimension not matching even after padding and truncation like suggested in the example:
I tried the exact same code with XLMProphetNet, XLM Roberta, XLNet, all worked. Only MT5 gives me this error message. This error almost always occurs at the first step when the trainer is trying to evaluate on the validation data. I suspect this has something to do with the evaluation loop, but so far I have found nothing that could help me resolve this issue.
`RuntimeError: output with shape [4, 12, 1, 1] doesn't match the broadcast shape [4, 12, 1, 128]
`
@alexayalamcs tagging Alex here.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, XLMProphetNetDecoder,DataCollatorWithPadding
from transformers import DataCollatorForLanguageModeling
from datasets import concatenate_datasets, load_dataset
from transformers import MT5ForConditionalGeneration, MT5Tokenizer, MT5Config, MT5Model,T5Tokenizer
import torch
from torch.utils.data import DataLoader
from transformers import Trainer
import nltk
import random
from accelerate import Accelerator
accelerator = Accelerator()
import datasets
rouge = datasets.load_metric("rouge")
import evaluate
accuracy_metric = evaluate.load("accuracy")
train = load_dataset("cnn_dailymail", "3.0.0", split = "train")
valid = load_dataset("cnn_dailymail", "3.0.0", split = "validation")
test = load_dataset("cnn_dailymail", "3.0.0", split = "test")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
encoder_max_length=512
decoder_max_length=128
def process_data_to_model_inputs(batch):
# tokenize the inputs and labels
inputs = tokenizer(batch["article"], padding="max_length",truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch["highlights"],padding="max_length", truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
return batch
train_data = train.select(range(16))
#train_data = train_init
#batch_size = 16
batch_size=4
train_data = train_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["article", "highlights", "id"]
)
train_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
val_data = valid.select(range(8))
#val_data = valid
val_data = val_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["article", "highlights", "id"]
)
val_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
from transformers import Seq2SeqTrainer,Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
num_train_epochs = 3,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=False,
output_dir="./",
logging_steps=2,
#save_steps=5000,
eval_steps=2,
# logging_steps=1000,
# save_steps=500,
# eval_steps=7500,
# warmup_steps=2000,
# save_total_limit=3,
)
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
pred_str = tokenizer.batch_decode(pred_ids)
label_str = tokenizer.batch_decode(labels_ids)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
trainer.train()
```
### Expected behavior
I would expect this to run through just fine like XLMPropheNet, XLM Roberta, and XLNet, but it does not. | 06-29-2023 04:43:26 | 06-29-2023 04:43:26 | cc @ArthurZucker <|||||>Thank you. One additional information: I tried to follow step by step the official text summrization tutorial here: https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb
But the same error occurred. Thanks a lot! <|||||>Hey! Thanks for reporting could you share the entire traceback of the error? ๐ <|||||>Sure, here's the whole error message. Thanks a lot!
```
`---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[16], line 10
1 data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
2 trainer = Seq2SeqTrainer(
3 model=model,
4 args=training_args,
(...)
8 data_collator=data_collator,
9 )
---> 10 trainer.train()
11 output = "/output/"
12 #trainer.save_model(output + "MT5-12-original-XLSUM-accuracy")
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1640 self.model_wrapped = self.model
1642 inner_training_loop = find_executable_batch_size(
1643 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1644 )
-> 1645 return inner_training_loop(
1646 args=args,
1647 resume_from_checkpoint=resume_from_checkpoint,
1648 trial=trial,
1649 ignore_keys_for_eval=ignore_keys_for_eval,
1650 )
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer.py:2011, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2008 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
2009 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
-> 2011 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
2012 else:
2013 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer.py:2312, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
2310 metrics.update(dataset_metrics)
2311 else:
-> 2312 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
2313 self._report_to_hp_search(trial, self.state.global_step, metrics)
2315 # Run delayed LR scheduler now that metrics are populated
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer_seq2seq.py:159, in Seq2SeqTrainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, **gen_kwargs)
154 gen_kwargs["num_beams"] = (
155 gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams
156 )
157 self._gen_kwargs = gen_kwargs
--> 159 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer.py:3043, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
3040 start_time = time.time()
3042 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 3043 output = eval_loop(
3044 eval_dataloader,
3045 description="Evaluation",
3046 # No point gathering the predictions if there are no metrics, otherwise we defer to
3047 # self.args.prediction_loss_only
3048 prediction_loss_only=True if self.compute_metrics is None else None,
3049 ignore_keys=ignore_keys,
3050 metric_key_prefix=metric_key_prefix,
3051 )
3053 total_batch_size = self.args.eval_batch_size * self.args.world_size
3054 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer.py:3235, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
3232 batch_size = observed_batch_size
3234 # Prediction step
-> 3235 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
3236 inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None
3238 if is_torch_tpu_available():
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\trainer_seq2seq.py:276, in Seq2SeqTrainer.prediction_step(self, model, inputs, prediction_loss_only, ignore_keys)
270 if (
271 "labels" in inputs
272 and "decoder_input_ids" in inputs
273 and inputs["labels"].shape == inputs["decoder_input_ids"].shape
274 ):
275 inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
--> 276 generated_tokens = self.model.generate(**inputs, **gen_kwargs)
278 # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop
279 # TODO: remove this hack when the legacy code that initializes generation_config from a model config is
280 # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183
281 if self.model.generation_config._from_model_config:
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\autograd\grad_mode.py:28, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
25 @functools.wraps(func)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\generation\utils.py:1522, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
1516 raise ValueError(
1517 "num_return_sequences has to be 1 when doing greedy search, "
1518 f"but is {generation_config.num_return_sequences}."
1519 )
1521 # 11. run greedy search
-> 1522 return self.greedy_search(
1523 input_ids,
1524 logits_processor=logits_processor,
1525 stopping_criteria=stopping_criteria,
1526 pad_token_id=generation_config.pad_token_id,
1527 eos_token_id=generation_config.eos_token_id,
1528 output_scores=generation_config.output_scores,
1529 return_dict_in_generate=generation_config.return_dict_in_generate,
1530 synced_gpus=synced_gpus,
1531 streamer=streamer,
1532 **model_kwargs,
1533 )
1535 elif is_contrastive_search_gen_mode:
1536 if generation_config.num_return_sequences > 1:
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\generation\utils.py:2339, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2336 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2338 # forward pass to get next token
-> 2339 outputs = self(
2340 **model_inputs,
2341 return_dict=True,
2342 output_attentions=output_attentions,
2343 output_hidden_states=output_hidden_states,
2344 )
2346 if synced_gpus and this_peer_finished:
2347 continue # don't waste resources running the code we don't need
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\models\mt5\modeling_mt5.py:1753, in MT5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1750 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)
1752 # Decode
-> 1753 decoder_outputs = self.decoder(
1754 input_ids=decoder_input_ids,
1755 attention_mask=decoder_attention_mask,
1756 inputs_embeds=decoder_inputs_embeds,
1757 past_key_values=past_key_values,
1758 encoder_hidden_states=hidden_states,
1759 encoder_attention_mask=attention_mask,
1760 head_mask=decoder_head_mask,
1761 cross_attn_head_mask=cross_attn_head_mask,
1762 use_cache=use_cache,
1763 output_attentions=output_attentions,
1764 output_hidden_states=output_hidden_states,
1765 return_dict=return_dict,
1766 )
1768 sequence_output = decoder_outputs[0]
1770 # Set device for model parallelism
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\models\mt5\modeling_mt5.py:1062, in MT5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
1049 layer_outputs = checkpoint(
1050 create_custom_forward(layer_module),
1051 hidden_states,
(...)
1059 None, # past_key_value is always None with gradient checkpointing
1060 )
1061 else:
-> 1062 layer_outputs = layer_module(
1063 hidden_states,
1064 attention_mask=extended_attention_mask,
1065 position_bias=position_bias,
1066 encoder_hidden_states=encoder_hidden_states,
1067 encoder_attention_mask=encoder_extended_attention_mask,
1068 encoder_decoder_position_bias=encoder_decoder_position_bias,
1069 layer_head_mask=layer_head_mask,
1070 cross_attn_layer_head_mask=cross_attn_layer_head_mask,
1071 past_key_value=past_key_value,
1072 use_cache=use_cache,
1073 output_attentions=output_attentions,
1074 )
1076 # layer_outputs is a tuple with:
1077 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)
1078 if use_cache is False:
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\models\mt5\modeling_mt5.py:557, in MT5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict)
554 else:
555 self_attn_past_key_value, cross_attn_past_key_value = None, None
--> 557 self_attention_outputs = self.layer[0](
558 hidden_states,
559 attention_mask=attention_mask,
560 position_bias=position_bias,
561 layer_head_mask=layer_head_mask,
562 past_key_value=self_attn_past_key_value,
563 use_cache=use_cache,
564 output_attentions=output_attentions,
565 )
566 hidden_states, present_key_value_state = self_attention_outputs[:2]
567 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\models\mt5\modeling_mt5.py:462, in MT5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions)
451 def forward(
452 self,
453 hidden_states,
(...)
459 output_attentions=False,
460 ):
461 normed_hidden_states = self.layer_norm(hidden_states)
--> 462 attention_output = self.SelfAttention(
463 normed_hidden_states,
464 mask=attention_mask,
465 position_bias=position_bias,
466 layer_head_mask=layer_head_mask,
467 past_key_value=past_key_value,
468 use_cache=use_cache,
469 output_attentions=output_attentions,
470 )
471 hidden_states = hidden_states + self.dropout(attention_output[0])
472 outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
1047 # If we don't have any hooks, we want to skip the rest of the logic in
1048 # this function, and just call forward.
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\AppData\Local\anaconda3\envs\hface\lib\site-packages\transformers\models\mt5\modeling_mt5.py:420, in MT5Attention.forward(self, hidden_states, mask, key_value_states, position_bias, past_key_value, layer_head_mask, query_length, use_cache, output_attentions)
417 else:
418 position_bias_masked = position_bias
--> 420 scores += position_bias_masked
421 attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(
422 scores
423 ) # (batch_size, n_heads, seq_length, key_length)
424 attn_weights = nn.functional.dropout(
425 attn_weights, p=self.dropout, training=self.training
426 ) # (batch_size, n_heads, seq_length, key_length)
RuntimeError: output with shape [4, 12, 1, 1] doesn't match the broadcast shape [4, 12, 1, 32]
`
```<|||||>Hey! I did not have time to check this, if you can isolate a small reproduction script (without all the training loop) would be great. Otherwise, I am investigating
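For what it's worth, one way to try isolating this outside of the `Trainer`, based on the `prediction_step` code visible in the traceback (an untested sketch -- it simply feeds `generate()` the same extra batch keys the evaluation loop would):
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")

enc = tokenizer(["some article text"] * 4, padding="max_length", truncation=True,
                max_length=512, return_tensors="pt")
dec = tokenizer(["some summary"] * 4, padding="max_length", truncation=True,
                max_length=128, return_tensors="pt")

# decoder_attention_mask/labels are what prediction_step leaves in the batch after
# dropping decoder_input_ids; if this reproduces the broadcast error, that's the culprit.
out = model.generate(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    decoder_attention_mask=dec.attention_mask,
    labels=dec.input_ids,
)
```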
<|||||>Hi Arthur @ArthurZucker , the code that I shared initially is a small training loop without all the samples and could reproduce the error once run (the training size is set to be 16 and the evaluation set to be 8). The run time should take about 3 minutes top, because it has to download the CNNDailyMail dataset first. Thank a lot for your help!! |
transformers | 24,566 | closed | Update some torchscript tests after #24505 | # What does this PR do?
Need to update the logic in some torchscript tests after #24505 | 06-29-2023 04:41:28 | 06-29-2023 04:41:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Thanks for the suggestion. I actually copied from the common test file. However, there is
```python
model_buffers = list(model.buffers())
for non_persistent_buffer in non_persistent_buffers.values():
found_buffer = False
for i, model_buffer in enumerate(model_buffers):
if torch.equal(non_persistent_buffer, model_buffer):
found_buffer = True
break
self.assertTrue(found_buffer)
model_buffers.pop(i)
```
there (which I didn't see when working on this PR). This uses the `values`.
So I am going to copy the above block (and therefore keep using the dict). Let me know if you have other opinions instead.<|||||>If you end up using the values, no worries.<|||||>Copy the mentioned block to all `_create_and_check_torchscript` definitions.
Let's not change this by looking at whether the model uses persistent or non-persistent buffers: just keep the logic in the common test file. |
transformers | 24,565 | closed | โ ๏ธโ ๏ธ[`T5Tokenize`] Fix T5 family tokenizersโ ๏ธโ ๏ธ | # What does this PR do?
Fixes the `T5Tokenizer` (not the fast one yet). (At the same time, addresses part of https://github.com/huggingface/transformers/issues/11531.)
When converting `UMT5` I created a reproduction snippet for any t5x model form the original repo. I realized that a very very small variation in the input completely changes the output for non-finetuned models. The issue lies with the way we process `<extra_id_xx>`.
Example:
```python
# t5-base tokenizer
>>> tokenizer.encode("<extra_id_0>. Hello", add_special_tokens = False)
[32099, 3, 5, 8774] # ['<extra_id_0>', '▁', '.', '▁Hello']
# seqio.SentencePieceVocabulary(vocab_path, extra_ids = 300)
>>> processor.encode("<extra_id_0>. Hello")
[32099, 5, 8774] # ['<extra_id_0>', '.', '▁Hello']
#after fix:
>>> tokenizer.encode("<extra_id_0>. Hello", add_special_tokens = False)
[32099, 5, 8774] # ['<extra_id_0>', '.', '▁Hello']
```
The reason is that t5x wrapps arround `sentencepiece`, and [adds the extra id to the vocab](https://github.com/google/seqio/blob/4d3097973e9e24ec2963319ec3c5ff518811060f/seqio/vocabularies.py#L362), but they are not saved that way.
We don't add them to the vocab, so when we tokenize, we split on special tokens, thus the sentencepiece model only sees:
```python
>>> tokenizer.sp_model.encode(". Hello")
[273, 274, 9]
```
While the original model never sees a `.` (or a lot of other characters) alone, and thus we add an extra space...
This is a bug fix with regards to training; it is **breaking** in the sense that it should remove the space.
TODO:
- [x] Extra checks should be added to make sure this does not add anything else (like stripping a ` `). This for example would break ` tokenizer.encode(". Hello")` as it removes the prefix space that is normally added.
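For reference, a quick way to eyeball the difference once the fix is in (a sketch; `t5-base` is just an example checkpoint, and `legacy=False` opts into the fixed behaviour on the slow tokenizer):
```python
from transformers import AutoTokenizer

legacy = AutoTokenizer.from_pretrained("t5-base", use_fast=False)
fixed = AutoTokenizer.from_pretrained("t5-base", use_fast=False, legacy=False)

print(legacy.encode("<extra_id_0>. Hello", add_special_tokens=False))
print(fixed.encode("<extra_id_0>. Hello", add_special_tokens=False))
# The fixed tokenizer should no longer insert an extra "▁" right after <extra_id_0>.
```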
| 06-29-2023 02:10:16 | 06-29-2023 02:10:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Actually switch t5 tests have to be updated!
This means I have to check if the models were trained with this extra token (if they used HF tokenizer) or not.
- [x] `tests.models.instructblip.test_modeling_instructblip.InstructBlipModelIntegrationTest testMethod=test_inference_flant5_xl` failing on `main` too so not related.....
<img width="1560" alt="image" src="https://github.com/huggingface/transformers/assets/48595927/529a1cdf-6907-42c3-9c48-1d2a6177c8e6">
- [x] `tests.models.mt5.test_modeling_flax_mt5.MT5IntegrationTest` also fails on main...
- [x] `tests/models/t5/test_tokenization_t5.py` the issue comes from the `convert_slow` modification. Need to investigate
- [ ] tests/models/t5/test_tokenization_t5.py:399 T5TokenizationTest.test_get_sentinel_token_ids_for_fasttokenizer
- [ ] tests/test_tokenization_common.py:3425 T5TokenizationTest.test_save_pretrained
- [ ] tests/models/t5/test_tokenization_t5.py:271 T5TokenizationTest.test_special_tokens_initialization<|||||>This can also be made non "breakable" with a flag. Up to debate since it is a bug fix.<|||||>Edit: just to make sure, I did more testing and unfortunately , there is one bug:
```python
>>>tokenizer.tokenize("Hello <extra_id_0>")
['_', '_Hello', '<extra_id_0>']
```
instead of
```python
>>>tokenizer.tokenize("Hello <extra_id_0>")
['_Hello', '<extra_id_0>']
```
This is because we have to prepend a `_` instead of a space (`text = SPIECE_UNDERLINE + text`). Not a single test caught this when running `pytest tests -k t5`, which is interesting.
Fixing asap and adding tests. This is becoming very complex ๐ <|||||>I'm getting this legacy behaviour warning come up when simply loading a T5 tokenizer - it appears even before using the tokenizer. Is there an updated way to load the tokenizer? The warning appears when running the following lines of code:
from transformers import AutoTokenizer
tokeniser = AutoTokenizer.from_pretrained("google/mt5-small")
The error is:
You are using the legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
/usr/local/lib/python3.10/dist-packages/transformers/convert_slow_tokenizer.py:470: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(<|||||>Yep, just set `legacy=False`. The goal of the warning is for you to decide wether or not you thing the legacy behaviour is alright with you or not. <|||||>so tokenizers just have to be loaded with `legacy=False` forever now? Seems like an odd choice.<|||||>Since this is a breaking change, the next version of transformers will probably have `legacy=False` by default, and remove the warning. We don't really have much choice in order to preserve backward compatibility ๐
<|||||>+1 for this proposal, since my users are asking why this error shows up for them and if the incorrect behavior is the default that is not desirable for anyone just wanting to use a model. People needing the legacy behavior can then opt-in. Hardcoding legacy=False in my own code feels wrong to me.<|||||>I am also a little confused when I use LLaMA tokenizer. Is it wrong for LLaMA?<|||||>For LlaMa, it is a bit special. The original tokenizer does not have the eos and bos as part of the special tokens This means that they are parsed (split) by the model, while this does not happen with transformers.
I am testing to make sure whether this should apply for llama based on llama2. Whatever the case, if you don't use legacy, and a special token is in the middle of a sequence:
- `Hey how are you?</s> I'm fine and you`'s encoding will be wrong for two reasons. 1, we split special tokens. 2, an extra space will be added between `</s> I`. Using `legacy=False` the extra space is not added.
- `<s>Hey how are you?` will have a different encoding because llama would tokenize it as [`<`, `s`, `>`, `Hey`] while we would tokenize it as [`<s>`, `_Hey`], note the extra `_`. With `legacy=False` we will have [`<s>`, `Hey`], which is already better. I am implementing now the possibility to split special tokens (meaning ignore that they are special) which will bridge the final gap to allow us to have [`<`, `s`, `>`, `Hey`].
An issue can come up if:
- You manualy add the eos at the beginning using ` tokenizer.encode("<s>Hey")` the `tokenizer.sp_model` only sees `Hey` which get's encoded as [`_Hey`], but the `_` is removed by `legacy=False`. This is not what you want in this specific case because the special token is placed at the start. With #25081, the special tokens should be split by default for llama, which will make things right! <|||||>So if text does not explicitly contain special tokens, the tokenizer can work well?<|||||>and so how does one continue using the legacy mode, but remove the warning?
Surely if the model designer wants the legacy mode they should be able to select that w/o having the user puzzle over the warning.<|||||>Ok will make it default to None, that way you can be warned only if you just have not set the argument! Thanks for the feedback <|||||>that's a great idea, Arthur - thank you!<|||||>See #25131 ๐ <|||||>- I just tested the transformers=4.31.0, however, the bug is still there, shown as following:
```
>>> btokenizer = transformers.AutoTokenizer.from_pretrained(model_path,legacy=False)
>>> btokenizer.decode(btokenizer("Hey how are you?</s> I'm fine and you")['input_ids'])
<s> Hey how are you?</s> I'm fine and you
```
Note that there are two spaces between `</s>` and `I`.
- Another issue that may require more clarification: should a token at the start of a string, with the current tokenizer, be given a prefix space, no matter whether there is a special token in front of it?
- In the Transformers docs for Llama, it says
> The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
It is confusing: how can the first token not be the start of a word?
<|||||>Hey! A few answers:
- you are using the fast version, which was never mentioned to be fixed yet.
- you are comparing the decoded outputs, which were not touched either.
- I don't understand your question. My recommendation is to wait a little bit until we fix the default stripping mechanism with #23909, then we will address the final issues with Llama. The decoding issue was mentioned here: #25073. The first token is always the start of a word, but when you are the first token after a special token, you are just a token, not the first one.
More over, a lot of people use the `fast` version of the tokenizer, which was not changed either.
Our goal is not to default to `legacy=False` but leave users choose the best solution for them, thus a warning. We are now defaulting to `True` if the `legacy` parameter was not set, with a warning. If you want to suggest maybe improvements for the warning I am all ears! We can maybe make it more visible?
Note that the decoding is the same regardless of `legacy`, and `legacy` fixes only affects slow tokenizers.
```python
>>> from transformers import AutoTokenizers
>>> btokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf",legacy=True)
>>> btokenizer.decode(btokenizer("Hey how are you?</s> I'm fine and you")['input_ids'])
"<s> Hey how are you?</s> I'm fine and you"
```
@official-elinas if they finetuned the model, then they can keep using `legacy=False`, since the model learned that there is an extra space after the special tokens. There are a lot of applications, but if you added a new token like `<bot>`, then anyway the model learns a new concepts, will learn at the same time that there is an extra space afterwards.
Anyway I'm also really sorry there was a hidden bug, and the first Llama release was also very messy, I should have checked more than twice! Thanks for your feedbacks ๐ค
<|||||>Just to give two cents here. I run my tests with both fast and slow tokenizers and I found that using legacy=True leads to more inconsistent behavior between the two. In the examples below the slow tokenizer yields `unk` when the fast one does not. Interestingly, I would expect the legacy one to be broken, _not_ the non-legacy one. I do not quite understand why the legacy one seems to work fine but the new one does not, so I will stick with the legacy behavior.
```
from transformers import AutoTokenizer
# LEGACY
########
s = "</s> quiet" # </s> is a special token
tokenizer_slow = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=False, legacy=True)
tokenizer_fast = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=True, legacy=True)
print(tokenizer_slow.decode(tokenizer_slow(s).input_ids))
# </s> quiet</s>
print(tokenizer_fast.decode(tokenizer_fast(s).input_ids))
# </s> quiet</s>
# Without space
s = "</s>quiet" # </s> is a special token
tokenizer_slow = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=False, legacy=True)
tokenizer_fast = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=True, legacy=True)
print(tokenizer_slow.decode(tokenizer_slow(s).input_ids))
# </s> quiet</s>
print(tokenizer_fast.decode(tokenizer_fast(s).input_ids))
# </s> quiet</s>
# NOT LEGACY
############
s = "</s> quiet" # </s> is a special token
tokenizer_slow = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=False, legacy=False)
tokenizer_fast = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=True, legacy=False)
print(tokenizer_slow.decode(tokenizer_slow(s).input_ids))
# </s><unk></s>
print(tokenizer_fast.decode(tokenizer_fast(s).input_ids))
# </s> quiet</s>
# Without space
s = "</s>quiet" # </s> is a special token
tokenizer_slow = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=False, legacy=False)
tokenizer_fast = AutoTokenizer.from_pretrained("google/mt5-base", use_fast=True, legacy=False)
print(tokenizer_slow.decode(tokenizer_slow(s).input_ids))
# </s><unk></s>
print(tokenizer_fast.decode(tokenizer_fast(s).input_ids))
# </s> quiet</s>
```<|||||>Yes yes, if you look at #25224 will fix the unknown. Reported in #25176. It's a bit of a mess, but should all be fixed now! Pretty deep bug indeed<|||||>Hey @ArthurZucker -- so if I want to add a new token to the LLamaTokenizer like 'your_token', should I add the token to the sentencepiece model [#25224](https://github.com/huggingface/transformers/pull/25224) and then load the tokenizer using huggingface? Do I have to change the tokenizer.json/special_tokens_map.json/tokenizer_config.json of the LLamaTokenizer as well and how should I change them?
Currently using huggingface version 4.31.0.<|||||>It depends on the expected behaviour. ( simply put, if you want an extra space or not. But should not make a difference overall in results)
If you train the model, it won't make a difference (intuition mostly but you are either teaching him that `<new_token>` always has a space after it or not).
|
transformers | 24,564 | closed | InstructBlipProcessor not working with load_in_4bit and load_in_8bit | ### System Info
transformers @ git+https://github.com/huggingface/transformers@68c92981ff2b804979d2e6107eeefe298d1e5183
Python 3.11.4
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... Off | 00000000:00:04.0 Off | 0 |
| N/A 36C P0 50W / 400W | 845MiB / 40960MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 12365 C ...nda/envs/myenv/bin/python 843MiB |
+-----------------------------------------------------------------------------+
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Currently trying to run the following script:
```
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
torch.cuda.empty_cache()
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
image = Image.open('examples/test1.jpeg')
inputs = processor(images=image, text='', return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
```
But I am obtaining the following error (see below). Is it possible that `InstructBlipForConditionalGeneration` does not yet support `load_in_4bit`?
Error logs:
```
RuntimeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 outputs = model.generate(
2 **inputs,
3 do_sample=False,
4 num_beams=5,
5 max_length=256,
6 min_length=1,
7 top_p=0.9,
8 repetition_penalty=1.5,
9 length_penalty=1.0,
10 temperature=1,
11 )
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:1517, in InstructBlipForConditionalGeneration.generate(self, pixel_values, qformer_input_ids, qformer_attention_mask, input_ids, attention_mask, **generate_kwargs)
1514 self._preprocess_accelerate()
1516 batch_size = pixel_values.shape[0]
-> 1517 image_embeds = self.vision_model(pixel_values, return_dict=True).last_hidden_state
1519 image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long, device=image_embeds.device)
1521 query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:538, in InstructBlipVisionModel.forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
535 if pixel_values is None:
536 raise ValueError("You have to specify pixel_values")
--> 538 hidden_states = self.embeddings(pixel_values)
540 encoder_outputs = self.encoder(
541 inputs_embeds=hidden_states,
542 output_attentions=output_attentions,
543 output_hidden_states=output_hidden_states,
544 return_dict=return_dict,
545 )
547 last_hidden_state = encoder_outputs[0]
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/transformers/models/instructblip/modeling_instructblip.py:113, in InstructBlipVisionEmbeddings.forward(self, pixel_values)
111 batch_size = pixel_values.shape[0]
112 target_dtype = self.patch_embedding.weight.dtype
--> 113 patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
114 patch_embeds = patch_embeds.flatten(2).transpose(1, 2)
116 class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/envs/myenv/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
462 def forward(self, input: Tensor) -> Tensor:
--> 463 return self._conv_forward(input, self.weight, self.bias)
File /opt/conda/envs/myenv/lib/python3.11/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
455 if self.padding_mode != 'zeros':
456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
457 weight, bias, self.stride,
458 _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
460 self.padding, self.dilation, self.groups)
RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same
```
### Expected behavior
Produce output string as expected | 06-29-2023 00:06:32 | 06-29-2023 00:06:32 | cc @younesbelkada <|||||>Hi @fraferra
In https://github.com/huggingface/transformers/pull/24555 I have fixed a silent issue with processors that you are currently facing. Can you try to install transformers from source and run:
```python
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
torch.cuda.empty_cache()
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b", load_in_4bit=True, torch_dtype=torch.bfloat16)
image = Image.open('examples/test1.jpeg')
inputs = processor(images=image, text='', return_tensors="pt").to(device, torch.bfloat16)
outputs = model.generate(
**inputs,
do_sample=False,
num_beams=5,
max_length=256,
min_length=1,
top_p=0.9,
repetition_penalty=1.5,
length_penalty=1.0,
temperature=1,
)
```
Check a more concrete example here: https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/tests/models/instructblip/test_modeling_instructblip.py#L524<|||||>Thank you @younesbelkada for looking into it! Is it possible that `BatchEncoding.to()` needs to be updated?
I can see in the source code that `BatchEncoding.to()` (https://github.com/huggingface/transformers/blob/9e28750287df57942d716083ae53bb4e766104c2/src/transformers/tokenization_utils_base.py#L756) only takes 1 argument.
I am getting the following error when trying to run your code snippet:
```
1 device = "cuda" if torch.cuda.is_available() else "cpu"
2 image = Image.open('examples/test1.jpeg')
----> 3 inputs = processor(images=image, text='', return_tensors="pt").to(device, torch.bfloat16)
4 outputs = model.generate(
5 **inputs,
6 do_sample=False,
(...)
13 temperature=1,
14 )
TypeError: BatchEncoding.to() takes 2 positional arguments but 3 were given
```<|||||>Pretty weird since `InstructBlipProcessor.__call__` should return `BatchFeature` which the `to` method can take `*args, **kwargs` unlike the one from `BatchEncoding` which only takes `device` as an argument.<|||||>Hi @fraferra
Can you install transformers from the main branch and try again?
```
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers.git
```<|||||>@younesbelkada it worked, thank you! for some reason it wouldnt update to the latest transformers' version in the conda env. After uninstalling it and reinstalling it after it returned `BatchFeature`<|||||>Thank you @fraferra feel free to close the issue ! Let us know if you have more questions<|||||>as the vision is a model, and uses the llm model vicunia or optB7 etc... would there be a way to just use the already loaded model to ask a txt question and get a text answer irrelevant to the image? just use llm as an llm? |
transformers | 24,563 | open | Dataset features disappear after initializing Trainer | ### System Info
- `transformers` version: 4.4.2
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
class CTCTrainer(Trainer):
    def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]:
        for k, v in inputs.items():
            if isinstance(v, torch.Tensor):
                kwargs = dict(device=self.args.device)
                if self.deepspeed and inputs[k].dtype != torch.int64:
                    kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype()))
                inputs[k] = v.to(**kwargs)
            if k == 'labels':  # labels are list of tensor, not tensor, special handle here
                for i in range(len(inputs[k])):
                    kwargs = dict(device=self.args.device)
                    if self.deepspeed and inputs[k][i].dtype != torch.int64:
                        kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype()))
                    inputs[k][i] = inputs[k][i].to(**kwargs)
        if self.args.past_index >= 0 and self._past is not None:
            inputs["mems"] = self._past
        return inputs

    def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor:
        """
        Perform a training step on a batch of inputs.
        Subclass and override to inject custom behavior.
        Args:
            model (:obj:`nn.Module`):
                The model to train.
            inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
                The inputs and targets of the model.
                The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
                argument :obj:`labels`. Check your model's documentation for all accepted arguments.
        Return:
            :obj:`torch.Tensor`: The tensor with training loss on this batch.
        """
        model.train()
        inputs = self._prepare_inputs(inputs)
        if self.use_amp:
            with autocast():
                loss = self.compute_loss(model, inputs)
        else:
            loss = self.compute_loss(model, inputs)
        if self.args.n_gpu > 1:
            loss = loss.mean()
        if self.args.gradient_accumulation_steps > 1:
            loss = loss / self.args.gradient_accumulation_steps
        if self.use_amp:
            self.scaler.scale(loss).backward()
        elif self.use_apex:
            with amp.scale_loss(loss, self.optimizer) as scaled_loss:
                scaled_loss.backward()
        elif self.deepspeed:
            self.deepspeed.backward(loss)
        else:
            loss.backward()
        return loss.detach()
```
### Expected behavior
I am trying to run the code from https://github.com/TideDancer/interspeech21_emotion
I tried my best to recreate the environment.
I am using datasets=1.4.2
transformers=4.4.2
I manually print out the dataset value at each stage to debug.
The dataset contains the following features:
```
Dataset({ features: ['emotion', 'file', 'input_values', 'sampling_rate', 'speech', 'text'],
num_rows: 507})
```
The dataset lost all its features after initializing the Trainer. `my test1` works fine, but `my test2` raises an error.
```
print(val_dataset[0]['file'])
print('my test1----------------------------------')
val_dataset_original = val_dataset
trainer = CTCTrainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=processor.feature_extractor,
)
print(val_dataset_original[0]['file'])
print('my test2----------------------------------')
```
It then raises `KeyError: file`. When I print the dataset again, it turns out only 'input_values' is left.
If this is difficult to reproduce, is there a way I can deep copy the dataset? Because I need the 'file' information to write the output results.
I have tried `val_dataset_copy=val_dataset` but both the dataset variables will be affected by the initialization of the trainer.
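For illustration, a minimal way to keep the needed information around before handing the dataset to the `Trainer` (assuming `val_dataset` is a 🤗 `datasets.Dataset` with the columns shown above):
```python
# materialize the columns needed later as plain Python lists,
# so they survive even if the Trainer drops unused dataset columns
file_names = list(val_dataset["file"])
durations = [len(x) / 16000 for x in val_dataset["input_values"]]
```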
| 06-28-2023 23:35:27 | 06-28-2023 23:35:27 | Yes, the `Trainer` removes any inputs not accepted by your model, or your model won't be able to do a forward pass. You can disable this (at your own risk) by setting [`remove_unused_columns`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.remove_unused_columns) in your `TrainingArguments` to `False`.<|||||>@sgugger Thank you for replying.
So I am facing this problem during model testing, the `do_predict` part.
I still need the feature information from the dataset before the Trainer is initialized. For instance, the file name.
So based on your answer above, I am thinking of deep copying the dataset so I can loop through the identical dataset by index to get the information I need while feeding the `val_dataset` to the Trainer.
I am new to Huggingface, may I know what's the conventional way to do so?
```
trainer = CTCTrainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=processor.feature_extractor,
)
# print(val_dataset_original[0]['file'])
# print('my test2----------------------------------')
if last_checkpoint is not None:
    checkpoint = last_checkpoint
elif model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path):
    checkpoint = model_args.model_name_or_path
else:
    checkpoint = None
if training_args.do_train:
    trainer.train(resume_from_checkpoint=checkpoint)
    trainer.save_model()
if training_args.do_predict:
    logger.info('******* Predict ********')
    data_collator.audio_only = True
    predictions, labels, metrics = trainer.predict(val_dataset, metric_key_prefix="predict")
    logits_ctc, logits_cls = predictions
    pred_ids = np.argmax(logits_cls, axis=-1)
    pred_probs = F.softmax(torch.from_numpy(logits_cls).float(), dim=-1)
    print(val_dataset)
    with open(data_args.output_file, 'w') as f:
        for i in range(len(pred_ids)):
            f.write(val_dataset[i]['file'].split("/")[-1] + " " + str(len(val_dataset[i]['input_values'])/16000) + " ")
            pred = pred_ids[i]
            f.write(str(pred) + ' ')
            for j in range(4):
                f.write(' ' + str(pred_probs[i][j].item()))
            f.write('\n')
    f.close()
```<|||||>Hey @lxrswdd - I see the `_prepare_inputs` method that you've overridden in the Trainer class is purely to get your dataset in the right format for the model
What you're probably better off doing here is pre-processing your dataset ahead of time, transforming the raw audio values to normalised model input values using an appropriate feature extractor. You can do this quite straightforwardly using 🤗 Datasets's `.map` method.
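For illustration, a minimal sketch of that pre-processing step (the checkpoint, dataset loading call and column names below are placeholders):
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")  # placeholder checkpoint

dataset = load_dataset("audiofolder", data_dir="path/to/your/audio")  # placeholder dataset
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # turn the raw waveform into normalised input values for the model
    audio = batch["audio"]
    batch["input_values"] = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    return batch

dataset = dataset.map(prepare, remove_columns=["audio"])
```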
Once you have your pre-processed input values, you can collate them into _batches_ by defining an appropriate data collator. We have several end-to-end examples that will perform the pre-processing and collation steps for you: all you need to do is switch the dataset id for your dataset on the Hub. See [examples/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#connectionist-temporal-classification) for details.
Likewise, you can follow this excellent blog post for fine-tuning a CTC system with the 🤗 Trainer API: https://huggingface.co/blog/fine-tune-wav2vec2-english
The only real engineering work you'll have to do if you follow these guides is getting your dataset in the right format, for which you can follow this page: https://huggingface.co/docs/datasets/audio_dataset<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,562 | open | eval_loss returning nan | ### System Info
- `transformers` version: 4.28.0
- Platform: Linux-3.10.0-1160.24.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Still new to this, don't know
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the set up before I run the code:
```
trained_path = '/data/user/home/nchendri/biobertuab/modelInfo'
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir=trained_path,
overwrite_output_dir=True,
num_train_epochs=32,
per_device_train_batch_size=16,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
optim="adamw_torch",
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dsTok['train'],
eval_dataset=dsTok['valid'],
)
eval = trainer.evaluate()
```
### Expected behavior
It was supposed to output an eval_loss. I am very new at this and could have missed something in the setup code above. | 06-28-2023 21:41:08 | 06-28-2023 21:41:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,561 | closed | llama fp16 torch.max bug fix | This PR explicitly sets the threshold tensor for `torch.max` to the same dtype as that of attn_weights to avoid accidental upcasting during mixed precision training. This unblocks ONNX Runtime integration because, without this fix, the torch onnx exporter receives mismatched types into the `torch.max` operation resulting in the following error:
```
Traceback (most recent call last):
File "run_clm_ort.py", line 641, in <module>
main()
File "run_clm_ort.py", line 589, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 454, in train
return inner_training_loop(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 749, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 365, in compute_loss
return super().compute_loss(model_with_loss, inputs, return_outputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2767, in compute_loss
outputs = model(**inputs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1675, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_utils.py", line 375, in _forward
return ortmodule._torch_module.forward(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_utils.py", line 355, in _forward
return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_training_manager.py", line 274, in forward
self._fallback_manager.handle_exception(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_fallback.py", line 160, in handle_exception
raise exception
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_training_manager.py", line 210, in forward
self._initialize_graph_builder()
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 502, in _initialize_graph_builder
self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:786 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Max) bound to different types (tensor(float16) and tensor(float) in node (/_original_module/base_model/model/layers.0/self_attn/Max).
```
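For illustration, a minimal sketch of the kind of dtype-matching change described above (assuming `attn_weights` is the fp16/bf16 attention score tensor; this is not necessarily the exact diff in this PR):
```python
import torch

def clamp_attn_weights(attn_weights: torch.Tensor) -> torch.Tensor:
    # keep the clamping threshold in the same dtype/device as attn_weights,
    # so no implicit upcast happens under mixed precision
    min_value = torch.finfo(attn_weights.dtype).min
    threshold = torch.tensor(min_value, dtype=attn_weights.dtype, device=attn_weights.device)
    return torch.max(attn_weights, threshold)
```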
Reproduction Instructions:
```Dockerfile
FROM mcr.microsoft.com/azureml/aifx/stable-ubuntu2004-cu117-py38-torch1131
# language-modeling dependencies taken from: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/requirements.txt
RUN pip install accelerate datasets sentencepiece protobuf evaluate scikit-learn
# additional Hugging Face dependencies
RUN pip install optimum peft transformers
RUN git clone https://github.com/huggingface/optimum.git && \
cd optimum/examples/onnxruntime/language-modelling && \
python run_clm.py --model_name_or_path openlm-research/open_llama_7b_400bt_preview --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --output_dir output_dir --overwrite_output_dir --fp16 --deepspeed zero_stage_1.json --num_train_epochs 1 --logging_steps 1 --optim adamw_ort_fused
```
Who can review?
- text models: @ArthurZucker and @younesbelkada | 06-28-2023 21:15:25 | 06-28-2023 21:15:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker I addressed your commment, please have another look. Thank you.<|||||>@amyeroberts I addressed your comment. Please merge asap, thank you. |
transformers | 24,560 | closed | Update masked_language_modeling.md | Improves masked_language_modeling documentation. See https://github.com/huggingface/transformers/issues/24546
| 06-28-2023 21:14:05 | 06-28-2023 21:14:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,559 | closed | Fix Typo | # What does this PR do?
Fixed wrong annotation
- `(seq_len, BS, model_dim) -> (BS, seq_len, model_dim)` -> `(BS, seq_len, model_dim) -> (seq_len, BS, model_dim)`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? | 06-28-2023 21:04:36 | 06-28-2023 21:04:36 | Can you check if this is correct @ArthurZucker ?<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,558 | open | Error when setting a high batch-size: `AttributeError: 'NoneType' object has no attribute 'backward'` | ### System Info
Transformers version: latest@github
Accelerate version: latest@github
Deepspeed version: latest@github
### Who can help?
@pacman100 @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py
Use a high `per_device_batch_size` and let `Trainer` drop the batch size. Torchrun launcher with Deepspeed-Zero2.
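For reference, a minimal sketch of the arguments involved (the output dir and DeepSpeed config path are placeholders):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                  # placeholder
    per_device_train_batch_size=32,    # intentionally too large for one GPU
    auto_find_batch_size=True,         # lets the Trainer halve the batch size on OOM
    deepspeed="ds_config_zero2.json",  # placeholder ZeRO-2 config
    fp16=True,
)
```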
```
[INFO|trainer.py:1786] 2023-06-28 09:03:54,973 >> ***** Running training *****
[INFO|trainer.py:1787] 2023-06-28 09:03:54,973 >> Num examples = 338
[INFO|trainer.py:1788] 2023-06-28 09:03:54,973 >> Num Epochs = 4
[INFO|trainer.py:1789] 2023-06-28 09:03:54,973 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1790] 2023-06-28 09:03:54,973 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1791] 2023-06-28 09:03:54,973 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1792] 2023-06-28 09:03:54,973 >> Total optimization steps = 8
[INFO|trainer.py:1793] 2023-06-28 09:03:54,974 >> Number of trainable parameters = 8,388,608
0%| | 0/8 [00:00<?, ?it/s][INFO|trainer.py:1786] 2023-06-28 09:04:12,933 >> ***** Running training *****
[INFO|trainer.py:1787] 2023-06-28 09:04:12,933 >> Num examples = 338
[INFO|trainer.py:1788] 2023-06-28 09:04:12,934 >> Num Epochs = 4
[INFO|trainer.py:1789] 2023-06-28 09:04:12,934 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1790] 2023-06-28 09:04:12,934 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1791] 2023-06-28 09:04:12,934 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1792] 2023-06-28 09:04:12,934 >> Total optimization steps = 12
[INFO|trainer.py:1793] 2023-06-28 09:04:12,936 >> Number of trainable parameters = 8,388,608
0%| | 0/8 [00:16<?, ?it/s]
Traceback (most recent call last):t/s]
File "/app/finetune.py", line 796, in <module>
main()
File "/app/finetune.py", line 732, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/memory.py", line 132, in decorator
return function(batch_size, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1938, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2770, in training_step
self.accelerator.backward(loss)
File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1849, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
AttributeError: 'NoneType' object has no attribute 'backward'
```
In this case, I set the `per_device_train_batch_size` to 32 which is too large for an A100-80 (knowingly). Trainer drops the batch-size from 32 to 16 when it overflows (which is expected behavior) but then fails because of ` self.accelerator.backward(loss)`.
Don't see this issue when I set a batch-size that fits the GPU, only when it overflows. I suspect `accelerator.prepare` needs to be called again with the corrected batch-size.
### Expected behavior
Trainer drops the batch size from 32 to 16 and training continues without failure. | 06-28-2023 19:39:18 | 06-28-2023 19:39:18 | cc @muellerzr (?)<|||||>@pacman100 could there be something more I need to check/do related to the deepspeed plugin when doing this that we might be missing? (basically is there a separate param that we should set on the batch size for the train bs here)<|||||>I can repro this so let me know if you need more logs. I'm trying to debug this myself too.<|||||>@orangetin can you tell us more about the deepspeed configuration you are using, how you are launching the script, and the args used? It looks like deepspeed isn't being properly set in the Accelerator hence the issue (or something on those lines). I have a feeling if you don't use deepspeed it will work<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,557 | closed | Make PT/Flax tests could be run on GPU | # What does this PR do?
We don't have jax/flax on our CI runner, so no issue there. But when trying to run those tests on a GPU machine with jax/flax installed, we get an error. See comment. | 06-28-2023 17:43:47 | 06-28-2023 17:43:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,556 | closed | Update PT/Flax weight conversion after #24030 | # What does this PR do?
Similar to #24547 | 06-28-2023 17:21:37 | 06-28-2023 17:21:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,555 | closed | [`InstructBlip`] Add instruct blip int8 test | # What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24490#discussion_r1242098216
Also fixes an inconsistency with Blip / Blip2 processors so that users are able to call `.to()` with both device and dtype arguments to cast the input to half precision if necessary. Happy to move that to another PR if needed.
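For example, after this change something along these lines should work (a sketch, not the exact test added here; the checkpoint and image path are placeholders):
```python
import torch
from PIL import Image
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
image = Image.open("example.jpg")  # placeholder image

# both a device and a dtype can be passed in one call to cast the pixel values
inputs = processor(images=image, text="", return_tensors="pt").to("cuda", torch.float16)
```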
cc @sgugger @ydshieh
| 06-28-2023 16:37:26 | 06-28-2023 16:37:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,554 | closed | Fix processor __init__ bug if image processor undefined | # What does this PR do?
Fixes a bug which occurs if a processor is initialized with `image_processor=None`. An exception should be raised, saying an image processor should be defined, but at the moment it fails because the code references `feature_extractor`, which is not defined if it's not in the kwargs.
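A hypothetical repro of the failure mode (the processor class below is only an example of one that requires an image processor):
```python
from transformers import CLIPProcessor, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# with the fix, this raises a clear ValueError asking for an image processor,
# instead of tripping over an undefined `feature_extractor` reference
CLIPProcessor(image_processor=None, tokenizer=tokenizer)
```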
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-28-2023 15:33:46 | 06-28-2023 15:33:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,553 | closed | Update `EncodecIntegrationTest` | # What does this PR do?
Some tests fail since the addition of this model. This PR just updates the expected output values. | 06-28-2023 15:28:16 | 06-28-2023 15:28:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks @ydshieh - just to confirm, these updated values come from running the model on CUDA? (versus the original values, which were obtained on CPU)
Yes, it's on GPU. More specifically, on our CI runner's T4 GPU.<|||||>Perfect, thanks for updating the tests for CUDA! Also cc @ArthurZucker as a heads-up since we discussed this offline previously |
transformers | 24,552 | closed | Update old existing feature extractor references | # What does this PR do?
Updates a bunch of old references to feature extractors for vision models.
Most of the code isn't public facing in e.g. docs, but is often copied when new models are added.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-28-2023 14:52:27 | 06-28-2023 14:52:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Thanks for pointing out the extra places I missed. I've updated + other vision files needing the same update. |
transformers | 24,551 | closed | LoRA training adapter_model.bin is 888 bytes always | ### System Info
Ubuntu 22.04
Python 3.10.11
transformers 4.30.2
peft 0.4.0.dev0
accelerate 0.20.3
### Who can help?
@ArthurZucker
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code is at https://github.com/kallewoof/alpaca-lora/blob/202306-ooba-imports/finetune.py
You need a text file with text content (i.e. not instruction-based). Anything goes.
Run the above script: `python ./finetune.py --base_model=MODEL --raw_data_path=TEXTFILE --batch_size=32 --micro_batch_size=1 --num_epochs=2 --lora_r=128 --lora_alpha=256 --cutoff_len=512 --overlap_len=256 --save_steps=1`
### Expected behavior
Expected: After one (each) iteration, a checkpoint should be saved with an adapter_model.bin file that contains the LoRA weights.
Got: The checkpoints are made, but the adapter_model.bin file is only 888 bytes and does not grow (there are two versions; both are wrong, one contains only the LoRA stuff, the other one contains optimizer.pt, rng_state.pth, scheduler.pt, etc. which I have no idea why they are saved for a LoRA.)
Note: using the exact same conda environment, Oobabooga is able to generate LoRAs after loading a model in 4 bit with double quant, where the above finetune.py fails to do so. I have also verified that the resulting raw data dataset is identical between the two code bases. | 06-28-2023 14:22:37 | 06-28-2023 14:22:37 | cc @younesbelkada <|||||>I backtracked until I was on the original tloen/alpaca-lora script, using the original dataset on int8 model, and I am still seeing the 888 byte adapter_model.bin file, along with a bunch of other files that you normally don't see in a LoRA output (optimizer.pt, etc).<|||||>I am closing this as I don't think I will be able to provide adequate feedback for a clean fix, and I've moved on to a different approach. Sorry for wasted time. |
transformers | 24,550 | closed | fix type annotations for arguments in training_args | # What does this PR do?
Fixes #24538 which is basically fixing type annotations for `fsdp`, `fsdp_config`, and `sharded_ddp` in training_args.py
## Who can review?
maybe @sgugger | 06-28-2023 13:59:36 | 06-28-2023 13:59:36 | Fixing CI errors! <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I don't particularly understand why this error occurs in examples_flax - ``` argparse.ArgumentError: argument --sharded_ddp: invalid typing.Union[str, bool, typing.List[transformers.trainer_utils.ShardedDDPOption], NoneType] value: '' ```<|||||>@sgugger `bool` breaks `--sharded_ddp`, I think we can still maintain Boolean arguments with string itself and
https://github.com/huggingface/transformers/blob/20d6b84613984f2497587a62774704882ccbeee6/src/transformers/hf_argparser.py#L168-L173
with this `--sharded_ddp` and `--fsdp` defaults to string <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24550). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,549 | closed | Fix typing annotations for FSDP and DeepSpeed in TrainingArguments | # What does this PR do?
According to the docstrings and to the code, the options `fsdp_config` and `deepspeed` of `TrainingArguments` accept dictionaries with configs for FSDP and DeepSpeed respectively. However, the typing annotations for both of them mention only `str` as a valid argument, which makes type checkers fail when trying to pass a dictionary for these options.
This PR fixes the problem by making these options accept `Dict` as well, also fixing a couple of minor typos in their descriptions.
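For illustration, the kind of call the updated annotations are meant to allow (a minimal sketch with placeholder config values; it assumes DeepSpeed is installed):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # placeholder
    deepspeed={        # a dict instead of a path to a JSON config
        "train_micro_batch_size_per_gpu": "auto",
        "zero_optimization": {"stage": 2},
    },
)
```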
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger | 06-28-2023 13:57:34 | 06-28-2023 13:57:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||> argument --deepspeed: invalid Dict value: './ds_config_zero3.json'. I am facing this after the merge; I changed deepspeed to Optional[str] and it worked.<|||||>Ok, let's revert then as it's a purely cosmetic change. |
transformers | 24,548 | open | VS Code Pylance does not highlight transformers imports | ### System Info
When Pylance is used as the language server for VSCode, it does not highlight `transformers` imports even though the library is correctly installed. Class imports and the library name itself are gray instead of yellow; see the enclosed figure. I'm not sure if it relates to Pylance itself, but it would be nice if this behavior was fixed.

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Create and activate new virtual environment
2. Install transformers 4.30.2
3. Write into a new python module: `from transformers import BatchEncoding`
### Expected behavior
The import statement should be highlighted the same way as other libraries imports. | 06-28-2023 13:43:34 | 06-28-2023 13:43:34 | Sounds like an issue for VsCode or Pylance no?<|||||>This is probably the result of lazy module loading, or even the simple absence of a `__all__` (as far as I can see). This may not be something fixable within `transformers` but it does result from a design choice. https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py#L7139<|||||>The `__all__` is defined in the `_LazyModule` as you can see [here](https://github.com/huggingface/transformers/blob/6c57ce15587810968d64fb4b5700b63726397194/src/transformers/utils/import_utils.py#L1048).<|||||>> Sounds like an issue for VsCode or Pylance no?
Might be, but it's surprising that tokenizers and datasets, which are the two most closely related libraries in the HF ecosystem, are functioning correctly.<|||||>Pretty sure this is fixed now, per the above issue on the Pylance repo.
Fix repros on my side; if anyone else would care to confirm, we can probably close this issue. |
transformers | 24,547 | closed | Update PT/TF weight conversion after #24030 | # What does this PR do?
Update PT/TF weight conversion due to the change in #24030.
(can do PT/Flax too in the same PR, but request a review first anyway)
### Code snippet to show issues and verify this PR's effect
(Failing for `main + nightly torch`. Pass for `PR + nightly torch` and `main/PR + stable torch`)
```python
import transformers
from transformers import TFWav2Vec2Model
from tests.models.wav2vec2.test_modeling_tf_wav2vec2 import TFWav2Vec2ModelTest
self = TFWav2Vec2ModelTest()
self.setUp()
model_class = TFWav2Vec2Model
allow_missing_keys = False
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
pt_model_class_name = model_class.__name__[2:] # Skip the "TF" at the beginning
pt_model_class = getattr(transformers, pt_model_class_name)
tf_model = model_class(config)
pt_model = pt_model_class(config)
tf_inputs_dict = self._prepare_for_class(inputs_dict, model_class)
# Check we can load pt model in tf and vice-versa with model => model functions
try:
    _tf_model = transformers.load_pytorch_model_in_tf2_model(
        tf_model, pt_model, tf_inputs=tf_inputs_dict, allow_missing_keys=allow_missing_keys
    )
except:
    _tf_model = None
try:
    _pt_model = transformers.load_tf2_model_in_pytorch_model(
        pt_model, tf_model, allow_missing_keys=allow_missing_keys
    )
except:
    _pt_model = None
if _tf_model is None:
    print("_tf_model fails")
else:
    print("_tf_model OK")
if _pt_model is None:
    print("_pt_model fails")
else:
    print("_pt_model OK")
``` | 06-28-2023 13:03:31 | 06-28-2023 13:03:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,546 | closed | DataCollatorForLanguageModeling call of tokenizer.pad causes crash | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
multiple_length_batch = [
{
"input_ids": [0, 51, 51, 2],
"labels": [0, 51, 51, 2],
},
{
"input_ids": [0, 10, 11, 12, 13, 2],
"labels": [0, 10, 11, 12, 13, 2],
},
]
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm_probability=0.15,
)
data_collator.torch_call(multiple_length_batch)
```
Causes
```sh
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
I simplified the problem analysis; the minimal example of the bug reproduction was found by calling a trainer using this same data collator (the failing function gets called here: https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L45)
In the case input_ids and labels are not padded manually before, this causes DataCollatorForLanguageModeling to crash here :
https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L732
, before "labels" are padded a few lines later, here:
https://github.com/huggingface/transformers/blob/e84bf1f734f87aa2bedc41b9b9933d00fc6add98/src/transformers/data/data_collator.py#L748
I suspect the conversion to pytorch tensor should be done after labels are also padded by this line and not before.
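For comparison, the same collator applied to a batch without a precomputed `labels` column (using the objects defined in the snippet above) does not crash, since the collator creates the labels itself:
```python
batch_without_labels = [
    {"input_ids": [0, 51, 51, 2]},
    {"input_ids": [0, 10, 11, 12, 13, 2]},
]
data_collator.torch_call(batch_without_labels)  # pads and builds `labels` internally
```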
| 06-28-2023 12:18:06 | 06-28-2023 12:18:06 | This data collator is not meant to be used with labels in your data. It builds the labels from the input IDs (either by masking random tokens or copying the inputs).<|||||>I understand, thank you. Then this line in the documentation is misleading, since it prepares a 'labels' feature (which works in the example because all samples are the same size):
https://github.com/huggingface/transformers/blob/daccde143d646e4fec8d52cc870b8c7fd1d2581c/docs/source/en/tasks/masked_language_modeling.md?plain=1#L171
Maybe in the case a 'labels' feature is provided, the error could be caught earlier than trying to convert to pytorch tensor ?<|||||>Yes this line should probably be removed. I think it's a copy-paste from a case in our examples where we use the standard data collator after.<|||||>Ok thank you I will pull-request this line removal. |
transformers | 24,545 | open | open_llama tokenization modules import | ### System Info
I stumbled upon this issue, which is not an issue with the OpenLLaMa implementation; I did not try to use OpenLLaMA.
I am importing `transformers.pipeline` in a project including tests using [freezegun](https://github.com/spulec/freezegun) to freeze dates. It seems like freezegun recursively checks all imports of all imported modules, and I am getting the following error:
`ModuleNotFoundError: No module named 'transformers.models.open_llama.tokenization_open_llama'`
and the same for `transformers.models.open_llama.tokenization_open_llama_fast`.
This is probably just an import error, since it seems that open_llama uses `LlamaTokenizer` and `LlamaTokenizerFast`; creating stubs for `transformers.models.open_llama.tokenization_open_llama` and `transformers.models.open_llama.tokenization_open_llama_fast` seems to solve the import issue and tests run just fine.
### Who can help?
@s-JoL
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal code to reproduce:
```
from freezegun import freeze_time

@freeze_time('2022-01-01 12:01:00')
def test_a():
    from transformers import pipeline
```
### Expected behavior
no import errors | 06-28-2023 11:01:55 | 06-28-2023 11:01:55 | There is not a single line in the library trying that import, so it looks like an issue on `freezegun` more than on Transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Just hit this issue too with `freezegun`
<img width="1899" alt="image" src="https://github.com/huggingface/transformers/assets/4619775/a2149e8a-19ac-493f-8284-c26109621b88">
<|||||>While this might be an issue with how freezegun loads dependencies, `freezegun` did not come up with those names: `transformers.models.open_llama.tokenization_open_llama`, `transformers.models.open_llama.tokenization_open_llama_fast`. They are referenced [here](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/deprecated/open_llama/__init__.py#L35).
The temporary workaround I used is to create stubs for them using [surrogate](https://github.com/ikostia/surrogate) like this:
```
def pytest_sessionstart(session):
    @surrogate('transformers.models.open_llama.tokenization_open_llama')
    @surrogate('transformers.models.open_llama.tokenization_open_llama_fast')
    def freezegun_initial_import():
        pass

    freezegun_initial_import()
```<|||||>@npapasarantopoulos The link you show gives the name `tokenization_open_llama` defined in `transformers.models.deprecated.open_llama`, so it does seem like `freezegun` is making those names up. |
transformers | 24,544 | open | Issue while training Donut model for parsing with custom decoder and tokenizer | Hey all, I was trying to train a Donut model for parsing documents that contain Arabic-only information. In order to achieve this I collected an `Arabic corpus` from various sources and then trained:
1. `Mbart Tokenizer` for arabic corpus.
2. `Mbart decoder` with the same dataset.
Initially the model was training well, meaning the loss was decreasing gradually, but during validation all my dataset tokens are predicted as `<UNK>` tokens. Because of this the `Normed ED` value is above `0.9`, yet the loss is still decreasing.
Is there anything I am missing? Any inputs will help a lot. @gwkrsrch, @Vadkoz, @NielsRogge
Thanks regards. | 06-28-2023 10:59:58 | 06-28-2023 10:59:58 | Hi @akashlp27
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค
However, you might want to take a look
https://github.com/huggingface/transformers/issues/18190#issuecomment-1273482690
https://github.com/huggingface/transformers/issues/18190#issuecomment-1216584872
(Not sure though)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,543 | closed | [`gpt2-int8`] Add gpt2-xl int8 test | # What does this PR do?
Addresses: https://github.com/huggingface/transformers/pull/24504#discussion_r1243813429
Currently the test is failing because of the following reason (that I need to explore and as discussed offline @sgugger):
1- If a buffer is defined as `persistent=False` in the modeling file
2- and not present in the state_dict
The dispatch_model seems to fail as I face a device mismatch issue. I think a potential fix needs to be addressed in accelerate.
Facing the same issue for blip2 int8 tests, as the `cos_cached` and `sin_cached` are defined as buffers with persistent=False and not present in the state dict.
Seems that the issue does not happen before 5791d949ff93733c102461ba89c8310745a3fa79 on accelerate
cc @sgugger
EDIT: https://github.com/huggingface/accelerate/pull/1660 should fix the issue I am facing | 06-28-2023 10:22:01 | 06-28-2023 10:22:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can confirm all the tests pass now (with the accelerate PR mentioned above) |
transformers | 24,542 | closed | Memory leak after repeated inference | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I am using the following script, the memory rises slowly and a memory leak happens. I changed the transformers version and the torch version, but it doesn't help. When using TFBertModel instead of BertModel, or moving the model to CPU, the memory leak still exists.
##########
from transformers import BertTokenizer, BertModel
import torch
import psutil
import os

device = "cuda:0"
model_path = "bert-base-uncased"
model = BertModel.from_pretrained(model_path)
tokenizer = BertTokenizer.from_pretrained(model_path)
model.to(device)
model.eval()
query = "Replace me by any text you'd like."
for i in range(10000):
    with torch.no_grad():
        encoded_input = tokenizer(query, return_tensors='pt').to(device)
        a = psutil.Process(os.getpid()).memory_info().rss / 1024  # memory before
        output = model(**encoded_input)
        b = psutil.Process(os.getpid()).memory_info().rss / 1024  # memory after
        if b != a and i > 0:
            print(i)
            print("b-a=%s kb" % (b - a))
##########
result:
62
b-a=80.0 kb
6195
b-a=8.0 kb
### Expected behavior
Memory is almost stable. | 06-28-2023 07:39:29 | 06-28-2023 07:39:29 | Hi @huu3301
Please format the code snippet properly (with proper indent too). See for example
<img width="332" alt="Screenshot 2023-06-27 111112" src="https://github.com/huggingface/transformers/assets/2521628/ec2852fb-695a-456b-b09f-8f99ef0bdd30">
Regarding your question about leak:
Do you only get iterations `62` and `6195` with `b != a`? In that case, there is no memory leak: it's just the Python process using a bit more memory to perform something under the hood.
<|||||>@ydshieh
The indent disappeared when I submitted the issue.
I increased the number of iterations to 1,000,000 and the memory increase was less than 50 kB. There was no memory leak. Thank you for answering. |
transformers | 24,541 | closed | Unpin DeepSpeed and require DS >= 0.9.3 | # What does this PR do?
From @pacman100
> (for accelerate) the minimum DeepSpeed version now is 0.9.3
| 06-28-2023 07:28:27 | 06-28-2023 07:28:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,540 | closed | Issue Loading 4-bit and 8-bit language models: ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`. | ### System Info
### System Info
I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMa models. This was working a couple of weeks ago. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working when it was 2-3 weeks ago around June 8th.
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running in Colab on an A100 in Colab Pro
```
!pip install git+https://www.github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate
!pip install bitsandbytes
!pip install einops
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-40b-instruct"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
input_text = "Describe the solar system."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Cell output:
```
Collecting git+https://www.github.com/huggingface/transformers
Cloning https://www.github.com/huggingface/transformers to /tmp/pip-req-build-6pyatvel
Running command git clone --filter=blob:none --quiet https://www.github.com/huggingface/transformers /tmp/pip-req-build-6pyatvel
warning: redirecting to https://github.com/huggingface/transformers.git/
Resolved https://www.github.com/huggingface/transformers to commit e84bf1f734f87aa2bedc41b9b9933d00fc6add98
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (3.12.2)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.31.0.dev0)
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 11.6 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2022.10.31)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2.27.1)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.31.0.dev0)
Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 114.2 MB/s eta 0:00:00
Collecting safetensors>=0.3.1 (from transformers==4.31.0.dev0)
Downloading safetensors-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.3/1.3 MB 79.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (4.65.0)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (4.6.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (3.4)
Building wheels for collected packages: transformers
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.31.0.dev0-py3-none-any.whl size=7228417 sha256=5867afa880111a40f7b630e51d9f1709ec1131236a31c2c7fb5f97179e3d1405
Stored in directory: /tmp/pip-ephem-wheel-cache-t06u3u6x/wheels/c1/ac/11/e69d454307e735e14f4f95e575c8be27fd99835ec36f504c13
Successfully built transformers
Installing collected packages: tokenizers, safetensors, huggingface-hub, transformers
Successfully installed huggingface-hub-0.15.1 safetensors-0.3.1 tokenizers-0.13.3 transformers-4.31.0.dev0
Collecting git+https://github.com/huggingface/accelerate
Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-76ziff6x
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-76ziff6x
Resolved https://github.com/huggingface/accelerate to commit d141b4ce794227450a105b7281611c7980e5b3d6
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (23.1)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (5.9.5)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (6.0)
Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (2.0.1+cu118)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (4.6.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (1.11.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1.2)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (16.0.6)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->accelerate==0.21.0.dev0) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.6.0->accelerate==0.21.0.dev0) (1.3.0)
Building wheels for collected packages: accelerate
Building wheel for accelerate (pyproject.toml) ... done
Created wheel for accelerate: filename=accelerate-0.21.0.dev0-py3-none-any.whl size=234648 sha256=71b98a6d4b1111cc9ca22265f6699cd552325e5f71c83daebe696afd957497ee
Stored in directory: /tmp/pip-ephem-wheel-cache-atmtszgr/wheels/f6/c7/9d/1b8a5ca8353d9307733bc719107acb67acdc95063bba749f26
Successfully built accelerate
Installing collected packages: accelerate
Successfully installed accelerate-0.21.0.dev0
Collecting bitsandbytes
Downloading bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 97.1/97.1 MB 18.8 MB/s eta 0:00:00
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.39.1
Collecting einops
Downloading einops-0.6.1-py3-none-any.whl (42 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 42.2/42.2 kB 3.8 MB/s eta 0:00:00
Installing collected packages: einops
Successfully installed einops-0.6.1
Downloading (โฆ)lve/main/config.json: 100%
658/658 [00:00<00:00, 51.8kB/s]
Downloading (โฆ)/configuration_RW.py: 100%
2.51k/2.51k [00:00<00:00, 227kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)main/modelling_RW.py: 100%
47.1k/47.1k [00:00<00:00, 3.76MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)model.bin.index.json: 100%
39.3k/39.3k [00:00<00:00, 3.46MB/s]
Downloading shards: 100%
9/9 [04:40<00:00, 29.33s/it]
Downloading (โฆ)l-00001-of-00009.bin: 100%
9.50G/9.50G [00:37<00:00, 274MB/s]
Downloading (โฆ)l-00002-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 340MB/s]
Downloading (โฆ)l-00003-of-00009.bin: 100%
9.51G/9.51G [00:28<00:00, 320MB/s]
Downloading (โฆ)l-00004-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 317MB/s]
Downloading (โฆ)l-00005-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 210MB/s]
Downloading (โฆ)l-00006-of-00009.bin: 100%
9.51G/9.51G [00:34<00:00, 180MB/s]
Downloading (โฆ)l-00007-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 307MB/s]
Downloading (โฆ)l-00008-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 504MB/s]
Downloading (โฆ)l-00009-of-00009.bin: 100%
7.58G/7.58G [00:27<00:00, 315MB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-b20acq94qsrp --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
Loading checkpoint shards: 100%
9/9 [05:45<00:00, 35.83s/it]
Downloading (โฆ)neration_config.json: 100%
111/111 [00:00<00:00, 10.3kB/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-1-c89997e10ae9>](https://localhost:8080/#) in <cell line: 15>()
13
14 config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
---> 15 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
16
17 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in to(self, *args, **kwargs)
1894 # Checks if the model has been loaded in 8-bit
1895 if getattr(self, "is_quantized", False):
-> 1896 raise ValueError(
1897 "`.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the"
1898 " model has already been set to the correct devices and casted to the correct `dtype`."
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Expected behavior
Model should be loaded and able to run inference. | 06-28-2023 06:07:36 | 06-28-2023 06:07:36 | Hi @DJT777
Thanks for the report
Are you using the main branch of accelerate + single GPU? If that's the case https://github.com/huggingface/accelerate/pull/1652 should solve the issue. Will try to reproduce later without that fix<|||||>I wasn't able to test it using that commit. However running everything with the versioning from my June 8th run got the model loaded back up again. I am using this to run the notebook:
!pip install git+https://www.github.com/huggingface/transformers@2e2088f24b60d8817c74c32a0ac6bb1c5d39544d
!pip install huggingface-hub==0.15.1
!pip install tokenizers==0.13.3
!pip install safetensors==0.3.1
!pip install git+https://github.com/huggingface/accelerate@040f178569fbfe7ab7113af709dc5a7fa09e95bd
!pip install bitsandbytes==0.39.0
!pip install einops==0.6.1
<|||||>Thanks @DJT777
Can you try with `pip install git+https://github.com/huggingface/accelerate.git@fix-to-int8` ? <|||||>@younesbelkada
I'll have an attempt at running things again with that.<|||||>Great thanks! <|||||>> Thanks @DJT777
>
> Can you try with `pip install git+https://github.com/huggingface/accelerate.git@fix-to-int8` ?
Using https://github.com/huggingface/accelerate@d1628ee, didn't solve.<|||||>I went for
!pip install git+https://github.com/huggingface/transformers.git@6ce6d62b6f20040129ec9831e7c4f6576402ea42
!pip install git+https://github.com/huggingface/accelerate.git@5791d949ff93733c102461ba89c8310745a3fa79
!pip install git+https://github.com/huggingface/peft.git@e2b8e3260d3eeb736edf21a2424e89fe3ecf429d
!pip install transformers[deepspeed]
I had to include transformers[deepspeed] yesterday, and earlier today I had to cherrypick commits to make things work..
Development is going so fast, hard to keep up with every change ๐
<|||||>Hi @DJT777
I just ran the script below:
```python
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-40b-instruct"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
input_text = "Describe the solar system."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=10)
print(tokenizer.decode(outputs[0]))
```
and transformers' main branch & the `fix-to-int8` branch of accelerate and I can confirm the script worked fine. I am running on 2x NVIDIA T4 16GB<|||||>@younesbelkada
I'm not able to confirm if it is working in Colab.<|||||>I get the same error in Google Colab ("ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`."), things were working perfectly well yesterday... Copy-pasting this code in a Colab notebook cell and running it might allow for the reproduction of that error:
```python
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q datasets
!pip install -q einops
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "ybelkada/falcon-7b-sharded-bf16"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, trust_remote_code=True, device_map={"":0})
```
Notebook settings/runtime type are/is:
- Runtime type = Python 3
- GPU = T4<|||||>Hi @Maaalik
I can confirm the PR mentioned above on accelerate fixes your issue on GColab, can you try on a new runtime / fresh environment:
```python
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git@fix-to-int8
!pip install -q datasets
!pip install -q einops
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "ybelkada/falcon-7b-sharded-bf16"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, trust_remote_code=True, device_map={"":0})
```
I just tested it on GColab<|||||>Works like a charm! Thank you very much, @younesbelkada!<|||||>https://github.com/huggingface/accelerate/pull/1652 being merged you can now install `accelerate` from source and it should work |
transformers | 24,539 | closed | 4-Bit and 8-Bit Models not being loaded | ### System Info
I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMa models. This was working a couple of weeks ago. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working when it was 2-3 weeks ago around June 8th.
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running in Colab on an A100 in Colab PRro
```
!pip install git+https://www.github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate
!pip install bitsandbytes
!pip install einops
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer
import torch
model_path="tiiuae/falcon-40b-instruct"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
input_text = "Describe the solar system."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Cell output:
```
Collecting git+https://www.github.com/huggingface/transformers
Cloning https://www.github.com/huggingface/transformers to /tmp/pip-req-build-6pyatvel
Running command git clone --filter=blob:none --quiet https://www.github.com/huggingface/transformers /tmp/pip-req-build-6pyatvel
warning: redirecting to https://github.com/huggingface/transformers.git/
Resolved https://www.github.com/huggingface/transformers to commit e84bf1f734f87aa2bedc41b9b9933d00fc6add98
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (3.12.2)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.31.0.dev0)
Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 236.8/236.8 kB 11.6 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2022.10.31)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2.27.1)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.31.0.dev0)
Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 7.8/7.8 MB 114.2 MB/s eta 0:00:00
Collecting safetensors>=0.3.1 (from transformers==4.31.0.dev0)
Downloading safetensors-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.3/1.3 MB 79.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (4.65.0)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (4.6.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (3.4)
Building wheels for collected packages: transformers
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.31.0.dev0-py3-none-any.whl size=7228417 sha256=5867afa880111a40f7b630e51d9f1709ec1131236a31c2c7fb5f97179e3d1405
Stored in directory: /tmp/pip-ephem-wheel-cache-t06u3u6x/wheels/c1/ac/11/e69d454307e735e14f4f95e575c8be27fd99835ec36f504c13
Successfully built transformers
Installing collected packages: tokenizers, safetensors, huggingface-hub, transformers
Successfully installed huggingface-hub-0.15.1 safetensors-0.3.1 tokenizers-0.13.3 transformers-4.31.0.dev0
Collecting git+https://github.com/huggingface/accelerate
Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-76ziff6x
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-76ziff6x
Resolved https://github.com/huggingface/accelerate to commit d141b4ce794227450a105b7281611c7980e5b3d6
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (1.22.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (23.1)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (5.9.5)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (6.0)
Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (2.0.1+cu118)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (4.6.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (1.11.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1.2)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (16.0.6)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->accelerate==0.21.0.dev0) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.6.0->accelerate==0.21.0.dev0) (1.3.0)
Building wheels for collected packages: accelerate
Building wheel for accelerate (pyproject.toml) ... done
Created wheel for accelerate: filename=accelerate-0.21.0.dev0-py3-none-any.whl size=234648 sha256=71b98a6d4b1111cc9ca22265f6699cd552325e5f71c83daebe696afd957497ee
Stored in directory: /tmp/pip-ephem-wheel-cache-atmtszgr/wheels/f6/c7/9d/1b8a5ca8353d9307733bc719107acb67acdc95063bba749f26
Successfully built accelerate
Installing collected packages: accelerate
Successfully installed accelerate-0.21.0.dev0
Collecting bitsandbytes
Downloading bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 97.1/97.1 MB 18.8 MB/s eta 0:00:00
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.39.1
Collecting einops
Downloading einops-0.6.1-py3-none-any.whl (42 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 42.2/42.2 kB 3.8 MB/s eta 0:00:00
Installing collected packages: einops
Successfully installed einops-0.6.1
Downloading (โฆ)lve/main/config.json: 100%
658/658 [00:00<00:00, 51.8kB/s]
Downloading (โฆ)/configuration_RW.py: 100%
2.51k/2.51k [00:00<00:00, 227kB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- configuration_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)main/modelling_RW.py: 100%
47.1k/47.1k [00:00<00:00, 3.76MB/s]
A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct:
- modelling_RW.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading (โฆ)model.bin.index.json: 100%
39.3k/39.3k [00:00<00:00, 3.46MB/s]
Downloading shards: 100%
9/9 [04:40<00:00, 29.33s/it]
Downloading (โฆ)l-00001-of-00009.bin: 100%
9.50G/9.50G [00:37<00:00, 274MB/s]
Downloading (โฆ)l-00002-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 340MB/s]
Downloading (โฆ)l-00003-of-00009.bin: 100%
9.51G/9.51G [00:28<00:00, 320MB/s]
Downloading (โฆ)l-00004-of-00009.bin: 100%
9.51G/9.51G [00:33<00:00, 317MB/s]
Downloading (โฆ)l-00005-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 210MB/s]
Downloading (โฆ)l-00006-of-00009.bin: 100%
9.51G/9.51G [00:34<00:00, 180MB/s]
Downloading (โฆ)l-00007-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 307MB/s]
Downloading (โฆ)l-00008-of-00009.bin: 100%
9.51G/9.51G [00:27<00:00, 504MB/s]
Downloading (โฆ)l-00009-of-00009.bin: 100%
7.58G/7.58G [00:27<00:00, 315MB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-b20acq94qsrp --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
Loading checkpoint shards: 100%
9/9 [05:45<00:00, 35.83s/it]
Downloading (โฆ)neration_config.json: 100%
111/111 [00:00<00:00, 10.3kB/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-1-c89997e10ae9>](https://localhost:8080/#) in <cell line: 15>()
13
14 config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
---> 15 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto")
16
17 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in to(self, *args, **kwargs)
1894 # Checks if the model has been loaded in 8-bit
1895 if getattr(self, "is_quantized", False):
-> 1896 raise ValueError(
1897 "`.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the"
1898 " model has already been set to the correct devices and casted to the correct `dtype`."
ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Expected behavior
Model should be loaded and able to run inference. | 06-28-2023 06:04:44 | 06-28-2023 06:04:44 | Hello I know it's a bit off-topic. I'm actually never inferred a model than runs in either 4-bit and 8-bit, but i'm really wondering. Does the model will inference much slower if we infer it in 8-bit quantized model or will be much faster than the fp16/fp32 models?
Thank you in advance if you have this kind of information and wanted to share |
transformers | 24,538 | closed | Incorrect typing of `fsdp`, `fsdp_config`, and `sharded_ddp` in `TrainingArguments` | ### System Info
- `transformers` version: 4.29.2
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.0.dev20220902 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When initializing a `pydantic.BaseModel` as follows:
```python
from pydantic import BaseModel
from transformers.training_args import TrainingArguments
class MyTrainingArguments(TrainingArguments):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.my_arg = "my_arg"
class MyModel(BaseModel):
training_args: MyTrainingArguments
model = MyModel(training_args=MyTrainingArguments(output_dir=""))
```
The following `ValidationErrors` occur:
```shell
ValidationError: 4 validation errors for MyModel
training_args -> debug
str type expected (type=type_error.str)
training_args -> sharded_ddp
str type expected (type=type_error.str)
training_args -> fsdp
str type expected (type=type_error.str)
training_args -> fsdp_config
str type expected (type=type_error.str)
```
Since `debug` has been fixed in #24033, my main concern is the others.
After investigation, I discovered that the `__post_init__()`-method changes these parameters from their default `str` values to for example `dict`, `bool`, or `List`. This becomes a problem for Pydantic (and other type-checkers) since the validation will be incorrect, while the docstring of `TrainingArguments` describes the following for these parameters:
```python
"""
sharded_ddp (`bool`, `str` or list of [`~trainer_utils.ShardedDDPOption`], *optional*, defaults to `False`)
fsdp (`bool`, `str` or list of [`~trainer_utils.FSDPOption`], *optional*, defaults to `False`)
fsdp_config (`str` or `dict`, *optional*)
"""
```
### Expected behavior
I would like to resolve these issues by providing the correct typehinting. This could look as follows:
```python
sharded_ddp: Union[Optional[str], bool, List[ShardedDDPOption]]
fsdp: Union[Optional[str], bool, List[FSDPOption]]
fsdp_config: Union[Optional[str], Dict]
```
I checked this configuration and it resolves the issue. | 06-28-2023 05:39:18 | 06-28-2023 05:39:18 | We do not support any automatic type checker and our type annotations are only here to give documentation. We welcome PRs to make them more exact as long as it's not at the cost of code readability but the bottomline is that you shouldn't use a type-checker with hard errors on Transformers.<|||||>@sgugger So should I create a PR for this? |
transformers | 24,537 | closed | Finetuning Whisper with multi-languages | ### Feature request
Finetuning Whisper with multi-languages
### Motivation
Finetuning Whisper with multi-languages
### Your contribution
Finetuning Whisper with multi-languages | 06-28-2023 03:19:45 | 06-28-2023 03:19:45 | |
transformers | 24,536 | open | Add Classifier-Free Guidance sampling | EDIT: ===========================
As I see many people copy pasting this initial code that was meant to be a basis for discussion, here is a cleaner version (yet not perfect! We're still doing improvement rounds with the huggingface team to improve it! Check the state of the PR until it's not merged! https://github.com/huggingface/transformers/pull/24654 ).
```python
import torch
import torch.nn.functional as F
from transformers import LogitsProcessor
class CFGLogits(LogitsProcessor):
r"""Logits processor for Classifier-Free Guidance (CFG). The processors
computes a weighted average across scores from prompt conditional and prompt unconditional (or negative) logits,
parameterized by the `guidance_scale`. The unconditional scores are computed internally by prompting `model` with
the `uncond` branch. Finally, according to CFG Rescale, the reweighted logits are interpolated back with weight
`rescale_factor` the conditional ones to smooth the effect and increase output quality.
See [the paper](https://arxiv.org/abs/2306.17806) for more information.
Args:
guidance_scale (float):
The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`.
Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer quality.
uncond (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary for the unconditional branch.
        model:
            The LM computing the unconditional scores. Supposedly the same as the one computing the conditional scores.
            Both models must use the same tokenizer.
        rescale_factor (float, *optional*, defaults to 1.0):
            CFG Rescale interpolation weight applied to the guided logits when mixing them back with the
            conditional scores. The default of 1.0 disables this step.
    """
    def __init__(self, guidance_scale, uncond, model, rescale_factor=1.0):
        self.guidance_scale = guidance_scale
        self.uncond = uncond
        self.model = model
        self.out = None
        self.rescale_factor = rescale_factor
def __call__(self, input_ids, scores):
scores = F.log_softmax(scores, dim=-1)
if self.guidance_scale == 1:
return scores
if self.out is None:
self.out = self.model(self.uncond, use_cache=True)
else:
self.out = self.model(
input_ids[:, -1:],
use_cache=True,
past_key_values=self.out.past_key_values,
)
        # note: `logits[0]` assumes a single sequence (batch size 1)
        unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
        out = self.guidance_scale * (scores - unconditional_logits) + unconditional_logits
        if self.rescale_factor == 1.0:
            return out
        # CFG Rescale: interpolate the guided logits back with the conditional scores
        out = F.log_softmax(out, dim=-1)
        return self.rescale_factor * out + (1 - self.rescale_factor) * scores
# paper usage: (copying and editing @grantCelley 's answer)
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LogitsProcessorList, TemperatureLogitsWarper, TopPLogitsWarper
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
prompt = tokenizer("Today a dragon flew over Paris, France,", return_tensors='pt')
# either provide a negative prompt:
neg_prompt = tokenizer("A sad event happened,", return_tensors='pt')['input_ids']
# or don't:
# neg_prompt = prompt['input_ids'][:, -1:]
device='cuda:0'
model.to(device)
outputs = model.generate(
input_ids=prompt['input_ids'].to(device),
attention_mask=prompt['attention_mask'].to(device),
max_new_tokens=125,
logits_processor=LogitsProcessorList([
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(1.5, neg_prompt.to(device), model),
TemperatureLogitsWarper(0.8),
TopPLogitsWarper(0.95),
]),
do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
===============================
### Feature request
Hello!
I wish to contribute CFG sampling. I'm working with EleutherAI and @StellaAthena and will have a paper about it by Friday. CFG brings non-trivial improvements on many standard benchmarks. It contrasts the logits for the next token, $P(w_t|w_{..t}, prompt)$, with those for the input deprived of the prompt, $P(w_t|w_{..t})$, by defining
$$
\log P_{\text{cfg}}(w|w_{..t}, prompt) = \log P(w|w_{..t}) + \text{cfg} \cdot \left(\log P(w|w_{..t}, prompt) - \log P(w|w_{..t})\right)
$$
We can then blend $\log P_{\text{cfg}}$ with $\log P(w|w_{..t}, prompt)$ to smooth that distribution a bit, but it's optional.
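Concretely, one way to write this optional blend, with an interpolation weight $\alpha$ (the implementation below uses $\alpha = 0.7$), is
$$
\log P_{\text{final}}(w|w_{..t}, prompt) = \alpha \log P_{\text{cfg}}(w|w_{..t}, prompt) + (1 - \alpha) \log P(w|w_{..t}, prompt)
$$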
### Motivation
My current implementation is:
```python
class CFGLogits(LogitsWarper):
def __init__(self, cfg, inputs, model, verbose=True):
self.cfg = cfg
self.inputs = inputs
self.model = model
self.out = None
self.verbose = verbose
def __call__(self, input_ids, scores):
if self.cfg == 1:
return F.log_softmax(scores, dim=-1)
scores = F.log_softmax(scores, dim=-1)
if self.out is None:
self.out = self.model(self.inputs.to(device), use_cache=True)
else:
self.out = self.model(input_ids[:, -1:],
use_cache=True,
past_key_values=self.out.past_key_values)
unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
out = self.cfg * (scores - unconditional_logits) + unconditional_logits
out = F.log_softmax(out, dim=-1)
return 0.7 * out + 0.3 * scores
# usage:
outputs = model.generate(
input_ids=inputs['input_ids'].to(device),
attention_mask=inputs['attention_mask'].to(device),
max_new_tokens=l,
logits_processor=LogitsProcessorList([
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(cfg, inputs_cfg, model),
TemperatureLogitsWarper(0.8),
TopPLogitsWarper(0.95),
]),
do_sample=True,
)
```
I am not familiar enough with the design guidelines of HF to know if this implementation as a LogitsWarper is satisfactory.
just a few figures supporting the claims:





### Your contribution
I can contribute the code but I need to be guided as I don't know the exact design guidelines and overall architecture of HF.
Thank you for your time! | 06-28-2023 02:09:29 | 06-28-2023 02:09:29 | cc @gante
But let's see if the community requests this added feature before implementing it in the library proper :-)<|||||>Hey @Vermeille ๐
I have the impression that our MusicGen PR (still open, expected to get merged soon) introduces the bulk of the logic to make it happen -- see [this file](https://github.com/huggingface/transformers/pull/24109/files#diff-d23b812af8462833ad280d968f3e6e2ee7558bacfc2716cdde44a07bead5e065R1070)
It is the same thing with a slightly different code implementation, correct? In the MusicGen PR, the model does a forward pass with 2x the batch size, where half of the batch corresponds to the unprompted tokens<|||||>Indeed @gante !
I don't fully get how the 2x batch size thing works, but if it does, it's cool.
The paper makes some more additions to that base implementation:
1) the `uncond_logits` might in fact have a different prompt than the `cond_logits`, which is commonly called "negative prompt".
2) the comment says "usually at the expense of poorer quality". This can be mitigated by linearly interpolating the CFG `scores` back with the initial `scores`
3) We had better results `log_softmax`ing both scores before cfg, which normalizes both logits sets to a common "scale".<|||||>cc @sanchit-gandhi, who's probably better equipped to comment on potential differences :)<|||||>Hey @Vermeille - thanks for the comprehensive write-up! Just a clarifying question: in your implementation, how do you construct the token ids for the model based on the conditional ids and the un-conditional ones? You mention:
> inputs_cfg usually is the last token of the prompt but there are
Which suggests you concatenate them together in the same batch item?
In MusicGen (and also the HF Diffusers library for models like Stable Diffusion), we construct our input ids by concatenating the input ids for the conditional prompt and the un-conditional prompt along the batch dimension (`dim=0`):
```python
input_ids = torch.concatenate([conditional_ids, unconditional_ids], dim=0)
```
This is what's referred to by the 2x batch size 'trick' (concatenating the conditional prompt and unconditional prompt over the batch dim). There's no restriction to how these unconditional ids are formed - they can be from a 'null' input, or from a negative prompt. So we can do negative prompting in exactly the way you've described.
When we run our model forward, the logits for the first half of the batch corresponds to the conditional prompt, and the second half to the unconditional prompt (or negative prompt if we use one).
By splitting along the batch dim, we can partition the conditional logits and the unconditional ones:
```python
conditional_logits, unconditional_logits = torch.split(logits, batch_size // 2)
```
-> we then perform our weighted sum over the conditional and unconditional logits for CFG.
Hope that explains how the 2x batch size trick works - would be keen to hear whether this aligns with how you've run CFG in your experiments.
Regarding implementing a new logits processor, we'd probably want to add this new logits processor when the time comes for integrating the model you've worked on into `transformers`, rather than adding it solely as a standalone logits processor. `transformers` is less of a modular toolbox for building new models, more a library for housing the most popular OS ML models
Have you trained a new model that uses this processor? Or built on-top of an existing one? (if it's the latter, then adding the CFG logits processor standalone makes sense, otherwise let's integrate it all in one go)<|||||>Thank you for your detailed answer @sanchit-gandhi !
The part I'm most unclear about regarding the 2x batch trick is how the sampling happens. Do you actually sample the same continuation token for the conditional and unconditional branch, or do they diverge in their own direction (which would be weird imho)?
Regarding the integration, _there is no need to train models to support CFG, it works out of the box_. The paper will be out in few days, but as you can see on the figures, we employed it with LLaMA models, all Pythias, GPT-2 family, and even GPT4All. We don't train a new model. It's meant to be an addition to the .generate() method that is totally model agnostic and don't need training nor finetuning. Hence the PR with the standalone logits processor :)<|||||>[The paper is out](https://arxiv.org/abs/2306.17806)<|||||>Maybe this helps!
Pre-processing:
* conditional text -> `conditional_ids` (bsz)
* negative text -> `unconditional_ids` (bsz)
* `input_ids` = `[conditional_ids, unconditional_ids]` (2 * bsz since we've done a concat)
Forward pass:
* `logits` (2 * bsz since they come from the `input_ids`)
CFG:
* `conditional_logits`, `unconditional_logits` = `logits[:bsz]`, `logits[bsz:]` (so each one is bsz since we've done a split)
* `scores` = weighted_sum(`conditional_logits`, `unconditional_logits`; `guidance_scale`) (bsz)
Sampling:
* next token = sample(`scores`) (bsz num tokens -> we combined the cond/uncond logits to get the scores, so we only have bsz `scores`, and thus bsz num tokens)
How have you been getting the conditional and unconditional logits in your experiments? Through two forward passes? (one with the conditional inputs and then a second with the unconditional ones). This batch size concatenation trick means you only have to run one forward pass, but with 2x the batch size
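For concreteness, a rough sketch of that batched variant (illustrative names; it assumes both prompts are already padded to the same length and only looks at the next-token logits):
```python
import torch
import torch.nn.functional as F

def cfg_next_token_scores(model, conditional_ids, unconditional_ids, guidance_scale=1.5):
    # one forward pass at 2x batch size: first half conditional, second half unconditional
    input_ids = torch.cat([conditional_ids, unconditional_ids], dim=0)
    next_token_logits = model(input_ids).logits[:, -1, :]
    bsz = conditional_ids.shape[0]
    cond_logits, uncond_logits = torch.split(next_token_logits, bsz, dim=0)
    # work in log-prob space so both branches share a common scale
    cond_logprobs = F.log_softmax(cond_logits, dim=-1)
    uncond_logprobs = F.log_softmax(uncond_logits, dim=-1)
    # CFG: move from the unconditional distribution towards the conditional one
    return uncond_logprobs + guidance_scale * (cond_logprobs - uncond_logprobs)
```
A real implementation would also thread through the attention mask and reuse the KV cache.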
The only pain point I see with getting this work in `transformers` is this batch size change as we go from our forward pass to our sampling loop. But we can add some logic to change the batch size on the fly if we're doing CFG (kind of like we did for MusicGen @gante - we need to trick the forward pass into using 2 * bsz, then the decoder ids to use bsz).
> _here is no need to train models to support CFG, it works out of the box_
Very cool indeed! Would be nice to have this as a standalone PR then as suggested<|||||>Thank you!
Yeah, if the cond and uncond prompts get the same next token sampled, it's good wrt our experiments! That's how you manage to loop around in .generate() to grow the continuation token per token while zigzagging between bsz and 2bsz, which I'm not 100% clear about. I totally see how it works for _one_ forward pass. Totally an implementation detail :) But apparently that's a new trick you had to implement for MusicGen too, so it makes sense that I'm not perfectly clear with that.
> Would be nice to have this as a standalone PR then as suggested
I'm happy to address the changes that have to be made to contribute this into the lib :)<|||||>Awesome - feel free to open a PR and tag myself and @gante! How do you do it without the 2x batch size trick? Do you do two forward passes? Just asking in case there's a simpler way we can integrate this!<|||||>(catching up on the paper and thinking a bit about usage experience -- will comment tomorrow with specific suggestions, but I think @Vermeille's suggested implementation above will be pretty close to a great user experience with minimal compute overhead)<|||||>here is an alternative implementation we used for some of our other experiments in the paper, for your consideration.
it was designed with huggingface's typical `*ModelFor*` code-style in mind, which just puts the base model in the `init` and extends the `forward()` method
https://github.com/Vermeille/lm-evaluation-harness-cfg/blob/cfg-alex/log_logits_on_p3.py#L30-L97<|||||>> Awesome - feel free to open a PR and tag myself and @gante! How do you do it without the 2x batch size trick? Do you do two forward passes? Just asking in case there's a simpler way we can integrate this!
Yes. Two consecutive passes. Which is indeed not that great wrt latency.<|||||>Would be great to have both the 2x batch size and two forward passes. Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines
(unless I missunderstood)<|||||>So given you already have this ( https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py#L1070 )
What do you want me to add / change in the PR?<|||||>> Would be great to have both the 2x batch size and two forward passes. Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines
>
> (unless I missunderstood)
This is correct: our focus was on getting the best results for a fixed amount of VRAM in our experiments. Hence it didn't occur to us to simply 2x the batch size. I agree that having this be togglable is a good idea and don't have any preference about the default.<|||||>The application to LLMs seems more of a situational sampling technique. With smaller conditional generative models like MusicGen, trained from-scratch with (explicit) condition dropout, it's practically part of the model. MusicGen isn't the first AR Transformer here, last year's DALL-E Mega [already did it](https://github.com/borisdayma/dalle-mini/blob/cb2cf37d07a83a92f37b5e1e0568efdb89e52812/src/dalle_mini/model/modeling.py#L1896) (itself inspired by https://twitter.com/RiversHaveWings/status/1478093658716966912 ), and in these models it's essential for performance.
So I'd expect "batch size 1 dramatically underutilizes available resources" to be the more common case.
> Since 2x batch size is better for throughput but the two forward passes are much better for VRAM usage, as the Paper outlines
Depending on model and hardware, "biggest batch size that fits" isn't necessarily optimal. On decent hardware, you can hit optimal compute utilisation before VRAM limits with batched inference in smaller models.
---
Normalizing the summands, then interpolating with the original scores is intriguing. If adding this to the CFG implementation that's now in Transformers is still being considered, this would be unexpected as default behavior though. In diffusion models, it's not applicable, and in sequence prediction, I've only seen people combine the unnormalized scores.<|||||>@drdaxxy
> Normalizing the summands, then interpolating with the original scores is intriguing. [...] In diffusion models, it's not applicable
This is a technique we borrowed from [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/abs/2305.08891) they call CFG Rescale. You can see [Imagen](https://arxiv.org/abs/2205.11487) doing some normalizing trick too.
> in sequence prediction, I've only seen people combine the unnormalized scores.
That's what we started with, and our results were a little bit worse.<|||||>This method is interesting to implement from an engineering and maintenance point of view!
The simplest approach would be to proceed as @Vermeille suggested: add a logits processor that calls a model forward pass for the unconditional part of the input. It would be a small self-contained piece of code, which means low long-term maintenance on our end. On the negative side, we have the 2x latency, which is more impactful than the extra VRAM (IMO).
If we go the 2x batch size route, we need to implement a function like `greedy_search` or `sample` -- a long function with non-negligible maintenance costs on our end. I believe this would be the best form of CFG sampling. However, we are severely constrained by our ability to keep the machine up and running at a good pace, so we can quickly add new features like CFG sampling :D
We have a plan to reorganize `generate` such that it is entirely made of small functions, making it much more composable. In the way I'm envisioning it, the 2x batch size version of CFG sampling would need a few extra lines of code, as opposed to a new large function.
How about we go with @Vermeille's proposal now, which will make CFG sampling available this week with low overhead on our end, and we implement the 2x batch size version after the `generate` refactor is complete? The new logits processor class would need a different name, as we already have `ClassifierFreeGuidanceLogitsProcessor` for the 2x batch size case (perhaps `UnbatchedClassifierFreeGuidanceLogitsProcessor`?)<|||||>Expect a PR in few hours.
Thank you for your interest and answers!<|||||>@gante There is a name clash for the arguments to .generate(). For this PR, unless instructed otherwise before I submit it, `cfg_scale` (mine) will live next to `guidance_scale` (MusicGen's). Idk how to resolve this competition, give that .generate() does not seem ready to use the 2x batch trick yet.<|||||>@Vermeille Adding more (and partially redundant) parameterization is highly undesirable, and we'd want to favor the more general case (yours). You also have the additional requirement of renormalizing the logits before applying your logits processor. Fortunately, we haven't officially released a `transformers` version with `MusicGen`, so we still have some wiggle room!
Let's try to fit everything together -- here's my suggestion:
- your logits processor uses the same parameter, `guidance_scale`, and it's triggered by its presence
- EDIT: this is not needed ~your logits processor is added after the normalization one (after [this if](https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/generation/utils.py#L948)), and the normalization step is now also triggered when `guidance_scale` is non-`None`~
- `ClassifierFreeGuidanceLogitsProcessor` (`MusicGen`'s) is removed from the function that prepares the logits processors, and we modify [MusicGen's generation function](https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/models/musicgen/modeling_musicgen.py#L1184) to handle its special processor: if `guidance_scale` is present when we generate with `MusicGen`, we pop it and manually add its CFG processor. I can take care of this part if you don't feel comfortable touching `MusicGen` :)
This way the two strategies can coexist, share the argument, and not clash ๐ค <|||||>Great! Thank you for the walkthrough.
On it.<|||||>Wait @gante, integrating it after the LogitNormalization is not something we want: all the prior processing (temperature, top_p, etc), will be used only on the conditional branch and not the unconditional, and will be executed _before_ computing the CFG logits. To be fair, we haven't tested this transformation order, but being asymmetrical like this scares me.
And this is even invalid. Top-k/p may not even select the same tokens in both branches, so that will misbehave.
I'm afraid I can't do that. CFG has to happen as one of the first logitprocessor<|||||>@Vermeille looking at your code example above, I didn't notice it already had normalization inside the processor. My bad -- feel free to add it as the 1st one :)
(will edit my comment above accordingly, for clarity)<|||||>So this is the code I got to get it working. It is just a hack but if you want to playwith it just use this code
```python3
from transformers import LogitsWarper
import torch
from torch.nn import functional as F
device = 'cpu'
if torch.has_cuda:
device = 'cuda'
class CFGLogits(LogitsWarper):
def __init__(self, cfg, inputs, model, verbose=True):
self.cfg = cfg
self.inputs = inputs
self.model = model
self.out = None
self.verbose = verbose
def __call__(self, input_ids, scores):
if self.cfg == 1:
return F.log_softmax(scores, dim=-1)
scores = F.log_softmax(scores, dim=-1)
if self.out is None:
self.out = self.model(self.inputs.to(device), use_cache=True)
else:
self.out = self.model(input_ids[:, -1:],
use_cache=True,
past_key_values=self.out.past_key_values)
unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
out = self.cfg * (scores - unconditional_logits) + unconditional_logits
out = F.log_softmax(out, dim=-1)
return 0.7 * out + 0.3 * scores
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LogitsProcessorList, TemperatureLogitsWarper, TopPLogitsWarper
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
prompt = "Salve, dispiculi."
inputs = tokenizer(prompt, return_tensors='pt')
model.to(device)
outputs = model.generate(
input_ids=inputs['input_ids'].to(device),
attention_mask=inputs['attention_mask'].to(device),
max_new_tokens=125,
logits_processor=LogitsProcessorList([
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(3, inputs['input_ids'], model),
TemperatureLogitsWarper(0.8),
TopPLogitsWarper(0.95),
]),
do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
This worked on my end<|||||>@grantCelley 's code works for me.
## With CFG (pythia 160m)

## Without CFG

<|||||>@grantCelley @chris-aeviator
The line `CFGLogits(3, inputs['input_ids'], model),` should really be `CFGLogits(3, inputs['input_ids'][:, -1:], model),`<|||||>thanks for pointing it out, my 30 was a typo, but your prev. code doesnt seem to mention the [:, -1:] ?!<|||||>@chris-aeviator notice how it uses `input_cfg`:
```python
# inputs_cfg usually is the last token of the prompt but there are
# possibilities of negative prompting that are explored in the paper
CFGLogits(cfg, inputs_cfg, model),
```<|||||>I will change it when I get home from work<|||||>@Vermeille I'm currently working on ways to implement logits-processing capability in extensions in oobabooga's `text-generation-webui` (originally improve the OpenAI extension to mirror OpenAI's logprobs and logit_bias) and came across this as part of my changes. (_I was trying to understand how to add LogitsProcessor's to Exllama._)
I'd love to implement this as an example of a plugin that adds a logits processor; is it okay if I use the code at the top of this issue for that?
You can see the whole change set so far (_I need to split it up a bit and make individual pull requests, and I'll make sure to give clear credit when I do!_) here: https://github.com/oobabooga/text-generation-webui/pull/3001/files and the CFG plugin is at the top. The simplest idea is you'd add it with `--extensions cfg` on the command line. Probably with some warning that it's going to make things slower, but hopefully better.
I'll add some configuration (the cfg hard codes 1.5 right now, as that appeared to be the best point in the paper), but mostly I want to make sure I'm not stepping on any toes.<|||||>@cyberfox Thank you for your interest! I see your PR reuses as is the code I initially submitted. I strongly advise you to update to the updated code (edit in first post) to have a cleaner and better experience.<|||||>> @chris-aeviator notice how it uses `input_cfg`:
>
> ```python
> # inputs_cfg usually is the last token of the prompt but there are
> # possibilities of negative prompting that are explored in the paper
> CFGLogits(cfg, inputs_cfg, model),
> ```
The results start to become a bit garbage when I run with `inputs['input_ids'][:, -1:]`
This is a fine tuned Pythia 12b
### vanilla/ no CFG:
> Beetle larvae are hard to love because they eat their way through many of our favorite plants. But we should love them anywayโat least for one thing. Itโs called โkilling with a smile,โ and itโs what these little guys do to protect us from predators. Beetle larvae protect themselves from predators by producing a deadly toxin after feeding on Commiphora tree. In Kenya, for example, where the leaf beetle is found, that means they harvest the toxic protein called mimeticotoxin from the leaves and apply it to an arrowhead to kill any large animals that might otherwise eat them. The toxin on a single arrow could kill an antelope.
### CFG w. `inputs['input_ids']` (wrong usage)
> One might think that one species of insects would be anotherโs food source, but this is true for many organisms. For example, mosquitoes are eaten by birds and bats; moths provide food for hummingbirds; and so forth. In fact, some insects have evolved such self-protection mechanisms to ward off their own destruction by other predators. One remarkable case involves leaf beetles (Chrysomelidae) of southern Africa. These pests of plants like acacia trees feed exclusively on leaves of the Commiphora tree. They produce toxins only when youngโand only enough to kill or repel the smallest animals likely to eat them. Thus, they develop a defense mechanism capable of killing an antelope size animal.
To deter predators, lar
### CFG w. `inputs['input_ids'][:, -1:]` (correct usage)
> Leaf beetles doncrautaiaeare North African desert insects that eat right through beetle doorsโleaves damaged or preparing to be browsed upon by Commiphora matabueillevis goatsโ killing off nearby predators such as antelopes and Cape buffalo. Sure enough, South African Bushmen ancient warriors would aim and shoot these poisoned arrows at grazing animals to defeat themsingleadultantelope.
<|||||>This output is typical of a guidance strength that's too high. You can either reduce it or reduce the rescale_factor. Try cfg 1.5 and if it's still not there, 1.25. Then you can try ramping the guidance strength up while reducing the rescale_factor.<|||||>@chris-aeviator I very quickly tuned the parameters in the example. It generated:
> Today a dragon flew over Paris, France, and flew around London, England.
>
> And he flew into the heavens, to be welcomed, and to be welcomed by his own people. This dragon was a master of disguise, and used his powers to give a great signal to the earth to let him pass through it. It was the first time the dragon had ever been seen flying around the world, and it was the first time he had ever used the magic to fly.
>
> One day the dragon is sitting on the ground, on the ground, with his feet bare. He is about to say something to him, and he is very interested in his own
We can clearly see the effect of the negative prompt. Don't hesitate to fiddle with both values though, they're not set in stone.<|||||>Here is the updated code I got.
```python3
from transformers import LogitsWarper
import torch
from torch.nn import functional as F

device = 'cpu'
if torch.has_cuda:
    device = 'cuda'


class CFGLogits(LogitsWarper):
    def __init__(self, cfg, inputs, model, verbose=True):
        self.cfg = cfg
        self.inputs = inputs[:, -1:]
        self.model = model
        self.out = None
        self.verbose = verbose

    def __call__(self, input_ids, scores):
        if self.cfg == 1:
            return F.log_softmax(scores, dim=-1)
        scores = F.log_softmax(scores, dim=-1)
        if self.out is None:
            self.out = self.model(self.inputs.to(device), use_cache=True)
        else:
            self.out = self.model(input_ids[:, -1:],
                                  use_cache=True,
                                  past_key_values=self.out.past_key_values)
        unconditional_logits = F.log_softmax(self.out.logits[0][-1:], dim=-1)
        out = self.cfg * (scores - unconditional_logits) + unconditional_logits
        out = F.log_softmax(out, dim=-1)
        return 0.7 * out + 0.3 * scores


from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LogitsProcessorList, TemperatureLogitsWarper, TopPLogitsWarper

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")

# This is the prompt to be the actual auto complete
# Means "Hello, Students. Now"
prompt = "Salve, dispiculi. Nunc"
# This is what it will go towards
cfg_prompt = "Latin"
inputs = tokenizer(prompt, return_tensors='pt')
cfg_inputs = tokenizer(cfg_prompt, return_tensors='pt')
model.to(device)
outputs = model.generate(
    input_ids=inputs['input_ids'].to(device),
    attention_mask=inputs['attention_mask'].to(device),
    max_new_tokens=125,
    logits_processor=LogitsProcessorList([
        # inputs_cfg usually is the last token of the prompt but there are
        # possibilities of negative prompting that are explored in the paper
        CFGLogits(1.5, cfg_inputs['input_ids'], model),
        TemperatureLogitsWarper(0.8),
        TopPLogitsWarper(0.95),
    ]),
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
The output is:
```text
Salve, dispiculi. Nunc conciliat est.
Lacchius autem erat et. fructus.
Ut esse,
Quod
.
Hecce,
In quem
um
et
o
de
quattu
.
Merci,
Ai,
Besci sunt
Cum
um
in
.
Dic quam
.
In
.
Merci,
Ai,
Bos.
```<|||||>I just saw the updated code from @Vermeille <|||||>@grantCelley Pythia models are trained on English. I'm really confused by what you're trying to achieve there.<|||||>I was just trying to get it to work. Also it does continue in latin for a little which is interesting then goes into a romance language. But it just showed how to do it. I didn't realize that you updated the original codeblock.<|||||>@Vermeille
> This output is typical of a guidance strength that's too high. You can either reduce it or reduce the rescale_factor. Try cfg 1.5 and if it's still not there, 1.25. Then you can try ramping the guidance strength up while reducing the rescale_factor.
OK, this helped. Generation for the same number of tokens now takes longer; is this expected?
### Vanilla / no CFG, 512 token / 3 min
> Fungus-growing ants (Myrmecocystus ants) farm their own food, feed larvae and queen with it, and maintain the garden through vigorous harvesting and composting practices. Like farmers who grow vegetables, they plant crops, harvest, store, and distribute the fruits of the harvest to themselves and their dependents.
### CFG, neg_token = last token, cfg_scale=1.5, 512 token / 5 min
> Leaf-cutting ants (or โantlion antsโ) are found in tropical regions around the world. They farm fungi to eat as adults and feed their larvae. Fungi provide food not only for adult ants but also for the gardens that they maintain across vast distances in the form of fungus farms. These farms can contain tens of thousands of acres of fungal colonies. The largest known leaf-cutting ant fungus farm has over 20,000 colonies with a total area of nearly 2 million square meters (2 hectares).
### CFG, neg_token = last token, cfg_scale=1.25, 512 token / 5 min
> Leaf-cutting ants (or โantlion antsโ) are found in tropical regions around the world, where they farm fungus to feed their young. Fungus farming has been observed in several ant species, including the Acromyrmex octospinosus ant, endemic to South and Central America and the southern United States. Farmers remove leaves from native plants and chew them into small pieces, which they place directly onto the soil around newly established colonies. The leaf fragments provide food for the larvae inside the nest as well as for the colonyโs queen.
<|||||>@grantCelley shouldnt a negative prompt of 'Latin' prohibit latin output? Do I misunderstand the concept of negative prompts?<|||||>@chris-aeviator
> Ok this helped, generation for the same amount of tokens takes longer now, is this expected?
Yes, there are two forward passes per token now.
> @grantCelley shouldnt a negative prompt of 'Latin' prohibit latin output? Do I misunderstand the concept of negative prompts?
You are correct<|||||>> @grantCelley shouldnt a negative prompt of 'Latin' prohibit latin output? Do I misunderstand the concept of negative prompts?
It is hard to say in concrete terms what the negative prompt does. I had it generate a poem and specified the negative prompt as "happy", and it used somewhat gloomy language (and vice versa), so it "does" work, but beyond that I think only further experimentation will tell.
It does affect the output, but not too dramatically.
In the paper they put negative prompt the system prompt the model was trained with... not sure about the reasoning for that. <|||||>> It is hard to say what negative prompt does in certain terms. I had it generate a poem and specified negative prompt as happy and it used somehow gloomy language and vice versa - so it "does" work, but beyond that I think only further experimentation will tell.
Yes. Neg prompts in language are somewhat harder to pull off than in vision. Especially because the continuation should be kinda grammatical with the neg prompt too. Not gonna lie, we were under time constraints and having a clear neg prompt methodology was unsolved in that time frame. But we're working on it, and the example in the first post works.
> It does affect the output, but not too dramatically.
Hard to say yet, but it _should_ depend on the guidance strength (decrease the rescale_factor as you increase the guidance strength)
> In the paper they put negative prompt the system prompt the model was trained with... not sure about the reasoning for that.
from the paper:
> We set the negative prompt to be the default system-prompt for the models we use [...] This approach not only makes the sampling more prompt-aware in general, but **directly emphasizes the difference between our system-prompt and the modelโs default system-prompt**.
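As a rough sketch of what the quoted approach would look like with the processor from the top of this thread (the system-prompt text below is just a placeholder, not the actual default prompt of any model):

```python
# Use the model's default system prompt as the negative/"unconditional" input for CFGLogits.
neg_prompt = "You are a helpful assistant."  # hypothetical default system prompt
neg_ids = tokenizer(neg_prompt, return_tensors="pt")["input_ids"]

logits_processor = LogitsProcessorList([
    CFGLogits(1.5, neg_ids, model),  # guidance strength 1.5, as in the paper
])
```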
<|||||>Thanks for explanation.
I can't wait to see more about this. It is really fascinating trying to see how the code works - it is quite something!
A little question for my own curiosity. (referring to the code on top of the page)
In case I'm using it with a full negative prompt, wouldn't be better in the very first round (when self.out==None) to do just
out = self.guidance_scale * (scores - unconditional_logits)
(not to add the + unconditional_logits - which are logits of the negative prompt in the first round? Then of course continue normally as t is in next rounds.
Or it doesn't really matter? Or I don't understand the nuance of the code (which I of course don't)?
<|||||>> In case I'm using it with a full negative prompt, wouldn't be better in the very first round (when self.out==None) to do just out = self.guidance_scale * (scores - unconditional_logits) (not to add the + unconditional_logits - which are logits of the negative prompt in the first round? Then of course continue normally as t is in next rounds. Or it doesn't really matter? Or I don't understand the nuance of the code (which I of course don't)?
Because the `key_values` are passed back in each time, the negative side of the prompting (the second `self.out`) will **_always_** contain the prediction based on the entire negative context so far. Even if you did it from the second token generation, you'd still be adding in the negative prompt's effect on the generated 'unconditional' logits...
This implies to me that the two things should be separate concepts, with separate implementations...but if you (reasonably) wanted to use both focus-on-first-prompt, and negative prompting, it would be compute expensive to do them separately.
That said, I do feel a little like the 'adding them back in' is a fudge-factor, trying to reduce the effect slightly. But I don't understand the math symbology in the original paper very well, so I'm very cautious about that. |
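For reference, a minimal, self-contained restatement of the quantities being discussed (dummy tensors only; this does not change the method, it just names the variants):

```python
import torch

cfg = 1.5
scores = torch.log_softmax(torch.randn(1, 10), dim=-1)                # conditional log-probs
unconditional_logits = torch.log_softmax(torch.randn(1, 10), dim=-1)  # negative-prompt log-probs

guided  = cfg * (scores - unconditional_logits) + unconditional_logits  # formula used in the snippet above
pure    = cfg * (scores - unconditional_logits)                         # variant asked about for the first step
blended = 0.7 * guided + 0.3 * scores                                   # the "adding them back in" blend
```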
transformers | 24,535 | closed | Finishing tidying keys to ignore on load | # What does this PR do?
A couple of tests are failing on main due to #24505 (and PRs merged between the start of the work on this branch and its merge). This should fix all of them.
Note: will merge as soon as CI is green to make main green again. | 06-28-2023 01:23:40 | 06-28-2023 01:23:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24535). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,534 | closed | `AutoModelForCausalLM.from_config` doesn't handle `revision="79ec93c"` correctly | ### System Info
To reproduce:
```
from transformers import AutoModelForCausalLM, AutoConfig
config = AutoConfig.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True, revision="79ec93c")
config.n_layers = 0
config.d_model = 16
config.n_heads = 4
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
```
and the error is:
```
In [21]: model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[21], line 1
----> 1 model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, revision = "79ec93c")
File /opt/conda/envs/torch2.0.1/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:422, in _BaseAutoModelClass.from_config(cls, config, *
*kwargs)
420 model_class = get_class_from_dynamic_module(class_ref, repo_id, **kwargs)
421 _ = kwargs.pop("code_revision", None)
--> 422 return model_class._from_config(config, **kwargs)
423 elif type(config) in cls._model_mapping.keys():
424 model_class = _get_model_class(config, cls._model_mapping)
File /opt/conda/envs/torch2.0.1/lib/python3.10/site-packages/transformers/modeling_utils.py:1143, in PreTrainedModel._from_config(cls, config, **kwargs)
1141 model = cls(config, **kwargs)
1142 else:
-> 1143 model = cls(config, **kwargs)
1145 # restore default dtype if it was modified
1146 if dtype_orig is not None:
TypeError: MPTForCausalLM.__init__() got an unexpected keyword argument 'revision'
```
### Who can help?
Probably @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Provided in the issue message
### Expected behavior
Loading the model not crashing | 06-27-2023 23:56:18 | 06-27-2023 23:56:18 | You should pass `code_revision` and not `revision` in your second call: you are loading the model code for that revision, not the weights.<|||||>Works. Thank you. |
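For reference, the corrected second call from the comment above would look roughly like this (same config object as in the reproduction):

```python
# `code_revision` selects the revision of the remote modeling code; `revision`
# only matters for weights, which are not downloaded by `from_config`.
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, code_revision="79ec93c")
```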
transformers | 24,533 | closed | [BUG] Protobuf not being correctly installed | ### System Info
`transformers==4.30.2`
Protobuf is not being installed along with transformers, even though it's specified in `setup.py`.
This can be an issue if the environment already has the latest version of protobuf installed, which is not compatible with transformers.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install transformers -U
pip freeze | grep protobuf
### Expected behavior
protobuf with correct pinned version should be installed | 06-27-2023 21:01:54 | 06-27-2023 21:01:54 | Transformers is only compatible with protobuf<4.0 because packages we depend on (sentencepiece if I recall correctly) do not support protobuf 4 yet. There is little we can do on our side until they support it.
cc @ydshieh in case I said something wrong.<|||||>Hi @yinweisu
You can uninstall `protobuf` and reinstall it to get `3.20.3`.
Regarding protobuf 4, unfortunately, we get errors (see below) for sentencepiece based tokenizers. I am also told that `google/sentencepiece` doesn't support protobuf 4 yet, but from its GitHub page, I can't see this information. I plan to open an issue to ask questions there.
```bash
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```<|||||>@yinweisu
Maybe you want/could/love to give [this issue](https://github.com/google/sentencepiece/issues/889) a ๐ ๐ค ๐ <|||||>Thanks! I know you can reinstall it. The issue is that we are an open source project, which depends on transformers and other libraries. Other libraries will install newer version of protobuf, which doesn't work with transformers. This means users consume our package will run into protobuf errors. Ideally, transformers should correctly pin protobuf and install the correct version while doing `pip install`. The fact is that, transformers does pin it in the setup.py, though it's not taking effect<|||||>> Transformers is only compatible with protobuf<4.0 because packages we depend on (sentencepiece if I recall correctly) do not support protobuf 4 yet. There is little we can do on our side until they support it.
>
> cc @ydshieh in case I said something wrong.
@sgugger I understand it doesn't work with protobuf>4.0. But transformers should correctly install protobuf < 4.0 as its dependency, which is not what's happening right now<|||||>@yinweisu
I probably need to check what happens if a higher version is already installed when a user installs `transformers`.
Could you show us your way of installing `transformers`? ๐ Thanks.<|||||>It will just use the higher version. We've got this problem in our own package installation.
You should be easily reproduce it via
```bash
pip install -U protobuf
pip install transformers
pip freeze | grep protobuf
```<|||||>@yinweisu After looking `setup.py`, at the end in that file, the `install_requires` only contains a few libraries:
https://github.com/huggingface/transformers/blob/66954ea25e342fd451c26ec1c295da0b8692086b/setup.py#L415-L426
and `protobuf` is not in this list. It's why `protobuf` is not checked and updated in the case you mentioned.
From README:
> Then, you will need to install at least one of Flax, PyTorch or TensorFlow.
I believe `install_requires` is set to be minimal as it is in current version. But @sgugger knows better than me regarding this and may have some comments.
<|||||>protobuf is not a core dependency of Transformers, it is only installed through other packages (most likely sentencepiece). I don't think any of the packages here have a dependency on protobuf but I may be msitaken.<|||||>https://github.com/huggingface/transformers/blob/main/setup.py#L145
Protobuf is listed here. So if the library requires a specific version of dependency to work, it should be installed along, right? <|||||>This list only contains soft dependencies. This one is used when you do `pip install transformers[sentencepiece]` for instance.<|||||>So I only install transformers, but I can still run into this problem if protobuf is either already presented in my env or being installed along with other packages. If transformers requires a specific version of protobuf to function, it should pin that version and install it.<|||||>As mentioned in
> You should install ๐ค Transformers in a (new) virtual environment.
And you can simply install as `pip install transformers[dev]` or something smaller `pip install transformers["dev-torch"]`.<|||||>@ydshieh I really don't want to argue with you and I appreciate your fast response, but imagine such an use case:
Our package AutoGluon, depends on `transformers` and package `foo`, both of which are in our setup.py.
Package `foo` has `protobuf` in its dependencies as, let's say, `protobuf>=3.15.3`, while transformers doesn't pin `protobuf`.
When a user creates a new virtual environment and does a pip install autogluon, it will install `foo` and `transformers`, which will then install the latest `protobuf` because the latest `protobuf` satisfies `protobuf>=3.15.3`.
You see the problem here? It's not about if it's a fresh env or not.
The only way to solve this as AutoGluon for now is that we had to manually pin `protobuf` by looking at both requirements from `transformers `and `foo` and figure out what's the version that satisfy both even if its not a direct dependency for us. This is not maintainable as we need to do this everytime `foo` or `transformers` updated their dependency on `protobuf`. Then imagine what if there are multiple packages in our dependency that requires `protobuf`. If `transformer` pin `protobuf`, pip will auto-resolve the version<|||||>> if its not a direct dependency for us.
It's not a direct dependency for `transformers` neither. It's only for the usage of the tokenizers based on `sentencepiece`.
It's not practical for us to put all packages in the `setup.py` as hard dependency.
> imagine what if there are multiple packages in our dependency that requires protobuf.
Let's not overthink and just make your package `AutoGluon` work for now. Maintenance is not an easy task, and if there has something to be updated in the future, that's it.<|||||>@yinweisu
FYI: after #24599, we no longer pin `protobuf` in `transformers`. |
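To illustrate the situation described above, a hypothetical downstream `setup.py` fragment could look like this (package names and version bounds are made up for the example; the protobuf pin has to be kept in sync with transformers' soft requirement by hand):

```python
# Hypothetical install_requires for a package depending on both transformers and
# another protobuf user; pip then resolves a protobuf version both accept.
install_requires = [
    "transformers>=4.30,<5",
    "foo",                   # hypothetical package requiring protobuf>=3.15.3
    "protobuf>=3.15.3,<4",   # manual pin maintained by the downstream package
]
```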
transformers | 24,532 | closed | Allow backbones not in backbones_supported - Maskformer Mask2Former | # What does this PR do?
Updates configuration creation to allow backbone configs to be passed in which aren't listed in `backbones_supported` - downgrading the exception raised to a warning.
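In other words, the check goes from raising to warning; a rough sketch of the idea (not the literal diff, names are illustrative):

```python
import warnings

def check_backbone(backbone_config, backbones_supported):
    # Previously an unsupported backbone raised a ValueError here; after this PR
    # it only warns, so compatible-but-unlisted backbones can still be used.
    if backbone_config.model_type not in backbones_supported:
        warnings.warn(
            f"Backbone {backbone_config.model_type} is not in `backbones_supported`; "
            "it may not work as expected."
        )
```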
Tested with:
```python
from transformers import (
    Mask2FormerConfig,
    MaskFormerConfig,
    Mask2FormerModel,
    MaskFormerModel,
    FocalNetConfig,
)
# Test with a backbone not officially supported
backbone_config = FocalNetConfig(out_indices=(-2, -1))
maskformer_config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerModel(maskformer_config)
mask2former_config = Mask2FormerConfig(backbone_config=backbone_config)
model = Mask2FormerModel(mask2former_config)
```
Fixes #24244
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-27-2023 19:04:18 | 06-27-2023 19:04:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,531 | closed | Document QA Pipeline Suddenly Down? | ### System Info
Transformers 4.29.2, MacOS, Python 3.9.6 64-bit,
I just have this piece of code:
```
query_pipeline = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa"
)
```
and on this Python version it gives me:
```
RuntimeError: Failed to import transformers.models.layoutlm.modeling_tf_layoutlm because of the
following error (look up to see its traceback):
No module named 'keras.engine'
```
when it was working perfectly yesterday.
On Version: 4.30.2, MacOS, Python 3.10.12 64-bit, I get a different error somehow
```
File "/opt/homebrew/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 988, in pipeline
return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/transformers/pipelines/document_question_answering.py", line 145, in __init__
self.check_model_type(MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING)
NameError: name 'MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING' is not defined
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
Paste the code snippet and run the Python script.
### Expected behavior
I expect the pipeline to work properly as intended. | 06-27-2023 18:27:00 | 06-27-2023 18:27:00 | Please follow the issue template and paste the result of `transformers-cli env`. It looks like you don't have PyTorch installed in your environment but it's hard to be sure without that. The document QA pipeline requires torch, it doesn't have an implementation in TensorFlow.<|||||>Resolved!! Thank you ๐ |
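For anyone hitting the same error: assuming the diagnosis above is right, installing PyTorch and forcing the PyTorch backend avoids the TensorFlow import path entirely, roughly:

```python
# pip install torch
from transformers import pipeline

query_pipeline = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
    framework="pt",  # this pipeline only has a PyTorch implementation
)
```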
transformers | 24,530 | closed | Fix Typo | # What does this PR do?
Fixed wrong annotation
- `(seq_len, batch, embed_dim)` -> `(batch, seq_len, embed_dim)`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
| 06-27-2023 18:25:09 | 06-27-2023 18:25:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,529 | closed | Fixed OwlViTModel inplace operations | # What does this PR do?
This PR replaces the "/=" operator in OwlViTModel which causes an error in the backward pass with the non-inplace version.
Fixes #24525
#24525
## Who can review?
@ydshieh
| 06-27-2023 16:37:36 | 06-27-2023 16:37:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@pasqualedem Could you apply the commit suggestions, then we can merge ๐ Thanks.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24529). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,528 | closed | Add `finetuned_from` property in the autogenerated model card | # What does this PR do?
We already extract the info but it wasn't properly put in the metadata. This PR adds it.
cc @julien-c @osanseviero for the exact name of the tag to use. | 06-27-2023 15:56:36 | 06-27-2023 15:56:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>What did you have in mind exactly? The only text added is
```
"This model is a fine-tuned version of f" [{self.finetuned_from}](https://huggingface.co/{self.finetuned_from}) on "
```
which doesn't contain the `finetuned_from` tag.<|||||>@sgugger i would rename the variable everywhere<|||||>That's a breaking change in Transformers for cosmetic reasons only, so a big no no from my side.<|||||>I would go ahead with this PR as-is :fire: |
transformers | 24,527 | closed | Update `huggingface_hub` commit sha | # What does this PR do?
To use `timeout=10.0` in [this commit](https://github.com/huggingface/huggingface_hub/commit/e4a419bf6bbaa95d14704cc781d3e81a49cef413).
The goal is to make sure the fix from infra team really works. But once verified, we can keep it as `10.0`. | 06-27-2023 15:29:14 | 06-27-2023 15:29:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24527). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,526 | closed | Find module name in an OS-agnostic fashion | # What does this PR do?
Splitting on `os.path.sep` to get the filename is not fully-compatible on Windows as the OS recognizes a local path written as `folder/file.ext`, but `os.path.sep` is `\\`. Hopefully this way is better.
Fixes #24517 | 06-27-2023 15:27:04 | 06-27-2023 15:27:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>https://github.com/huggingface/transformers/blob/8a968e572c335d82c87dc3708fe0a9da79664d64/src/transformers/dynamic_module_utils.py#L274
This line should be change also~<|||||>Good point, it's added. |
transformers | 24,525 | closed | Backward pass error during OWL-ViT finetuning | ### System Info
Google Colab
### Who can help?
@ArthurZucker
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1GfspAHLGTMmzfNShAVebzM04s32q1LKb?usp=sharing
### Expected behavior
I was trying to fine-tune OWL-ViT and came across this backward-pass error. I investigated and found that the cause is the "/=" operator, which performs an in-place operation (see the sketch at the end of this issue).
At lines:
- https://github.com/huggingface/transformers/blob/4e8929dcbb9040f54f52d898a600260f338ef54f/src/transformers/models/owlvit/modeling_owlvit.py#L1296
- https://github.com/huggingface/transformers/blob/4e8929dcbb9040f54f52d898a600260f338ef54f/src/transformers/models/owlvit/modeling_owlvit.py#L1297 | 06-27-2023 14:54:03 | 06-27-2023 14:54:03 | @pasqualedem
Would you like to open a PR to fix it ? ๐ค |
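A minimal, self-contained illustration of the in-place vs. out-of-place difference, independent of the OWL-ViT code (tensor names are placeholders):

```python
import torch

x = torch.randn(2, 4, requires_grad=True)
y = x * 2.0

# An in-place `y /= y.norm(dim=-1, keepdim=True)` would modify a tensor that the
# norm's backward pass still needs, which raises a RuntimeError during backward.
# The out-of-place version keeps autograd happy:
y = y / y.norm(dim=-1, keepdim=True)
y.sum().backward()
```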
transformers | 24,524 | open | model.generate single CPU core bottleneck | ### System Info
I'm running inference on a GPU EC2 instance using CUDA. After doing a little profiling I noticed the model.generate method was the clear bottleneck. Upon closer inspection, running htop showed that during this method call only a single CPU core is used and is maxed out at 100%. I've made sure all of my weights, biases and activations are on the GPU. Running nvidia-smi shows the proper amount of VRAM usage. My question is, is there a way to speed this method up using multiple CPU cores? I haven't dug deep into the method to see exactly what is causing the issue yet. Just curious if there's something obvious I'm missing.
(Basic sample code)
```
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
import torch

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)

PROMPT = f"""### Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
### Input: "What is the difference between a llama and a vicuna?"
### Response:"""

inputs = tokenizer(
    PROMPT,
    return_tensors="pt",
)
input_ids = inputs["input_ids"].cuda()

generation_config = GenerationConfig(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.15,
)

generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
for s in generation_output.sequences:
    print(tokenizer.decode(s))
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run this method with any model and profile the cpu cores to see it's only using a single core:
```python
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=128,
)
```
### Expected behavior
Multiprocessing to help balance the load. | 06-27-2023 14:46:16 | 06-27-2023 14:46:16 | cc @gante <|||||>Hey @dhcracchiolo ๐
Good timing on your question, we are precisely exploring what we can do to speed up LLaMA within python + PyTorch + `.generate` API, in [this repo](https://github.com/fxmarty/accelerated-pytorch-transformers-generation).
In a nutshell, the model is CPU bound, as is most PyTorch code for models of this size. If you install the repo above, run `python run_llama.py --model huggingface/llama-7b --preallocate --profile`, and check the profiler results on TensorBoard, you'll see that the CPU bottleneck is in orchestrating the PyTorch execution on GPU. This is why you see high-speed inference packages to go straight into CUDA, to get rid of this overhead.
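For anyone who wants to see this locally, a rough profiling sketch (assuming the `model` and `input_ids` from the snippet in the issue) could be:

```python
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model.generate(input_ids=input_ids, max_new_tokens=32)

# For models of this size, much of the wall time tends to show up as CPU-side
# op scheduling rather than the GPU kernels themselves.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))
```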
Feel free to use the code from the repo above, it should get you ~50% speedup :) It doesn't have all the options that we have in `.generate`, though. If you have concrete suggestions on how to further speed it up, we're all ears ๐ <|||||>> Hey @dhcracchiolo ๐
>
> Good timing on your question, we are precisely exploring what we can do to speed up LLaMA within python + PyTorch + `.generate` API, in [this repo](https://github.com/fxmarty/accelerated-pytorch-transformers-generation).
>
> In a nutshell, the model is CPU bound, as is most PyTorch code for models of this size. If you install the repo above, run `python run_llama.py --model huggingface/llama-7b --preallocate --profile`, and check the profiler results on TensorBoard, you'll see that the CPU bottleneck is in orchestrating the PyTorch execution on GPU. This is why you see high-speed inference packages to go straight into CUDA, to get rid of this overhead.
>
> Feel free to use the code from the repo above, it should get you ~50% speedup :) It doesn't have all the options that we have in `.generate`, though. If you have concrete suggestions on how to further speed it up, we're all ears ๐
Thanks for the response! Excellent! I'll check the repo out and report back. ๐ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,523 | closed | Falcon port | This PR adds the Falcon model to the main library. It's still a work in progress, and integration tests / model checkpoints still need to be added!
TODO:
- [x] Migrate custom code checkpoints to the new architecture
- [x] Confirm tokenizer can be loaded correctly with `AutoTokenizer` for all checkpoints
- [x] Upload a ported 1B model.
- [x] Add integration tests for the 1B model
- [x] Add support for `output_attention`
- [x] Add support for `output_hidden_states`
- [x] Ensure all tests pass
- [x] Ensure any other issues addressed (see comments on Slack)
- [x] Address review comments
- [x] Ensure tokenizers are ported correctly (token_type_ids issue)
- [x] Upload library ports of all Falcon checkpoints and migrate/redirect to them | 06-27-2023 14:37:29 | 06-27-2023 14:37:29 | Update: Slightly delayed because there are some breaking architecture changes between the different Falcon checkpoints - I'm merging the various layers and using config variables to switch between the behaviours.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>feel free to ping me for a review anytime! <|||||>Hi @Rocketknight1
Would this PR allow to export falcon to onnx?
As today using the latest release (4.30.1):
```
Traceback (most recent call last):
File "hf2onnx.py", line 99, in <module>
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature= feature)
File "site-packages/transformers/onnx/features.py", line 728, in check_supported_model_or_raise
model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)
File "site-packages/transformers/onnx/features.py", line 575, in get_supported_features_for_model_type
raise KeyError(
KeyError: "refinedwebmodel is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support refinedwebmodel please propose a PR or open up an issue."
```
best<|||||>Hey all! The main modeling code should be ready for final review now. Thanks @ArthurZucker for the comprehensive review - it was really helpful! There's one bug left that's causing a failing test, but I think it's a one-line fix that I can track down tomorrow. This may also be the issue that's causing assisted generation to fail, but those tests are currently skipped.
I also need to figure out porting the tokenizer, and then once this is merged I'll need to prepare the repos to transition over to the library code.
cc @amyeroberts for core maintainer review! |
transformers | 24,522 | closed | LLamaTokenizer padding_side='left' | Trying to Fine-tune the LLama model I encountered that the LLama tokenizer adds padding tokens on the left side by default. I expect the padding to be added to the right side. Is this a bug or supposed to be like this?
https://github.com/huggingface/transformers/blob/6fe8d198e3e30b709a7d233240f273804b886dcc/src/transformers/models/llama/tokenization_llama_fast.py#L85C5-L85C5 | 06-27-2023 13:07:52 | 06-27-2023 13:07:52 | Please ask questions like this on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.
All decoder models use padding on the left side since they can't properly generate the next token after a sentence if it ends with pad tokens. Llama is a decoder model, so follows the same rule. |
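For completeness, a small sketch of overriding the default when you need right padding for training (the model name is just an example; `padding_side` is the standard tokenizer attribute):

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.padding_side = "right"  # e.g. for padded training batches
# ...
tokenizer.padding_side = "left"   # switch back before batched generation
```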
transformers | 24,521 | closed | Fix LR scheduler based on bs from auto bs finder | # What does this PR do?
This PR presents an alternative to https://github.com/huggingface/transformers/pull/23038. Since we can't realistically modify the schedulers in Accelerate, this sets the correct "total batch size" based on the new batch size being used, which trickles down to the creation of the scheduler via max steps if applicable. We can't go further than this, however, as modifying the scheduler further would amount to auto-gradient accumulation, which, while good, should be done after this :)
(and might be as simple as just modifying `self.accelerator.gradient_accumulation_steps`
| 06-27-2023 13:06:39 | 06-27-2023 13:06:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @mzamini92<|||||>> cc @mzamini92
yup that works. ๐ <|||||>@muellerzr I think there might be a problem with this PR, after building from source, the following snippet gives me a learning rate of 0 before the end of training:
```python
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TrainingArguments, Trainer
dataset = load_dataset("yelp_review_full", split="train[99%:]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

tokenized_datasets = dataset.map(tokenize_function, batched=True).shuffle(seed=42)
tokenized_datasets = tokenized_datasets.train_test_split(test_size=0.1)
small_train_dataset = tokenized_datasets["train"]
small_eval_dataset = tokenized_datasets["test"]
metric = evaluate.load("accuracy")

training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch", report_to=[], num_train_epochs=3, auto_find_batch_size=True,
                                  lr_scheduler_type="linear", per_device_train_batch_size=1024, logging_strategy="steps", logging_steps=5, fp16=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```<|||||>@thomas-schillaci what version of Accelerate are you using? <|||||>@muellerzr `0.20.3` using `accelerate launch` and the following config:
```
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 3
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```<|||||>Thank you so much @thomas-schillaci! https://github.com/huggingface/transformers/pull/24758 should fix it right up, I verified it was being decayed properly and in-turn, a new lr scheduler actually being made. You can toy with `pip install git+https://github.com/huggingface/transformers@fix-train-bs` to try :) |
transformers | 24,520 | closed | set model to training mode before accelerate.prepare | - trainer: @sgugger
| 06-27-2023 12:58:33 | 06-27-2023 12:58:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,519 | open | I have a question in the source code called modeling_llama.py | ### System Info
@ArthurZucker @gante
path : "src/transformers/models/llama/modeling_llama.py"
Lines 85 and 232 of this code contain float32 as a constant.
I think, it looks like a bug. Or is there another reason?
Thanks.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class LlamaRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        """
        LlamaRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return (self.weight * hidden_states).to(input_dtype)
```
======
```python
class LlamaAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: LlamaConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.max_position_embeddings = config.max_position_embeddings
        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
        self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value[0].shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
        # [bsz, nh, t, hd]

        if past_key_value is not None:
            # reuse k, v, self_attention
            key_states = torch.cat([past_key_value[0], key_states], dim=2)
            value_states = torch.cat([past_key_value[1], value_states], dim=2)

        past_key_value = (key_states, value_states) if use_cache else None

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.max(
                attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
            )

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2)
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value
```
### Expected behavior
may be float32 --> dtype | 06-27-2023 12:15:04 | 06-27-2023 12:15:04 | Hey @park1200656 ๐
Some operations degrade the quality of the outputs if not performed at a certain minimum precision. The softmax in the attention layer and the variance accumulation in RMSNorm performed in FP32 are two examples of that :) Related read: [this issue](https://github.com/huggingface/transformers/pull/17437)
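A tiny, self-contained illustration of the precision argument (the exact numbers vary from run to run; the point is only the reduced mantissa precision of bf16):

```python
import torch

h = torch.randn(4096)
print(h.float().pow(2).mean())             # RMSNorm-style variance accumulated in fp32
print(h.to(torch.bfloat16).pow(2).mean())  # same statistic in bf16, less exact

scores = torch.randn(4096) * 10
print(torch.softmax(scores.float(), dim=-1).sum())
print(torch.softmax(scores.to(torch.bfloat16), dim=-1).sum())  # sums to ~1 but coarser
```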
_________________________________________
Following our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค If this is your first issue with us, check [this guide](https://huggingface.co/course/chapter8/5?fw=pt).<|||||>I had the same question [yesterday](https://discuss.huggingface.co/t/why-models-llama-in-particular-upcasts-softmax-to-fp32/44787). Can we make it optional? At least softmax
BF16 is good enough. And by "good enough" I mean "it does not crash at long context on my laptop's 3080 Ti" and "the return values are the same anyway, instability might be overstated".
Example. Making it optional:
```diff
diff --git a/src/transformers/models/llama/modeling_llama.py b/src/transformers/models/llama/modeling_llama.py
index 24231c3f7..230e5333c 100755
--- a/src/transformers/models/llama/modeling_llama.py
+++ b/src/transformers/models/llama/modeling_llama.py
@@ -228,8 +228,12 @@ class LlamaAttention(nn.Module):
attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
)
- # upcast attention to fp32
- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+ # optionally upcast attention to fp32
+ if self.config.use_attn_upcast:
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+ else:
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1).to(query_states.dtype)
```
Test script:
```python
from transformers import AutoModelForCausalLM
import torch
import sys
model = AutoModelForCausalLM.from_pretrained("./models/open_llama_3b/", torch_dtype=torch.bfloat16).cuda()
model.config.use_attn_upcast = "--no-oom" not in sys.argv
print("Predict that OOM will happen: ", model.config.use_attn_upcast)
input_ids = torch.arange(20)[None].cuda()
print(model(input_ids).logits.mean(-1))
input_ids = torch.arange(1000)[None].cuda()
print(model(input_ids).logits.mean())
```
With upcast removed
```console
$ python demo_py.py --no-oom
Predict that OOM will happen: False
tensor([[-9.0000, -6.0938, -1.8281, -7.7812, -7.5000, -7.5000, -7.6250, -7.7500,
-7.1250, -7.0000, -7.7188, -7.5625, -6.9688, -5.5312, -6.1562, -6.5312,
-7.5938, -7.0000, -7.1875, -6.8750]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<MeanBackward1>)
tensor(-6.9062, device='cuda:0', dtype=torch.bfloat16, grad_fn=<MeanBackward0>)
```
With upcast:
```console
$ python demo_py.py
Predict that OOM will happen: True
tensor([[-9.0000, -6.0938, -1.8281, -7.7812, -7.5000, -7.5000, -7.6250, -7.7500,
-7.1250, -7.0000, -7.7188, -7.5625, -6.9688, -5.5312, -6.1562, -6.5312,
-7.5938, -7.0000, -7.1875, -6.8750]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<MeanBackward1>)
Traceback (most recent call last):
File "/home/fella/src/llama/text-generation-webui/demo_py.py", line 14, in <module>
print(model(input_ids).logits.mean())
^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 690, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 580, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 295, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 232, in forward
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fella/src/sd/sd/lib/python3.11/site-packages/torch/nn/functional.py", line 1845, in softmax
ret = input.softmax(dim, dtype=dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 124.00 MiB (GPU 0; 15.74 GiB total capacity; 14.83 GiB already allocated; 134.38 MiB free; 15.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```<|||||>@Maykeye we have other options to reduce the memory footprint at inference time -- have you tried playing with our [support for 4-bit inference](https://huggingface.co/docs/transformers/v4.30.0/en/perf_infer_gpu_one#bitsandbytes-integration-for-fp4-mixedprecision-inference)? On a 3080 TI you may be able to run the 7B LLaMA model this way :)<|||||>Yes and quantized models produce noticeably different results. <|||||>In general, lowering the precision of these operations will have a more significant impact on downstream performance ([take it from the person that initially added the upcast at Meta](https://github.com/huggingface/transformers/pull/17437#issuecomment-1139723683)).
Since we have other memory reduction strategies, we will not add the flag you're proposing. (Still, the code is open-source, feel free to fork `transformers` and keep your changes ๐ค )<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,518 | closed | [i18n-<English>] Translating docs to <Chinese> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
I translated all the English documents into Chinese.
Let's bring the documentation to all the <languageName>-speaking community ๐ (currently 0 out of 267 complete)
Who would want to translate? Please follow the ๐ค [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* ๐ If you'd like others to help you with the translation, you can also post in the ๐ค [forums](https://discuss.huggingface.co/).
<!--
Keep on adding more as you go 🔥
-->
| 06-27-2023 11:11:36 | 06-27-2023 11:11:36 | Please completely fill the template with your language and also search the existing GitHub issues to avoid duplicates. |
transformers | 24,517 | closed | [Bug]Non-robust directory splitting and detection at get_cached_module_file | ### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I specified the correct model directory path under windows, but the library doesn't seem to find it correctly.
Just like this:
```python
# The directory THUDM/chatglm2-6b exists in the same directory as the code.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
```
2. Then I found that the reason was that the `submodule` was not split correctly.
**Source Code At**: `transformers/dynamic_module_utils:get_cached_module_file`
```python
# line: 235 This code returns true on both linux and windows.
is_local = os.path.isdir(pretrained_model_name_or_path)
if is_local:
    # line: 237 But this code can't split the path correctly, because os.path.sep is "\\" rather than "/".
    submodule = pretrained_model_name_or_path.split(os.path.sep)[-1]
```
3. So I solved the problem by modifying my path.
```python
tokenizer = AutoTokenizer.from_pretrained("THUDM\\chatglm2-6b", trust_remote_code=True)
```
### Expected behavior
Even in Windows code, it is common to specify paths with forward slashes, although that may not be the standard convention.
So I think there are two possible approaches:
1. Use a more robust way to split `submodule`, such as `pathlib` (a small sketch follows below).
2. Explicitly throws a warning asking windows users to pass in a standard windows path.
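A small sketch of option 1 above: `pathlib` (or `os.path.basename`) understands both separators that Windows accepts, unlike splitting on `os.path.sep`:

```python
from pathlib import Path

pretrained_model_name_or_path = "THUDM/chatglm2-6b"  # forward slashes, also valid on Windows
submodule = Path(pretrained_model_name_or_path).name
print(submodule)  # -> chatglm2-6b
```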
| 06-27-2023 10:32:03 | 06-27-2023 10:32:03 | Are you really certain `os.path.isdir(pretrained_model_name_or_path)` returns `True` with a path containing `/` on Windows? This seems really weird to me.<|||||>> Are you really certain `os.path.isdir(pretrained_model_name_or_path)` returns `True` with a path containing `/` on Windows? This seems really weird to me.
yes~
This is the screenshot of the debugging I just did on vscode.

<|||||>Annoying (that it's super inconsistent like this). Will look at this later today and try to come up with a fix!<|||||>Could you try if the PR mentioned above fixes your issue?<|||||>> Could you try if the PR mentioned above fixes your issue?
No problem~<|||||>> Could you try if the PR mentioned above fixes your issue?
Problem has been solved by this PR! Should I close this issue now?<|||||>It will be closed auto-magically by GitHub when the PR is merged :-) |
transformers | 24,516 | closed | RuntimeError: Cross Attention in GPTBigCodeModel | ### System Info
Code: https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py
In the case of cross attention, if `self.c_attn` is initialised to give output of dimension `2 * self.embed_dim` **(Line 112 & 227)**
The `key_value` split in **line 246**,
`key, value = key_value.split((self.head_dim, self.head_dim), dim=-1)`
* would raise an exception _(when `self.embed_dim != self.head_dim`)_
`RuntimeError: split_with_sizes expects split_sizes to sum exactly to 2*self.embed_dim`
_PS - I could be mistaken, but it would be worth having a look (and correct me if I am wrong!)._
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
inp = torch.rand((2, 4, 16))
g = GPTBigCodeAttention(config, is_cross_attention=True)
g(inp, encoder_hidden_states=inp)
```
`RuntimeError: split_with_sizes expects split_sizes to sum exactly to 32 (input tensor's size at dimension -1), but got split_sizes=[4, 4]`
### Expected behavior
The forward method should return the attention output of the shape [2, 4, 16]. | 06-27-2023 10:12:45 | 06-27-2023 10:12:45 | Hey! Would you mind sharing a full reproduced? (ex the config you are using)?
This might just be a missing `ValueError` and a check on the shapes! <|||||>Sure @ArthurZucker
Reproducible Code:
```
import torch
from transformers import GPTBigCodeConfig
# GPTBigCodeAttention is not exported at the top level, so import it from the modeling module
from transformers.models.gpt_bigcode.modeling_gpt_bigcode import GPTBigCodeAttention
config = GPTBigCodeConfig(multi_query=False, n_embd=16, n_head=2)
attn = GPTBigCodeAttention(config, is_cross_attention=True)
inp = torch.rand((2, 4, 16))
attn(inp, encoder_hidden_states=inp)
```
Thanks in advance :)<|||||>Bump :)
@ArthurZucker<|||||>Okay! Thanks for bumping.
Few things here:
- `GPTBigCodeAttention` is not in the `__init__` so not on the surface of transformers: we don't really support using it outside in such a way. Having a working snippet which uses a `GPTBigCodeModel` is better.
- You are not properly setting the arguments that you want to set: `attn.embed_dim` will show `768`. To change the number of heads and the `attn.embed_dim`, use `config = GPTBigCodeConfig(multi_query=False,hidden_size=16, num_attention_heads=2)`
- I don't recommend trying to use it this way, as it is not intended. I tried a few things to see if a quick fix was possible, but it seems that during integration all edge cases were not tested. Especially `add_cross_attention` which is not part of the configuration.
<|||||>Got it.
Thanks @ArthurZucker! |
transformers | 24,515 | closed | Update hyperparameter_search.py | # What does this PR do?
1. PR #24384 results in many tests failing related to HP Search. https://huggingface.slack.com/archives/C02CH2YP4EQ/p1687823783572709
2. The error is `tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hyperparameter_search
(line 122) TypeError: is_available() missing 1 required positional argument: 'self'`, raised when `default_hp_search_backend` is called (see the sketch after this list).
3. This PR fixes it.
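For context, a minimal sketch of this failure mode (a toy class, not the actual backend code):
```python
class DummyBackend:
    def is_available(self):  # an instance method, not a classmethod/staticmethod
        return True

print(DummyBackend().is_available())  # True: called on an instance
DummyBackend.is_available()           # TypeError: is_available() missing 1 required positional argument: 'self'
```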
Tested this via:
```
export CUDA_VISIBLE_DEVICES="0"
export RUN_SLOW="yes"
export LOGLEVEL=INFO
cd transformers
pytest -sv tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hyperparameter_search
``` | 06-27-2023 09:07:52 | 06-27-2023 09:07:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,514 | closed | LlamaModel.forward() got an unexpected keyword argument 'token_type_ids' | ### System Info
* transformers version: **4.30.2**
* Python version: **3.10.11**
* System: Ubuntu 22.04.2 LTS
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For the following code,
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = 'TheBloke/guanaco-7B-HF'
# AutoTokenizer defaults to the fast tokenizer (LlamaTokenizerFast for this checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.bfloat16, trust_remote_code=True)
model = model.to('cuda')
inputs = tokenizer(['hello'], max_length=100, truncation=True, return_tensors="pt").to(model.device)
outputs = model(**inputs)
```
I got the following error message:
```js
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[34], line 2
1 inputs = tokenizer(['hello'], max_length=100, truncation=True, return_tensors="pt").to(model.device)
----> 2 outputs = model(**inputs)
File ~/miniconda/envs/vocab/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: LlamaModel.forward() got an unexpected keyword argument 'token_type_ids'
```
The `inputs` value is the following:
```js
{'input_ids': tensor([[ 1, 22172]], device='cuda:0'), 'token_type_ids': tensor([[0, 0]], device='cuda:0'), 'attention_mask': tensor([[1, 1]], device='cuda:0')}
```
There is a `token_type_ids` entry in the value returned from the `tokenizer`; however, `LlamaModel.forward()` doesn't accept that argument.
I compared `LlamaTokenizer` and `LlamaTokenizerFast` and found that they behave differently.
```python
from transformers import LlamaTokenizer, LlamaTokenizerFast
print(f"LlamaTokenizer: {LlamaTokenizer.from_pretrained(model_name)('hello')}")
print(f"LlamaTokenizerFast: {LlamaTokenizerFast.from_pretrained(model_name)('hello')}")
```
the results are:
```js
LlamaTokenizer: {'input_ids': [1, 22172], 'attention_mask': [1, 1]}
LlamaTokenizerFast: {'input_ids': [1, 22172], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]}
```
### Expected behavior
Should `LlamaTokenizerFast` remove the `token_type_ids` in the returned value? Or should `LlamaModel.forward()` accept the `token_type_ids` in the function arguments list? Thanks. | 06-27-2023 07:05:35 | 06-27-2023 07:05:35 | cc @ArthurZucker <|||||>Hey, this is a duplicate of #23818, #24042. Make sure to use the latest version of transformers! <|||||>@ArthurZucker I just double checked, as provided above, the `transformers` I used is version `4.30.2`, which is the latest version for now.
<img width="1186" alt="ๆชๅฑ2023-06-28 ไธๅ2 26 09" src="https://github.com/huggingface/transformers/assets/6299096/41da9fa9-de69-4564-9159-3c63273dafaa">
<img width="1421" alt="ๆชๅฑ2023-06-28 ไธๅ2 26 34" src="https://github.com/huggingface/transformers/assets/6299096/e9928d80-d4a6-4891-ac47-2242e915cd6e">
Is there any new version I missed?<|||||>You can use the latest version using `pip install git+https://github.com/huggingface/transformers`! The pull request is not part of the release ๐ <|||||>Oh, ๐, Thank you ๐. |
transformers | 24,513 | closed | Add Multi Resolution Analysis (MRA) (New PR) | # Add Multi Resolution Analysis (MRA) for Approximate Self-Attention
This PR adds the MRA model to the repository.
Paper: [https://arxiv.org/pdf/2207.10284.pdf](https://arxiv.org/pdf/2207.10284.pdf)
Code: [https://github.com/mlpen/mra-attention](https://github.com/mlpen/mra-attention)
To-do:
- [x] Improve loading cuda kernels
- [x] Improve formatting and documentation
- [x] Upload checkpoints | 06-27-2023 06:44:06 | 06-27-2023 06:44:06 | Copied all files over from #20573 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Could you fix the failing tests?<|||||>Hello @sgugger, I've made sure all checks pass and fixed conflicts. <|||||>Hello @amyeroberts, I've addressed your comments and made some code changes. Please take a look at the updated files. <|||||>Hi @amyeroberts, I've addressed the suggestions from the code review. Please take a look at the updated code. <|||||>Thanks for catching these errors @amyeroberts! I've applied both changes. <|||||>@novice03
It seems the CI gets
```bash
(line 403) ValueError: sequence length must be divisible by the block_size.
```
when `load_cuda_kernels` loads successfully.
It's likely due to `seq_length=8` from `MraModelTester`, but I am not able to set the correct combination of `seq_length`, `block_size`, `num_blocks` to make it work.
Note, our daily CI (with torch 2.0.1 + CUDA 11.8) fails to load `custom CUDA kernels` and the execution goes to
```python
if cuda_kernel is None:
return torch.zeros_like(query).requires_grad_()
```
in `mra2_attention` and tests pass.
However, in our CI with torch 1.13 (and with CUDA 11.6.2), kernel is loaded, but the tests fail.
It would be great if you could help us find the correct settings so that the CI will pass when the kernel is loaded.
Thanks in advance ๐ค .<|||||>You can run
```python
python3 -m pytest -v tests/models/mra/test_modeling_mra.py::MraModelTest::test_for_masked_lm
```
The full error log is (if the custom CUDA kernel is loaded successfully):
```bash
self = <tests.models.mra.test_modeling_mra.MraModelTest testMethod=test_for_masked_lm>
def test_for_masked_lm(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_for_masked_lm(*config_and_inputs)
tests/models/mra/test_modeling_mra.py:322:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/models/mra/test_modeling_mra.py:210: in create_and_check_for_masked_lm
result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=token_labels)
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:1093: in forward
outputs = self.mra(
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:1028: in forward
encoder_outputs = self.encoder(
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:782: in forward
layer_outputs = layer_module(hidden_states, attention_mask)
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:729: in forward
self_attention_outputs = self.attention(hidden_states, attention_mask)
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:681: in forward
self_outputs = self.self(hidden_states, attention_mask)
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/mra/modeling_mra.py:615: in forward
context_layer = mra2_attention(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
query = tensor([[[[ 0.0500, -0.0523, -0.0260, ..., 0.0000, 0.0000, 0.0000],
[-0.1339, 0.0844, 0.0287, ..., 0... [ 0.0293, 0.1609, 0.0547, ..., 0.0000, 0.0000, 0.0000]]]],
device='cuda:0', grad_fn=<CatBackward0>)
key = tensor([[[[ 0.0185, -0.0316, 0.0150, ..., 0.0000, 0.0000, 0.0000],
[-0.0575, -0.1123, 0.0832, ..., 0... [ 0.0608, 0.0932, -0.0973, ..., 0.0000, 0.0000, 0.0000]]]],
device='cuda:0', grad_fn=<CatBackward0>)
value = tensor([[[[ 0.0131, 0.1242, 0.0672, ..., 0.0000, 0.0000, 0.0000],
[-0.0212, 0.0600, 0.0269, ..., 0... [-0.1005, -0.0048, 0.0561, ..., 0.0000, 0.0000, 0.0000]]]],
device='cuda:0', grad_fn=<CatBackward0>)
mask = tensor([[-2.1475e+09, 1.0000e+00, 1.0000e+00, -2.1475e+09, 1.0000e+00,
-2.1475e+09, -2.1475e+09, 1.0000e+... 1.0000e+00, 1.0000e+00, -2.1475e+09, 1.0000e+00,
-2.1475e+09, -2.1475e+09, 1.0000e+00]], device='cuda:0')
num_blocks = 64, approx_mode = 'full', block_size = 32, initial_prior_first_n_blocks = 0, initial_prior_diagonal_n_blocks = 0
def mra2_attention(
query,
key,
value,
mask,
num_blocks,
approx_mode,
block_size=32,
initial_prior_first_n_blocks=0,
initial_prior_diagonal_n_blocks=0,
):
"""
Use Mra to approximate self-attention.
"""
if cuda_kernel is None:
return torch.zeros_like(query).requires_grad_()
batch_size, num_head, seq_len, head_dim = query.size()
meta_batch = batch_size * num_head
if seq_len % block_size != 0:
> raise ValueError("sequence length must be divisible by the block_size.")
E ValueError: sequence length must be divisible by the block_size.
src/transformers/models/mra/modeling_mra.py:403: ValueError
```<|||||>Hello @ydshieh, thanks for bringing this up. We will likely have to use larger values for seq_len and hidden_size. Can you please try with the values [here](https://github.com/novice03/transformers/blob/1612188d6b6d094c81cc34a77641936687b8f7b3/tests/models/mra/test_modeling_mra.py)? |
transformers | 24,512 | open | Whisper model predicts "thank you" or "you" on silence | ### System Info
wav2vec2 models predict text even when there is no speech. The model predicts "thank you" and "you" on silence or empty speech.
Python 3.10
Ubuntu 20.04
transformers==4.30.2
https://github.com/gradio-app/gradio/issues/4663#issue-1772508542
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Follow these steps:
1. install gradio and transformers.
2. use automatic speech recognition pipeline and wave2vec2 base model.
3. use a microphone and don't speak.
It will keep predicting words on an empty stream.
Code to reproduce it.
```python
from transformers import pipeline
import gradio as gr
p = pipeline("automatic-speech-recognition", model="openai/whisper-base")
def transcribe(audio, state=""):
text = p(audio)["text"]
state += text + " "
return state, state
# Set the starting state to an empty string
gr.Interface(
fn=transcribe,
inputs=[
gr.Audio(source="microphone", type="filepath", streaming=True),
"state"
],
outputs=[
"textbox",
"state"
],
live=True).launch(share=True)
```
### Expected behavior
It should not predict words on an empty stream. | 06-27-2023 04:31:32 | 06-27-2023 04:31:32 | Hmm, this code snippet involves `gradio` which is usually out of the scope of the `transformers` GitHub issue pages. But trying to have a reproducible example with only `transformers` involved seems tricky in this case.
@sgugger @amyeroberts @ArthurZucker Any general comment on how we proceed in such cases?
<|||||>Hey!
Gradio can be interfering indeed, especially given the `streaming = True`. A few questions would help:
- does this happen on silence for multiple audio?
- was "thank you" said before? Seems like the example
- Can you provide an example audio file when this happens?
- Did you try without gradio (just the pipeline on the audio)?
This would help us a lot if you could provide this information! <|||||>yes
no
audios (silence, noise, recording)[audios.zip](https://github.com/huggingface/transformers/files/11882755/audios.zip)
Yes.
```python
p = pipeline("automatic-speech-recognition", model="openai/whisper-base")
p("silence.wav")['text']
/home/irfan/.pyenv/versions/3.10.10/envs/WhisperDemo/lib/python3.10/site-packages/transformers/generation/utils.py:1353: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
' you'
```<|||||>Same issue, using gradio and non-gradio wav sound sources. I've also seen the behavior in [Unity Undertone](https://leastsquares.io/docs/unity/undertone) , a Whisper package for Unity 3D. So it may be in Whisper and not the ASR pipeline. Maybe a few more switches to control returned info might help.
[Whisper: Decode with condition_on_previous_text=False](https://github.com/huggingface/transformers/issues/21467)
[A possible solution to Whisper hallucination](https://github.com/openai/whisper/discussions/679) |
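One possible mitigation, sketched below under my own assumptions (the `soundfile` dependency and the energy threshold are not part of any official fix), is to gate transcription on a crude silence check before calling the pipeline:
```python
import numpy as np
import soundfile as sf
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")

def transcribe_if_speech(path, energy_threshold=1e-4):
    # Skip transcription for near-silent clips; the threshold needs tuning per microphone.
    audio, _ = sf.read(path)
    if np.mean(np.square(audio)) < energy_threshold:
        return ""
    return asr(path)["text"]
```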
transformers | 24,511 | closed | Decoding adds space between special tokens when skip_special_tokens = True | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: -
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "lmsys/fastchat-t5-3b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast = False)
inp = tokenizer.encode("Hello world")
print(inp)
out = tokenizer.decode(inp, skip_special_tokens = False,spaces_between_special_tokens=True)
print(out)
> 'Hello world</s>'
out = tokenizer.decode(inp, skip_special_tokens = True,spaces_between_special_tokens=True)
print(out)
> 'Hello world'
out = tokenizer.decode(inp, skip_special_tokens = True,spaces_between_special_tokens=False)
print(out)
> 'Hello world'
```
### Expected behavior
I expected the last two outputs to be the same. In the second-to-last output, since special tokens are skipped, no space should have been added even if `spaces_between_special_tokens=True` | 06-27-2023 04:29:12 | 06-27-2023 04:29:12 | Hey! The behaviour is totally expected: you added `" "` as an added token. This means that it will not be split, and will not be encoded directly using the underlying model.
Here is what happens: (check [this](https://github.com/ArthurZucker/transformers/blob/74cb2b1dd9755f6bd72fc9e65d1e374e287afd84/src/transformers/models/t5/tokenization_t5.py#L324-L327)
```python
>>> tokenizer.encode("Hello world")
# 1. Split the input using a trie made of special tokens and added tokens:
["Hello", " ", "world"]
# 2. Add a prefix space
[" Hello", " ", " world"]
# 3. Replace prefix space with meta-space
['▁Hello', ' ', '▁world']
# 4. Get the corresponding tokens
[8774, 32106, 296, 1]
>>> tokenizer.decode(inp, skip_special_tokens = False,spaces_between_special_tokens=True)
# When decoding, `convert_tokens_to_string` is called. A ` ` is added before every special token. But ` ` is a special token, so a space will be added to it
["โHello", " ", "โworld"]
```
When you skip special tokens, `" "` is skipped, but you still have the space from the tokenizer that *joins* the text on a space to properly reformat it.
You are not correctly using the API; I would recommend removing `" "` from the special tokens.
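As a quick check (a sketch of mine, not part of the original reply), you can inspect which strings were registered as added tokens for this checkpoint:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0", use_fast=False)
# Added tokens are matched before the underlying SentencePiece model runs, which is
# why a bare " " entry changes both encoding and decoding behaviour.
print(tok.get_added_vocab())
```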
transformers | 24,510 | closed | Show a warning for missing attention masks when pad_token_id is not None | # What does this PR do?
Fixes #16136
Shows a one-time warning message when the pad_token_id is not None and no attention masks are given.
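A minimal sketch of the kind of check described (the helper name and exact condition are assumptions on my side, not necessarily the code added in this PR):
```python
import logging

logger = logging.getLogger(__name__)

def warn_if_padding_and_no_attention_mask(input_ids, attention_mask, pad_token_id):
    # input_ids is expected to be a torch.LongTensor of token ids.
    if attention_mask is not None or pad_token_id is None:
        return
    if (input_ids == pad_token_id).any():
        logger.warning(
            "Padding tokens were found in `input_ids` but no `attention_mask` was passed. "
            "This may lead to unexpected results; please pass an attention mask."
        )
```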
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #16136
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante @ydshieh | 06-27-2023 01:44:15 | 06-27-2023 01:44:15 | @ydshieh @gante
I've added the warning message as per #21916 and also a short test. Please let me know if you have any suggestions. If it looks good, I can copy/paste the warning to more models as well. Thanks!
<|||||>@hackyon LGTM ๐
But before you spend time propagating the change to the other models, let's also get a green light from a core maintainer (cc @sgugger )<|||||>Thanks for the review.
Would it be better to check for the presence of the pad_token_id inside input_ids first before throwing the error, as per
https://github.com/huggingface/transformers/pull/17444/files? If so, I can make the change to reflect that here.
<|||||>@sgugger
Thanks for the input. I changed my pull request to be more like #17444. Let me know what you think. Thanks!
<|||||>Thanks, I've updated the code accordingly.<|||||>@gante could you have a second look here?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@hackyon Thank you for the contribution! Would you like to add it to the remaining models? ๐ค <|||||>Sure, I'll look into it ๐ |
transformers | 24,509 | closed | Documentation Clarification: Autoregressive Models using GenerationMixin | In the documentation on HuggingFace and within the source code for autoregressive models in comments, it shows how to use the `generate` method from the `GenerationMixin`. Here is an example in the code for the Llama model.
```
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, CausalLMOutputWithPast]:
r"""
Args:
labels (torch.LongTensor of shape (batch_size, sequence_length), *optional*):
Labels for computing the masked language modeling loss. Indices should either be in [0, ...,
config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns:
Example:
###python
>>> from transformers import AutoTokenizer, LlamaForCausalLM
>>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
>>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
"""
```
However, autoregressive models shouldn't need beam search, as causal language modeling output should be able to directly decode the tokens. In tracing the model inheritance, there is no connection to `GenerationMixin`, either to expose a generate method or an implementation of the generate method for these models. What am I missing?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Review the documentation of autoregressive models
2. Review the source code of autoregressive models
3. Autoregressive models do not have `GenerationMixin`, but their comments say to use the `.generate` method.
### Expected behavior
1. The source code of the models reflects the comments with `GenerationMixin` implementation or `.generate` method implementation or the comments and model documentation reflect how to use for inference if `GenerationMixin` is not used. | 06-26-2023 23:37:43 | 06-26-2023 23:37:43 | I was confusing training vs inference. At training time, the model autoregressively trains by shifting the labels. It has the entire input and output allowing for cross entropy for each label prediction. However, at inference time, that autoregressive generation has to be coded. Where I'm not totally clear is where the code is that connects the models to the `GenerationMixin` for inference. It looks like that it should just work to call `.generate` from the models don't inherent from `GenerationMixin`.<|||||>Found it, the base class, `class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMixin):` enables all classes to have access to generate method and they are guarded based on the config and `can_generate` which is based on whether or not it implements `prepare_inputs_for_generation`method (e.g. all models have `generate` method they can invoke, `generate` method will work if `prepare_inputs_for_generation` is implemented otherwise give an error). If only I can delete this issue. |
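For anyone tracing this later, a quick sanity check of that inheritance chain (a sketch; `can_generate` being a classmethod reflects recent versions of the library):
```python
from transformers import LlamaForCausalLM
from transformers.generation import GenerationMixin

print(issubclass(LlamaForCausalLM, GenerationMixin))  # True, via PreTrainedModel
print(LlamaForCausalLM.can_generate())                # True: prepare_inputs_for_generation is implemented
```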
transformers | 24,508 | open | Add Flax diverse group search | # What does this PR do?
Mimics https://github.com/huggingface/transformers/pull/9006, but for Flax.
We want to match how PyTorch's logic accounts for `group_size` and `num_beam_groups` [here](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/beam_search.py#L175) and [here](https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/generation/beam_search.py#L249C1-L281C26)
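A tiny sketch of the grouping arithmetic being mirrored (illustrative only; the Flax implementation tracks this inside traced arrays rather than Python lists):
```python
def beam_indices_per_group(num_beams: int, num_beam_groups: int):
    # Beams are partitioned into contiguous groups of size num_beams // num_beam_groups,
    # and each group is advanced with a diversity penalty against earlier groups.
    group_size = num_beams // num_beam_groups
    return [list(range(g * group_size, (g + 1) * group_size)) for g in range(num_beam_groups)]

print(beam_indices_per_group(6, 3))  # [[0, 1], [2, 3], [4, 5]]
```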
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 23:16:03 | 06-26-2023 23:16:03 | cc @sanchit-gandhi <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @yeandy! This PR is looking in good shape - thanks for your efforts so far! Would you like to go all the way and see it to completion? Happy to help with the remainder of the integration!<|||||>Hey @sanchit-gandhi. Due to other commitments, I currently don't have bandwidth to continue this. And the timeline for me to get to this unknown right now. If someone else wants to work on this, I'm ok with that. |
transformers | 24,507 | closed | Add Compact Convolutional Transformer model (CCT) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20133 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 06-26-2023 22:58:20 | 06-26-2023 22:58:20 | Could you let me know how to fix the failed test. The issue apparently is that PIL.Image.LINEAR does not exist<|||||>Rebase on main, a fix has been merge! ๐ <|||||>Done. Could you let me know if there are any further changes<|||||>Hey! Sorry I forgot to mention this in my previous comment, we are trying to push models to the hub as often as possible, to make it a LOT easier for contributors! Then we share the repo on the hub where everything is supported ๐ค
I would recommend following [this tutorial](https://huggingface.co/docs/transformers/custom_models), and sharing here the uploaded model! Tell me if that sounds good to you! <|||||>Sorry if I misunderstood, but I followed this [tutorial](https://huggingface.co/docs/transformers/add_new_model#514-port-brandnewbert-to-transformers), and it looks similar to the tutorial you shared. Could you quickly point out the difference between the two, or what additionally I must do?<|||||>Sure, adding the model to the hub rather than with a PR to transformers would give a model usable out of the box, without having to fix all of the CI, while keeping your code and no need for reviews etc. Mostly you would have to add MAPPINGS as explain in the tutorial, and users will just need to use `trust_remote_code = True` when doing `class.from_pretrained`! <|||||>I've made the changes, and uploaded the models: [cct_224](https://huggingface.co/rishabbala/cct_14_7x2_224) and [cct_384](https://huggingface.co/rishabbala/cct_14_7x2_384). Let me know if this looks ok, and if I should go ahead and close the PR. Thanks for the help :)<|||||>Look good to me thanks a lot! Would suggest you to add a model card for people who don't really know the model and might want a quick way to use it! |
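For readers landing here, loading such a Hub-hosted custom model generally looks like the sketch below (the repo id and auto class are placeholders; check the model card for the exact ones):
```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "your-username/your-cct-model",  # placeholder repo id
    trust_remote_code=True,          # required because the architecture lives in the repo's code
)
```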
transformers | 24,506 | open | Image-Classification Dataloader Missing Image Key | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-1029-azure-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
-
### Who can help?
@amyeroberts
I tried to run the official example of image classification as: python run_image_classification.py --output_dir output --dataset_name cifar10 --do_train --overwrite_output_dir

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I just tried to run the official image-classification examples.
### Expected behavior
I was expecting the code to find the image key. | 06-26-2023 20:57:13 | 06-26-2023 20:57:13 | This dataset seems to have `img` as key in the dataset.<|||||>If you just want to try the script, the official example in the README is
```python
python run_image_classification.py \
--dataset_name beans \
--output_dir ./beans_outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--push_to_hub \
--push_to_hub_model_id vit-base-beans \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337
```<|||||>> This dataset seems to have `img` as key in the dataset.
That's not the case. In the Trainer, `_remove_unused_columns` removes the "image" keys.
<|||||>What I am saying is the original dataset has the `img` key, but the script expects an `image` key. That is why there is an error. <|||||>If you really want to try `cifar10`, the quick way is to replace `"image"` with `"img"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
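For reference, a minimal sketch of the `img` → `image` workaround suggested above (an assumption on my side using the `datasets` API, not tested against the exact script):
```python
from datasets import load_dataset

dataset = load_dataset("cifar10")
# Rename the column so it matches the key the example script expects.
dataset = dataset.rename_column("img", "image")
```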
transformers | 24,505 | closed | Clean load keys | # What does this PR do?
This PR finishes the work done in and completely cleans up the `_keys_to_ignore_on_save`, `_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected`. Those were used in three situations:
1. Not saving the tied weights. This came from the (wrong) assumption that torch would take twice the space for tied weights (which it doesn't) and also created bugs where non-tied weights were not saved (unless a hack was added like for RoBERTa models). This is not necessary since PyTorch doesn't take more space for tied weights and safetensors will properly remove them (with `_tied_weights_keys`)
2. Ignoring non-saved non-persistent buffers. This can be done automatically in the code of modeling_utils, as non-persistent buffers appear in the model's named buffers but not in the state dict, so they are easy to detect (see the sketch after this list).
3. Ignoring known unexpected weights from another architecture (like the pooler). This isn't necessary anymore since we don't issue a warning in this case. | 06-26-2023 20:52:52 | 06-26-2023 20:52:52 | _The documentation is not available anymore as the PR was closed or merged._ |
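A minimal sketch of the detection described in point 2 (a toy module, not the modeling_utils code):
```python
import torch
from torch import nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("persistent_buf", torch.zeros(1))
        self.register_buffer("transient_buf", torch.zeros(1), persistent=False)

toy = Toy()
# Non-persistent buffers show up in named_buffers() but not in the state dict.
non_persistent = {name for name, _ in toy.named_buffers()} - set(toy.state_dict())
print(non_persistent)  # {'transient_buf'}
```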
transformers | 24,504 | closed | Add bitsandbytes support for gpt2 models | # What does this PR do?
The current bitsandbytes integration only supports models using [nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear), which excludes gpt2 and other models that instead use [Conv1D](https://github.com/huggingface/transformers/blob/68c92981ff2b804979d2e6107eeefe298d1e5183/src/transformers/pytorch_utils.py#L85). This PR enables loading/serialization of these models, as well as gpt2-xl tests for int8 and 4bit.
This is achieved by transposing the weight matrices of Conv1D layers before quantization.
Note: Following the suggestion in the bnb tests to use models with >1b params only leaves [gpt2-xl](https://huggingface.co/gpt2-xl), which is unfortunately a 6.4GB download due to being stored in float32.
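A rough illustration of the idea (not the PR's actual code): `Conv1D` stores its weight as `(in_features, out_features)`, i.e. the transpose of `nn.Linear`'s layout, so transposing first lets the same quantized-linear path be reused.
```python
import torch
from transformers.pytorch_utils import Conv1D

conv = Conv1D(nf=8, nx=4)                            # nf = out_features, nx = in_features
linear_style_weight = conv.weight.t().contiguous()   # now (out_features, in_features)
print(conv.weight.shape, linear_style_weight.shape)  # torch.Size([4, 8]) torch.Size([8, 4])
```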
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@younesbelkada, @TimDettmers | 06-26-2023 20:29:24 | 06-26-2023 20:29:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> FYI, on my side I get these failing tests, I believe there might be a small difference between our envs. We can always update the expected sentence later in case they fail on the daily CI (which probably will be the case). Happy also to add the missing test in a follow up PR.
> <img alt="Screenshot 2023-06-27 at 16 08 55" width="1314" src="https://user-images.githubusercontent.com/49240599/249179643-418abf8c-3408-49b6-8212-5e4e75e5f284.png">
Aha, it's been stable for me so far, but I can see that happening. If it's any help I'm running this on an RTX 4090 and `torch==2.1.0.dev20230603+cu121`.
> Also one test is failing for 4bit:
>
> ```shell
> FAILED tests/bnb/test_4bit.py::Bnb4BitGPT2Test::test_memory_footprint - AttributeError: 'GPT2MLP' object has no attribute 'dense_4h_to_h'
> ```
>
> Could you quickly address a fix? After that we should be ready to merge
Nice catch! I have a fix in mind that should also remove most of the int8 test code I added, so I'll get that in asap. |