| column | type | values |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
22,176
closed
Deepspeed initialization AttributeError: 'EncoderDecoderConfig' object has no attribute 'hidden_size'
### System Info System: Ubuntu 22.04 - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.29 - Python version: 3.8.13 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, 4x RTX 3090 - Using distributed or parallel set-up in script?: yes, deepseed <details> <summary>packages - Click to expand!</summary> ``` Package Version ------------------------- ------------ absl-py 1.4.0 abydos 0.5.0 accelerate 0.17.0 aiohttp 3.8.4 aiosignal 1.3.1 alembic 1.9.4 antlr4-python3-runtime 4.9.3 anyio 3.6.2 appdirs 1.4.4 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 arrow 1.2.3 astroid 2.14.2 asttokens 2.2.1 async-timeout 4.0.2 attrs 22.2.0 Babel 2.12.1 backcall 0.2.0 beautifulsoup4 4.11.2 black 23.1.0 bleach 6.0.0 cachetools 5.3.0 certifi 2022.12.7 cffi 1.15.1 charset-normalizer 3.0.1 click 8.1.3 clldutils 3.19.0 cloudpickle 2.2.1 codecov 2.0.22 colorama 0.4.6 coloredlogs 10.0 colorlog 6.7.0 comm 0.1.2 contourpy 1.0.7 coverage 5.5 csvw 3.1.3 cycler 0.11.0 Cython 0.29.33 databricks-cli 0.17.4 dataclasses 0.6 datasets 2.10.1 debugpy 1.6.6 decorator 5.1.1 deepspeed 0.8.2 defusedxml 0.7.1 deprecation 2.1.0 dill 0.3.6 docker 6.0.1 docker-pycreds 0.4.0 editdistance 0.6.2 entrypoints 0.4 evaluate 0.4.0 executing 1.2.0 fairseq 0.10.0 fastjsonschema 2.16.3 filelock 3.9.0 Flask 2.2.3 fonttools 4.38.0 fqdn 1.5.1 frozenlist 1.3.3 fsspec 2023.1.0 fuzzywuzzy 0.17.0 gitdb 4.0.10 GitPython 3.1.31 google-auth 2.16.1 google-auth-oauthlib 0.4.6 greenlet 2.0.2 grpcio 1.51.3 gunicorn 20.1.0 hjson 3.1.0 huggingface-hub 0.12.1 humanfriendly 10.0 hydra-core 1.3.2 idna 3.4 importlib-metadata 6.0.0 importlib-resources 5.12.0 ipykernel 6.21.2 ipython 8.11.0 ipython-genutils 0.2.0 ipywidgets 8.0.4 isodate 0.6.1 isoduration 20.11.0 isort 5.12.0 itsdangerous 2.1.2 jedi 0.18.2 jellyfish 0.7.2 Jinja2 3.1.2 jiwer 2.5.1 jmespath 1.0.1 joblib 1.2.0 jsonlines 1.2.0 jsonpointer 2.3 jsonschema 4.17.3 jupyter 1.0.0 jupyter_client 8.0.3 jupyter-console 6.6.2 jupyter_core 5.2.0 jupyter-events 0.6.3 jupyter_server 2.3.0 jupyter_server_terminals 0.4.4 jupyterlab-pygments 0.2.2 jupyterlab-widgets 3.0.5 kiwisolver 1.4.4 language-tags 1.2.0 latexcodec 2.0.1 lazy-object-proxy 1.9.0 Levenshtein 0.20.2 lightning-utilities 0.7.1 lingpy 2.6.9 lxml 4.9.2 Mako 1.2.4 many-stop-words 0.2.2 Markdown 3.4.1 MarkupSafe 2.1.2 matplotlib 3.7.0 matplotlib-inline 0.1.6 mccabe 0.7.0 mistune 2.0.5 mlflow 1.27.0 more-itertools 9.1.0 multidict 6.0.4 multiprocess 0.70.14 mypy-extensions 1.0.0 nbclassic 0.5.2 nbclient 0.7.2 nbconvert 7.2.9 nbformat 5.7.3 nest-asyncio 1.5.6 networkx 3.0 newick 1.7.0 ninja 1.11.1 nltk 3.8.1 notebook 6.5.2 notebook_shim 0.2.2 numpy 1.24.2 oauthlib 3.2.2 omegaconf 2.3.0 packaging 23.0 pandas 1.5.3 pandocfilters 1.5.0 parso 0.8.3 pathspec 0.11.0 pathtools 0.1.2 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 23.0.1 pkgutil_resolve_name 1.3.10 platformdirs 3.0.0 pluggy 0.13.1 portalocker 2.7.0 progress 1.6 prometheus-client 0.16.0 prometheus-flask-exporter 0.22.2 prompt-toolkit 3.0.38 protobuf 3.20.3 psutil 5.9.4 ptyprocess 0.7.0 pure-eval 0.2.2 py 1.11.0 py-cpuinfo 9.0.0 pyarrow 11.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybtex 0.24.0 pycldf 1.34.0 pycparser 2.21 pydantic 1.10.6 Pygments 2.14.0 PyJWT 2.6.0 pylatexenc 2.10 pylint 2.16.2 pyparsing 3.0.9 pyrsistent 0.19.3 pytest 5.4.3 pytest-cov 2.8.1 
python-dateutil 2.8.2 python-docx 0.8.11 python-frontmatter 1.0.0 python-json-logger 2.0.7 python-Levenshtein 0.12.2 python-nexus 2.9.0 pytorch-lightning 1.8.6 pytz 2022.7.1 pyxDamerauLevenshtein 1.7.1 PyYAML 6.0 pyzmq 25.0.0 qtconsole 5.4.0 QtPy 2.3.0 querystring-parser 1.2.4 rapidfuzz 2.13.7 rdflib 6.2.0 regex 2022.10.31 requests 2.28.2 requests-oauthlib 1.3.1 responses 0.18.0 rfc3339-validator 0.1.4 rfc3986 1.5.0 rfc3986-validator 0.1.1 rope 0.14.0 rsa 4.9 rapidfuzz 2.13.7 rdflib 6.2.0 regex 2022.10.31 requests 2.28.2 requests-oauthlib 1.3.1 responses 0.18.0 rfc3339-validator 0.1.4 rfc3986 1.5.0 rfc3986-validator 0.1.1 rope 0.14.0 rsa 4.9 sacrebleu 2.3.1 sacremoses 0.0.53 scikit-learn 0.22.2.post1 scipy 1.10.1 seaborn 0.11.2 Send2Trash 1.8.0 sentencepiece 0.1.97 sentry-sdk 1.16.0 setproctitle 1.3.2 setuptools 67.4.0 six 1.16.0 smmap 5.0.0 sniffio 1.3.0 soupsieve 2.4 SQLAlchemy 2.0.4 sqlparse 0.4.3 sru 3.0.0.dev6 stack-data 0.6.2 symspellpy 0.1.0 tabulate 0.9.0 tensorboard 2.12.0 tensorboard-data-server 0.7.0 tensorboard-plugin-wit 1.8.1 tensorboardX 2.6 termcolor 2.2.0 terminado 0.17.1 textdistance 4.5.0 tinycss2 1.2.1 tokenizers 0.13.2 tomli 2.0.1 tomlkit 0.11.6 torch 1.13.1+cu117 torchmetrics 0.11.3 tornado 6.2 tqdm 4.64.1 traitlets 5.9.0 transformers 4.26.1 typing_extensions 4.5.0 Unidecode 1.3.6 uri-template 1.2.0 uritemplate 4.1.1 urllib3 1.26.14 wandb 0.13.10 wcwidth 0.2.6 webcolors 1.12 webencodings 0.5.1 websocket-client 1.5.1 weighted-levenshtein 0.2.2 Werkzeug 2.2.3 wheel 0.38.4 widgetsnbextension 4.0.5 wrapt 1.15.0 xlrd 1.2.0 xxhash 3.2.0 yarl 1.8.2 zipp 3.15.0 ``` </details> ### Who can help? HF Trainer: @stas00, Accelerate: @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ```cmd deepspeed --num_gpus=4 training_enc_dec_model_from_scratch.py \ --output_dir="./hf_output/" \ --per_device_train_batch_size=128 \ --dataloader_num_workers=8 \ --gradient_accumulation_steps=1 \ --gradient_checkpointing=False \ --fp16 \ --logging_steps=500 \ --eval_steps=5000 \ --save_steps=50000 \ --num_train_epochs=2 \ --learning_rate=0.001 \ --warmup_steps=5000 \ --logging_first_step=True \ --eval_accumulation_steps=100 \ --log_level=warning \ --deepspeed deepspeed_zero2.json ``` deepspeed_zero2.json >> ``` { "wandb": { "enabled": true, "project": "Project" }, "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": 2e-6, "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 3e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 3e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "steps_per_print": 500 } ``` The training script ```python from transformers import ( AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments, BartForConditionalGeneration, HfArgumentParser, BertConfig, EncoderDecoderConfig, EncoderDecoderModel, BartConfig, BartForConditionalGeneration, ReformerConfig, LEDConfig, LEDForConditionalGeneration, ) from transformers.data.data_collator import DataCollatorForSeq2Seq from encoder_decoder_utils import ( DataCollatorForEncoderDecoder, Seq2SeqTrainerForEncoderDecoder, ) import torch import torch.distributed import transformers import datasets import os import sys import logging import socket from datetime import datetime, date logger = logging.getLogger(__name__) if __name__ == "__main__": parser = HfArgumentParser(Seq2SeqTrainingArguments) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. 
training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: training_args = parser.parse_args_into_dataclasses() # get the first value from tuple, probably lib error training_args = training_args[0] # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") experiment_name = f"hf_enc_dec_custom" # just for loading tokenizer model_name = "allegro/herbert-base-cased" # %% define training parameters batch_size = training_args.per_device_train_batch_size output_dir = f"{training_args.output_dir}/" path_datatime = datetime.now().strftime("%Y_%m_%d-%I_%M_%S") training_args.run_name = f"{experiment_name}-{model_name}-{path_datatime}" training_args.predict_with_generate = True training_args.do_train = True training_args.do_eval = True training_args.evaluation_strategy = ( transformers.trainer_utils.IntervalStrategy.STEPS ) training_args.logging_strategy = ( transformers.trainer_utils.IntervalStrategy.STEPS ) # "steps" training_args.save_total_limit = 5 training_args.seed = 123 training_args.report_to = ["wandb"] logger.info(f"After set new values Training/evaluation parameters {training_args}") #! 
data local machine data_file = "gec_data_file.jsonl" # 1M json line file # gec_data_file.jsonl content: # {"correct": "Ciasne, koronkowe, podniecające.", "incorrect": "Ciasne, koronkowe, podniwcające."} # {"correct": "Ślinka cieknie, serce rwie żebra, szabla w dłoń.", "incorrect": "Ślinka ciekni4, srve rwie żebra, sszabla w dloń."} num_proc = 1 data_file = os.path.abspath(data_file) dataset_name = os.path.basename(data_file) hf_cache_dir = f"{training_args.output_dir}/{experiment_name}/data/{dataset_name}/" hf_cache_dir = os.path.abspath(hf_cache_dir) output_dir = f"{training_args.output_dir}/{experiment_name}/{dataset_name}/{path_datatime}" training_args.output_dir = f"{output_dir}/checkpoints/" dataset = datasets.load_dataset( "json", data_files=data_file, cache_dir=hf_cache_dir, ) test_size = 2000 train_size = len(dataset) - test_size dataset = dataset.train_test_split(test_size=test_size, seed=123) train_data = dataset["train"] train_data = train_data.select(range(train_size)) val_data = dataset["test"] logger.info(f"\n\n*********\nTrain={len(train_data)} val={len(val_data)}") # %% # %% # %% load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.model_max_length = 512 tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token # %% initialize the Model # all the parameters could be found here # https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/bert#transformers.BertConfig config_encoder = BertConfig() config_decoder = BertConfig() config_encoder.hidden_size = 512 config_encoder.num_hidden_layers = 2 config_encoder.num_attention_heads = 4 config_encoder.intermediate_size = 1024 config_encoder.decoder_start_token_id = tokenizer.cls_token_id config_encoder.bos_token_id = tokenizer.bos_token_id config_encoder.eos_token_id = tokenizer.sep_token_id config_encoder.pad_token_id = tokenizer.pad_token_id config_encoder.vocab_size = tokenizer.vocab_size config_decoder.hidden_size = 512 config_decoder.intermediate_size = 1024 config_decoder.num_hidden_layers = 2 config_decoder.num_attention_heads = 4 config_decoder.is_decoder = True config_decoder.add_cross_attention = True config_decoder.decoder_start_token_id = tokenizer.cls_token_id config_decoder.bos_token_id = tokenizer.bos_token_id config_decoder.eos_token_id = tokenizer.sep_token_id config_decoder.pad_token_id = tokenizer.pad_token_id config_decoder.vocab_size = tokenizer.vocab_size config = EncoderDecoderConfig.from_encoder_decoder_configs( config_encoder, config_decoder ) # https://huggingface.co/blog/how-to-generate config.max_length = 512 config.min_length = 0 config.no_repeat_ngram_size = 3 config.early_stopping = True config.length_penalty = 2.0 config.num_beams = 5 # config.tie_word_embeddings = True config.tie_encoder_decoder = False config.decoder_start_token_id = tokenizer.cls_token_id config.eos_token_id = tokenizer.sep_token_id config.pad_token_id = tokenizer.pad_token_id config.vocab_size = config.encoder.vocab_size enc_dec = EncoderDecoderModel(config=config) model_file_name = f"{model_name}-custom" # Saving the model, including its configuration enc_dec.save_pretrained(model_file_name) # loading model and config from pretrained folder encoder_decoder_config = EncoderDecoderConfig.from_pretrained(model_file_name) model = EncoderDecoderModel.from_pretrained( model_file_name, config=encoder_decoder_config ) # set the wandb project where this run will be logged os.environ["WANDB_PROJECT"] = "Project" # save your trained model checkpoint to wandb 
os.environ["WANDB_LOG_MODEL"] = "false" # turn off watch to log faster os.environ["WANDB_WATCH"] = "false" logger.info(f"\n\nNum Params: {model_size}") # %%### process data, tokenize and prepare for training logger.info(f"process train data (tokenization)") def process_data_to_model_inputs(batch, max_len=512): """map function for transformation text to ids, tokenize the inputs and labels """ # Tokenizer will automatically set [BOS] <text> [EOS] inputs = batch["incorrect"] targets = batch["correct"] # tokenize the inputs and labels # without padding, the data collator will pad model_inputs = tokenizer(inputs, max_length=max_len, truncation=True) labels = tokenizer(text_target=targets, max_length=512, truncation=True) model_inputs["labels"] = labels.input_ids return model_inputs process_batch = 5000 train_data_tok = train_data.map( process_data_to_model_inputs, batched=True, batch_size=process_batch, remove_columns=["incorrect", "correct"], num_proc=num_proc ) logger.info(f"process val data (tokenization)") val_data_tok = val_data.map( process_data_to_model_inputs, batched=True, batch_size=process_batch, remove_columns=["incorrect", "correct"], num_proc=num_proc, cache_file_name=f"{hf_cache_dir}/val_mapped_{test_size}.arrow", # keep_in_memory=True ) del train_data del val_data del dataset logger.info(f"done process data (tokenization)") data_collator = DataCollatorForSeq2Seq( tokenizer=tokenizer, model=model, max_length=512, pad_to_multiple_of=8 ) trainer = Seq2SeqTrainerForEncoderDecoder( args=training_args, model=model, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=None, train_dataset=train_data_tok, eval_dataset=val_data_tok, ) logger.info(f"start training") trainer.train() # %% trainer.save_model(f"{output_dir}/final") ``` ### Expected behavior Start traning without error ### Traceback ``` [2023-03-16 05:54:45,026] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]} [2023-03-16 05:54:45,026] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=4, node_rank=0 [2023-03-16 05:54:45,026] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]}) [2023-03-16 05:54:45,026] [INFO] [launch.py:162:main] dist_world_size=4 [2023-03-16 05:54:45,026] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3 [2023-03-16 05:54:48,465] [INFO] [comm.py:661:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 
03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.90it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.90it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.89it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.87it/s] 03/16/2023 05:54:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:51 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:55 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/train_mapped_29636272_*_of_00008.arrow 03/16/2023 05:54:56 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/datagec_data_file.jsonl/val_mapped_10000_*_of_00008.arrow Loading results from main process Traceback (most recent call last): File "playground/hf_transformers/training_enc_dec_model_from_scratch.py", line 458, in <module> trainer.train() File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1612, in _inner_training_loop deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/deepspeed.py", line 312, in deepspeed_init hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps) File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/deepspeed.py", line 174, in trainer_config_finalize hidden_size = model.config.hidden_size File 
"/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/configuration_utils.py", line 260, in __getattribute__ return super().__getattribute__(key) AttributeError: 'EncoderDecoderConfig' object has no attribute 'hidden_size' 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/datagec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow [2023-03-16 05:54:59,072] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526261 [2023-03-16 05:54:59,073] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526262 [2023-03-16 05:54:59,218] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526263 [2023-03-16 05:54:59,362] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526264 [2023-03-16 05:54:59,545] [ERROR] [launch.py:324:sigkill_handler] ['/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/bin/python', '-u', 'playground/hf_transformers/training_enc_dec_model_from_scratch.py', '--local_rank=3', '--output_dir=./hf_output/', '--per_device_train_batch_size=128', '--dataloader_num_workers=8', '--gradient_accumulation_steps=1', '--gradient_checkpointing=False', '--fp16', '--logging_steps=500', '--eval_steps=5000', 
'--save_steps=50000', '--num_train_epochs=2', '--learning_rate=0.001', '--warmup_steps=5000', '--logging_first_step=True', '--eval_accumulation_steps=100', '--deepspeed', 'playground/hf_transformers/deepspeed_zero2.json', '--log_level=warning'] exits with return code = 1 ```
03-15-2023 08:31:48
03-15-2023 08:31:48
Hi @ksopyla Thanks for raising this issue and for giving all the script and environment details. Could you share the full traceback of the error encountered? Although I'm not immediately sure where the error is being raised, it is expected that the error occurs if `hidden_size` is being references from the model's config i.e. `model.config.hidden_size` as it's only the encoder and decoder configs that have this parameter. <|||||>HI @amyeroberts I have updated the issue and added the traceback. I hope it helps. Yes, you are right problem occurs when script tries to get ```model.config.hidden_size``` I would add to this that the encoder and decoder could have different sizes in terms of the number of layers and hidden_size <|||||>Thank you for the full traceback, @ksopyla. Now it's easy to support you. Please try again with the latest version of transformers. You can see here that this situation has been dealt with on Feb 10th so this assert shouldn't happen again as it now carefully checks different scenarios: https://github.com/huggingface/transformers/blob/1c4a9acc7319221643555c0e8ff1fda2f758c400/src/transformers/deepspeed.py#L179-L213 However if you don't set `hidden_size` then please don't use `auto` values for zero configuration section. This is what the proper assert in the latest version will tell you to do. This is just an automatic optimization and you can remove these entries completely and deepspeed will use its defaults. Or you can study what those values should be and set them yourself as explained here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero3-config<|||||>Sure, I will check and let you know. Meanwhile, could you explain what you mean by " then please don't use auto values for zero configuration section." I use Zero2 https://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero2-config not Zero3-config I infer you talk about these parameters, which should be set if I use Zero3. ``` hidden_size_based_keys = [ "zero_optimization.reduce_bucket_size", "zero_optimization.stage3_prefetch_bucket_size", "zero_optimization.stage3_param_persistence_threshold", ] ``` Correct me if I am wrong. Or maybe I should also set those in Zero2? <|||||>ah, ok, thank you for clarifying the situation - that's even simpler then. Just upgrade transformers, change nothing in your setup and it should just work. The original code just did `model.config.hidden_size` regardless of the config type and thus it is failing for you.<|||||>I have updated the transformers to 4.27 and pytorch 2.0 and it works :) But I have an issue that Zero-2 is slower than pytorch distributed approach, try to investigate it further. Meanwhile thank you for your help. <|||||>Best to discuss a new issue in a new Issue, but if we can wrap it up quickly - it's absolutely normal that the speed will progressively drop as you enable stages 1, 2 and 3, as each stage creates an additional overhead. If you can fit everything into a single GPU do not use Deepspeed. It's a scalability solution for when one can't fit the training or inference components into a single gpu. If you can, always use straight DDP.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
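For readers who hit this on an older transformers version, here is a rough sketch (not the library code) of the suggestion above to set the hidden-size-based ZeRO values by hand instead of `auto`, since `EncoderDecoderConfig` has no top-level `hidden_size`. The derivation of `hidden_size` and the formulas are assumptions based on the documented ZeRO-3 defaults; check them against your transformers/DeepSpeed versions:

```python
# ds_config: the dict loaded from deepspeed_zero2.json; config: the EncoderDecoderConfig
# built in the training script above. Both names come from the report, not the library.
hidden_size = max(config.encoder.hidden_size, config.decoder.hidden_size)

zero = ds_config["zero_optimization"]
zero["reduce_bucket_size"] = hidden_size * hidden_size
# The next two keys only matter for ZeRO stage 3; shown here for completeness.
zero["stage3_prefetch_bucket_size"] = int(0.9 * hidden_size * hidden_size)
zero["stage3_param_persistence_threshold"] = 10 * hidden_size
```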
transformers
22,175
closed
wav2vec processor batching logic is too restrictive
### System Info transformers version at the time of writing is `4.26.1` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python # !pip install transformers torch # in jupyter notebook from transformers import Wav2Vec2Processor import torch import numpy as np batch = 4 # create Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft") # generate random input tensor input_tensor = torch.tensor(np.random.randn(batch, 10, 10)) # pass input tensor through processor output = processor(input_tensor, return_tensors="pt") print(output["input_values"].shape) # 1 x 4 x 10 x 10 ``` ### Expected behavior It seems reasonable that an input could be of shape `batch x d_1 x d_2 ...` and I'd expect the output to have the same shape. However, [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L184) the code has an extra check for type list or tuple that results in it misinterpreting the input as a single example. Side note: I'm unsure what to infer from the type checking logic because it doesn't match the type hints i.e. `tuple` isn't supposed to be possible here anyways, according to the `__call__` type hint. I did check some other examples of `is_batched` appearing in the `src/transformers/models` directory and they look similar but unexpected.
03-15-2023 08:27:46
03-15-2023 08:27:46
cc @sanchit-gandhi @ArthurZucker <|||||>Hey @LWprogramming! Thanks for the comprehensive issue description - I agree that the logic for checking if the input `is_batched` is broken when the input is a batched numpy array, e.g. the feature extractor **should** set `is_batched=True` when the numpy array is 2-d, but currently does not: https://github.com/huggingface/transformers/blob/57f25f4b7fb85ff069f8701372710b2a3207bf2d/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L184-L187 Would you like to open a PR to fix this? 🤗 We can just do one additional check to set `is_batched = True` if the input is a 2-d numpy array. Note that it should be 2-d with dims [batch, audio_input] and not 3-d since we only expect mono channel input to the feature extractor.<|||||>Hey @LWprogramming! Just checking-in to see whether you'd like to open a PR to fix the issue you uncovered? Think you're in a good position to submit a clean fix! 🤗<|||||>Hi! I'll take care of it, got preoccupied with some irl stuff that came up the past few weeks but things should be settling down soon :)<|||||>That's awesome @LWprogramming! Excited for the PR 🤗 Feel free to tag me as soon as it's ready and I'll get you a review<|||||>marking as still active, just fixing up the PR
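A minimal sketch of the check discussed above, treating a 2-D numpy array as a batch (the helper name is illustrative; the real fix lives inside the feature extractor's `__call__`):

```python
import numpy as np

def looks_batched(raw_speech):
    # Mono audio only: a 2-D array is interpreted as [batch, samples];
    # higher dimensions should be rejected before this point.
    is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
    return is_batched_numpy or (
        isinstance(raw_speech, (list, tuple))
        and isinstance(raw_speech[0], (np.ndarray, tuple, list))
    )
```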
transformers
22,174
closed
[WIP] Add codegeex
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-15-2023 08:09:09
03-15-2023 08:09:09
transformers
22,173
closed
Fix DeiT Masked Image Modeling output
# What does this PR do? Fixes the output of `DeiTForMaskedImageModeling` and `TFDeiTForMaskedImageModeling` by replacing the inaccurate `MaskedLMOutput` with the `MaskedImageCompletionOutput` class. Follow-up PR on #22152 ## Before submitting - [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-15-2023 07:42:55
03-15-2023 07:42:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22173). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,172
closed
How to save the model after using checkpoint to continue training
I am trying to continue training my model (gpt2) from a checkpoint. However, the error occurred in the trainer area when I finished training. I save a checkpoint for every 5000 steps, but because there is not enough space, the previous checkpoint will be deleted. The last checkpoint I used was `checkpoint-70000`, but when I saved it, I had to find `checkpoint-35000`, but I had already deleted `checkpoint-35000` Then how can I save the final trained model? ``` training_args = TrainingArguments( output_dir=model_checkpoints_dir, # The directory where the model checkpoints and other output files will be saved. num_train_epochs=5, # The total number of training epochs to run. per_device_train_batch_size=64, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=200, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=model_log_dir, # directory for storing logs prediction_loss_only=True, save_steps=5000, logging_steps=5000, evaluation_strategy="steps", save_strategy="steps", load_best_model_at_end=True ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above data_collator=data_collator, train_dataset=tokenized_train_dataset, # training dataset eval_dataset=tokenized_eval_dataset ) trainer.train(resume_from_checkpoint = True) ``` error message ``` /usr/local/lib/python3.9/dist-packages/transformers/trainer.py in _sorted_checkpoints(self, output_dir, checkpoint_prefix, use_mtime) 2758 # Make sure we don't delete the best model. 2759 if self.state.best_model_checkpoint is not None: -> 2760 best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) 2761 for i in range(best_model_index, len(checkpoints_sorted) - 2): 2762 checkpoints_sorted[i], checkpoints_sorted[i + 1] = checkpoints_sorted[i + 1], checkpoints_sorted[i] ValueError: '/content/drive/MyDrive/exp/model_checkpoints/checkpoint-35000' is not in list ``` Thanks a lot for the help.
03-15-2023 07:27:34
03-15-2023 07:27:34
It looks like you accidentally deleted the best checkpoint. To fix this and be able to resume training, I'd advise manually modifying the trainer state (which should be stored in a file named `trainer_state.json` in the checkpoint-70000 folder) and removing the key for `best_model_checkpoint`.<|||||>I'm having the same problem. For me it was because I cloned the repository to a different path (on a different machine). Since `best_model_checkpoint` contains an absolute path, the Trainer cannot find the checkpoint at that path. I fixed it by manually editing `best_model_checkpoint` in `trainer_state.json`. Is there a way to store a relative path instead of an absolute one?
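A minimal sketch of that manual fix, assuming the checkpoint layout from the report above (the path and the choice to null out the key are illustrative):

```python
import json

# Path taken from the report; adjust to your own checkpoint folder.
state_path = "/content/drive/MyDrive/exp/model_checkpoints/checkpoint-70000/trainer_state.json"

with open(state_path) as f:
    state = json.load(f)

# Either drop the stale reference or point it at a checkpoint that still exists on disk.
state["best_model_checkpoint"] = None

with open(state_path, "w") as f:
    json.dump(state, f, indent=2)
```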
transformers
22,171
closed
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0']
### System Info transformers version 4.7 , pytorch2.0, python3.9 run the example code in document of transformers ```shell rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` error info ```shell /nfs/v100-022/anaconda3/lib/python3.9/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use-env is set by default in torchrun. If your script expects `--local-rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions warnings.warn( WARNING:torch.distributed.run: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** Traceback (most recent call last): File "/nfs/v100-022/run_clm.py", line 772, in <module> main() File "/nfs/v100-022/run_clm.py", line 406, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/nfs/v100-022//anaconda3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 341, in parse_args_into_dataclasses raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0'] ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1.Install the following configuration environment: python 3.9 pytroch 2.1 dev trasnsformers 4.7 2. then run code ``` rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` 3. then you can get error. ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0'] ### Expected behavior 1.Install the following configuration environment: python 3.9 pytroch 2.1 dev trasnsformers 4.7 2. then run code ``` rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` 3. then you can get error. ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0']
03-15-2023 07:12:21
03-15-2023 07:12:21
Hi @bestpredicts, thanks for raising this issue. I can confirm that I see the same error with the most recent version of transformers and pytorch 2. I wasn't able to replicate the issue with pytorch 1.13.1 and the same transformers version. Following the messages in the shared error output, if I set `LOCAL_RANK` in my environment and pass in `--use-env` I am able to run on pytorch 2. ``` LOCAL_RANK=0,1 CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 --use-env examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ```<|||||>Also note that `torch.distributed.launch` is deprecated and `torchrun` is preferred in PyTorch 2.0.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Has anyone solved this problem? I got the same problem when using torchrun or torch.distributed.launch: self.local_rank is -1. My env is pytorch==2.0.0 and transformers==4.30.1.<|||||>You might try migrating to torchrun? i.e.: ``` torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` for reference on migrating: https://pytorch.org/docs/stable/elastic/run.html<|||||>Have you solved this problem? I came up with the same error when using deepspeed. The solutions provided above didn't work at all. :(
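For scripts that cannot move to `torchrun` right away, a small illustrative workaround (a user-side patch, not a library feature) is to normalize the dash-style flag before `HfArgumentParser` sees it, or to read the rank from the environment as the PyTorch warning suggests:

```python
import os
import sys

# torch.distributed.launch on PyTorch 2.x injects "--local-rank=N", while
# TrainingArguments expects "--local_rank=N"; rewrite the flag before parsing.
sys.argv = [arg.replace("--local-rank", "--local_rank") for arg in sys.argv]

# Alternatively, with --use-env / torchrun, the rank is available here:
local_rank = int(os.environ.get("LOCAL_RANK", -1))
```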
transformers
22,170
closed
resume_from_checkpoint is not working with Deepspeed
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: true - Using distributed or parallel set-up in script?: true ### Who can help? @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. trainer with deepspeed using stage_2 or 3 (I think it does not matter) 2. set save_strategy = 'epoch', i.e., save every epoch 3. you cannot use ``resume_from_checkpoint`` to resume the training procedure 4. why? in ``transformers/deepspeed.py/L359``, ``deepspeed_engine.load_checkpoint`` actually needs an argument called ``tag`` or you need have a "latest file" in the checkpoint directory. However, neither of them are supported by trainer. The trainer does not provide a chance to pass ``tag`` , and does not store a "latest file" in the checkpoint directory. 5. related to ``deepspeed/runtime/engine.py/L2712`` ### Expected behavior It should work well as passed ``resume_from_checkpoint``.
03-15-2023 03:22:50
03-15-2023 03:22:50
Hi @Raibows, you're giving me no reproduction so there is nothing I can do here as i have no idea what you did. there is no need for tag, deepspeed's `save_checkpoint` creates a `latest` file and uses that to find the checkpoint for resume. I can send you to a test that validates the resume works - give it a try: https://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/tests/deepspeed/test_deepspeed.py#L636-L691 To run this test do: ``` RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py -k test_can_resume_training_normal ``` or is it something specific to `save_strategy = 'epoch'`? I have only used the default strategy - can you change my test to reproduce your Issue?<|||||>Hi, thanks for your reply. But actually I don't have any "latest" file in the ``output_dir``. Here is the screenshot: ![image](https://user-images.githubusercontent.com/37944786/225226161-147b8002-4192-4dea-98dc-eaaefed15a57.png) And in every checkpoint-xxx directory, we have ![image](https://user-images.githubusercontent.com/37944786/225226325-a96d00bc-f991-405f-b3f5-d6bf8e8bc9c3.png) In the globalstepxxx directory, we have ![image](https://user-images.githubusercontent.com/37944786/225226642-bd703406-5b7a-4e2d-872e-96d4b05f813c.png) If I pass ``resume_from_checkpoint = output_dir/checkpoint-xxx``, it will throw the error I mentioned. Thanks for your test scripts. I will try it later. <|||||>I totally believe you that this is the case. But I don't have access to your computer. So if there is a bug I need to be able to reproduce it. which means that ideally you'd send a small script that shows the problem. As I suggested perhaps you could adapt the test I sent to you to your particular situation and use it as the reproduction that demonstrates the problem.<|||||>Hi, sorry for the late response. I test many times and find it very weird. Now the latest file exists. But "zero_pp_rank_x_mp_rank_00_optim_states.pt" this file has some problems in saving. I have posted a gist in https://gist.github.com/Raibows/73c3a6105c0226669910d5608f5efb4e If you set the num of training samples to very few, which indicates that the ``save_checkpoint`` will be executed very soon after running. All the ckpts are saved very well. However, if you let it run for a longer time. it will only save 1 ckpt "zero_pp_rank_0_mp_rank_00_optim_states.pt" no other "zero_pp_rank_1_mp_rank_00_optim_states.pt, zero_pp_rank_2_mp_rank_00_optim_states.pt" ...... which should be saved. This will cause the fatal error when you are trying to resume from them. Comment out L59 in the gist and run it with ``` torchrun --nproc_per_node 4 test_save.py ```<|||||>I'm not sure if you had the same issue, but when I tried to resume a deepspeed run, it would try to load the right checkpoint but fail to find a `pytorch_model.bin` file. So I just ran the `zero_to_fp32.py` script to create the checkpoint and resuming with deepspeed just worked, it loaded the optimizer states / model states from the `global_stepXXX/` folder. I'm on transformers version `4.27.1`<|||||>@Raibows, thank you for providing an easy to use repro - you can use `model_name = 'patrickvonplaten/t5-tiny-random'` while debugging this as it'd be much faster and not require many resources. I did run it for a bit and had no problems on 2 gpus. As we are only integrating Deepspeed and the call to `save_checkpoint` is done correctly I think - you probably will have a better luck asking directly at https://github.com/microsoft/DeepSpeed/issues while providing your repro script. 
You can validate that the integration is calling it on all ranks: https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/trainer.py#L2297-L2300 If you'd like to debug this yourself, I'd add a debug print that would include a rank `self.args.local_rank` - so that you'd want to see that each rank calls this deepspeed method. If it gets called on all ranks for each save, then you definitely have to take it up to the Deepspeed team. If it doesn't, which I doubt, but who knows - do get back to me. Honestly, I have seen some reports in the past where users had some weird filesystem issues where files would not appear. Is it your personal computer that you're running this one, or some particular cloud?<|||||>@stas00 Hi, really thanks for your help! Now I find the reason, finally. It's my own code's fault. Since I use time-based as the path of output directory. However, we use distributed launch to launch the script which causes each process will have a little bit different path of output directory. I'm going to close this issue. Thanks!<|||||>Glad you figured it out, @Raibows! That's why we have unit tests that help us know whether the feature is working correctly and when it doesn't for a user often it has to do with some peculiarity of user's code.
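Since the root cause turned out to be a timestamp-based output directory that differed slightly between processes, here is a sketch of one way to avoid it: pick the run directory on rank 0 and broadcast it, so every rank writes its ZeRO shards under the same checkpoint path (the helper name and layout are illustrative):

```python
import os
from datetime import datetime

import torch.distributed as dist

def shared_output_dir(base="./hf_output"):
    # Every rank builds a candidate stamp, but rank 0's value overwrites the others,
    # so all processes end up saving into the same directory.
    stamp = [datetime.now().strftime("%Y_%m_%d-%H_%M_%S")]
    if dist.is_available() and dist.is_initialized():
        dist.broadcast_object_list(stamp, src=0)
    return os.path.join(base, stamp[0])
```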
transformers
22,169
closed
Wrong "view source" links on main docs
### System Info NA ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When viewing the latest docs, the "view source" link expands to an invalid GitHub link. e.g. https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertConfig View source link: https://github.com/huggingface/transformers/blob/v4.27.0/src/transformers/models/bert/configuration_bert.py#L72 Since the `v4.27.0` tag does not exist, GitHub reports an invalid link. It should be `main` instead. I believe this is a problem with how the auto-generated docs are configured. ### Expected behavior Show the correct link to the source code.
03-15-2023 01:22:25
03-15-2023 01:22:25
cc @stevhliu <|||||>The tag will be out in a couple of hours. The doc for the v4.27.0 release was pushed last night and the rest of the release will follow this morning. Thanks for catching this so quickly!
transformers
22,168
closed
(Not So) Bad words list for text generation
### Feature request Support a soft penalization logits processor in the transformers generate method (extends NoBadWordsLogitsProcessor). ### Motivation - The [NoBadWordsLogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.NoBadWordsLogitsProcessor) forbids the generation of certain tokens _in absolute terms_ by overwriting the logits to minus infinity - The request is to add a softer version of this, one in which certain tokens are penalized or boosted but _only mildly_ - This is in the spirit of the `logit_bias` parameter in the generate methods [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) (OpenAI) and [here](https://docs.cohere.ai/reference/generate) (Cohere) - Possible use cases include, but are not limited to: enhance extractiveness during document summarization by boosting tokens present in the input and style guidance by penalizing/boosting the appropriate vocabulary ### Your contribution **Overview** - A new class is defined as `BendLogitsProcessor` based on the current `NoBadWordsLogitsProcessor` class - The current argument `bad_words_ids` is enriched to include a float value per list of tokens_ids, aka the penalization/boosting score. Positive large values encourage the token to be generated while negative large values do the opposite - Penalization/boosting scores are unbounded but could be later scaled as it seems to be the case in the implementations referenced above, e.g. `logit bias` is in [-10,10] [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) and [-100,100] [here](https://docs.cohere.ai/reference/generate) - Observe that `NoBadWordsLogitsProcessor` behavior could be recovered just by explicitly setting penalization/boosting scores to float(“-Inf”) **The new class** This is very much the same as `NoBadWordsLogitsProcessor`, I tried to keep as much as possible intact. There might be a more efficient implementation. ```py class BendLogitsProcessor(LogitsProcessor): """ [`LogitsProcessor`] that softly penalizes or boosts certain token/s Args: bend_list (`List[Union[float, List[int]]]`): List of list of lists with penalization/boosting coefficients and list of token ids. In order to get the token ids of the words, use `tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids`. eos_token_id (`int`): The id of the *end-of-sequence* token. """ def __init__(self, bend_list: List[Union[float, List[int]]], eos_token_id: int): self.bend_list = bend_list coefs = [coef for coef,tok in self.bend_list] words_ids = [tok for coef,tok in self.bend_list] if not isinstance(bend_list, List) or len(bend_list) == 0: raise ValueError(f"`bend_list` has to be a non-empty list, but is {bend_list}.") if any(not isinstance(word_ids, list) for word_ids in words_ids): raise ValueError(f"`words_ids` has to be a list of lists, but is {words_ids}.") if any( any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in word_ids) for word_ids in words_ids ): raise ValueError( f"Each list in `words_ids` has to be a list of positive integers, but is {words_ids}." 
) if any(not isinstance(coef, float) for coef in coefs): raise ValueError(f"`coefs` has to be a float, but is {coefs}.") words_ids = list(filter(lambda token_seq: token_seq != [eos_token_id], words_ids)) self.words_id_length_1, self.coefs_length_1 = [],[] self.words_id_length_greater_than_1, self.coefs_length_greater_than_1 = [],[] for coef,word in zip(coefs,words_ids): if len(word) == 1: self.words_id_length_1.append(word[0]) self.coefs_length_1.append(coef) else: self.words_id_length_greater_than_1.append(word) self.coefs_length_greater_than_1.append(coef) for token_seq in self.words_id_length_greater_than_1: if len(token_seq) == 0: raise ValueError(f"Words token sequences {words_ids} cannot have an empty list") def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: masks_length_1, scores_length_1 = [], torch.zeros_like(scores) masks_length_greater_than_1, scores_length_greater_than_1 = [], torch.zeros_like(scores) if len(self.words_id_length_1) > 0: for word_id,coef in zip(self.words_id_length_1,self.coefs_length_1): mask = self._get_mask_length_1(scores,word_id) masks_length_1.append(mask) if coef >= 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef) if coef < 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef)) scores_length_1 += score if len(self.words_id_length_greater_than_1) > 0: for word_ids,coef in zip(self.words_id_length_greater_than_1,self.coefs_length_greater_than_1): mask = self._get_mask_length_greater_than_1(input_ids.tolist(),scores,word_ids) masks_length_greater_than_1.append(mask) if coef >= 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef) if coef < 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef)) scores_length_greater_than_1 += score masks_all_lengths = masks_length_1 + masks_length_greater_than_1 one_large_mask = torch.zeros_like(scores).bool() for mask in masks_all_lengths: one_large_mask = torch.bitwise_or(one_large_mask,mask) base_scores = scores.masked_fill(one_large_mask,0.) new_scores = base_scores + scores_length_1 + scores_length_greater_than_1 return new_scores def _get_mask_length_1(self, scores: torch.FloatTensor, word_id:List[int]) -> torch.BoolTensor: mask = torch.zeros(scores.shape[1]) mask[word_id] = 1 return mask.unsqueeze(0).to(scores.device).bool() def _tokens_match(self, prev_tokens: List[int], tokens: List[int]) -> bool: if len(tokens) == 0: return True elif len(tokens) > len(prev_tokens): return False else: return prev_tokens[-len(tokens) :] == tokens def _calc_word_ids(self, prev_input_ids: List[List[int]], word_ids:List[int]) -> Iterable[int]: tokens = [] for prev_input_ids_slice in prev_input_ids: tokens_slice = [] if self._tokens_match(prev_input_ids_slice, word_ids[:-1]): tokens_slice.append(word_ids[-1]) tokens.append(tokens_slice) return tokens def _get_mask_length_greater_than_1(self, input_ids: list, scores: torch.FloatTensor, word_ids:List[int]) -> torch.BoolTensor: dynamic_tokens = self._calc_word_ids(input_ids, word_ids) mask_list = [] for idx, batch_tokens in enumerate(dynamic_tokens): for token in batch_tokens: # Eliminates invalid bad word IDs that are over the vocabulary size. 
if token <= scores.shape[1]: mask_list.append([idx, token]) else: logger.error( f"An invalid bad word ID is defined: {token}. This ID is not contained in the " "vocabulary, and is therefore ignored." ) if not mask_list: mask = torch.zeros_like(scores).bool() else: mask = torch.LongTensor(mask_list) indices = torch.ones(len(mask)) mask = ( torch.sparse.LongTensor(mask.t(), indices, scores.size()) .to(scores.device) .to_dense() .bool() ) return mask ``` **An example** Take the summarization example in BART documentation [here](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration.forward.example). Set `add_prefix_space=True` in the tokenizer and remove the `max_length = 20` in the generate method call. ```py from transformers import AutoTokenizer, BartForConditionalGeneration model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", add_prefix_space=True) ARTICLE_TO_SUMMARIZE = ( "PG&E stated it scheduled the blackouts in response to forecasts for high winds " "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ) inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt") # Generate Summary summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0) tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] ``` This yields the following summary: > Nearly 800 thousand customers were scheduled to be affected by the shutoffs. PG&E stated it scheduled the blackouts in response to forecasts for high winds. At this point the new logits processor class is applied. The objective will be to make the model output the number of customers affected as digits and replace the word “shutoffs”. We do so by penalizing the token ids for “thousand” and “shutoffs” while boosting the ones for “shutdowns”. ```py logits_processor = LogitsProcessorList( [ BendLogitsProcessor( bend_list = [[-10000.,[7673]], # thousand [1000.,[5001, 29]], # shutdowns [-1000000.,[2572, 10816]], # shutoffs [-1000000.,[2572, 1529]], # shutoffs ], eos_token_id=model.config.eos_token_id ) ] ) # Generate Summary summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, logits_processor=logits_processor, renormalize_logits=True) tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] ``` If we call the the summary generation again, this time including the logits processor and renormalizing we get: > Nearly 800,000 customers were scheduled to be affected by the shutdowns. PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.
03-14-2023 23:32:05
03-14-2023 23:32:05
cc @gante <|||||>Hey @iiglesias-asapp 👋 Thank you for the suggestion! Before we dive into adding code, a disclaimer -- one of the current problems with `.generate()` is that there are too many options, scaring users away from the docs. This means that I will be conservative before giving the green light to add more options 🤗 We do have an option to have control over extractive vs abstraction summarization, the `encoder_repetition_penalty` ([docs](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.encoder_repetition_penalty)). This is a multiplicative factor to the logits that increases/decreases the odds of reusing the tokens in the input. Do you have more use cases in mind, where your suggestion would be critical?<|||||>Hi @gante! Thanks for the reply. I agree that there many options already 😅 I wasn't thinking of this as an additional option but more like an "upgrade" of the existing feature since it gives the user a bit more flexibility while keeping the previous functionality, i.e. tokens are boosted/penalized instead of forced/forbidden and users willing to forbid the appearance of certain token can still input float("-Inf") as score. Main use case in mind was cheap model customization by a set of score,[tokens]. I guess, more generally, it is desirable to allow the model to generate a certain token if there is no natural replacement for it and discourage it otherwise; the sort of soft penalization that is allowed in other APIs.<|||||>@iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗 Let's not close this issue and do the following. If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me) @iiglesias-asapp does it sound good to you? <|||||>Sounds good! Thanks for considering it @gante <|||||>Please add this because I have alpaca model and it was trained on a bad dataset with many cases of input and output fields having "<noinput" and "nooutput>" text in them which causes my LLM to constantly respond with those words :/<|||||>@teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available <|||||>> @teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available Oh can you point me to where/how I can use the bad_words_list edit: nvm found it ty<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> custom logits processors > @iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗 > > Let's not close this issue and do the following. If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me) > > @iiglesias-asapp does it sound good to you? @gante There are many use cases: 1) Increase length of generated text, by making **end of text** token less probable. 2) If you use few shot learning, and you have problem with labels that use used, you can increase probability of a label. for example: instruction: write me a joke about cars answer: some response instruction: write me a joke about [subject2] answer: some response instruction: write me a joke about [subject3] answer: some response then you need to increase probability for answer: in some cases, when not everything work as it should. encoded norepeat engrams is one option, but it sometimes generates strange text. 2a) The same thing if you do a few shot learning to generate html text. For example, when you want text not to repeat, if you set params for that, then also html tags wont be repeated and text will be strangely formated. So then you just increase the probability of html tags and you get much better output. 3) paraphrasing for dataset multiplying to get more unique paraphrases, it is good to lower probability of original words 4) openai has this feature, i really doubt they would implement something, and write documentation for that, if they did not think that some users would use it. <|||||>@gante Here comes the 10th reaction! Thanks for considering adding this feature. Really need this since I'm currently working on building APIs similar to [OpenAI API](https://platform.openai.com/docs/api-reference/completions/create#completions/create-logit_bias). It would be convenient if it is officially supported!<|||||>As promised, I've added it to my queue! 🫡 <|||||>Hey everyone 👋 A way to bias specific tokens has been added on `main`. You can check its docs [here](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.SequenceBiasLogitsProcessor) (which contains a thorough example) and the corresponding `GenerationConfig` flag [here](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.sequence_bias). Let me know if it is not working properly 🤗 Tagging the folks that have upvoted the comment above and/or replied on this thread for visibility: @iiglesias-asapp @teknium1 @liaeh @skevy @talkhaldi @francislabountyjr @tristanvdb @thjwhite @NanoCode012 @zhuhl98 @Oxi84 @andyh0913 @Vonathar
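For reference, a minimal sketch of how the biasing flag announced in the last comment can reproduce the earlier example from this thread. The `sequence_bias` argument name is taken from the docs linked above and is only available in recent versions, so double-check it against your installed release:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", add_prefix_space=True)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

def token_ids(word):
    # helper: map a word to the tuple of token ids expected by `sequence_bias`
    return tuple(tokenizer([word], add_special_tokens=False).input_ids[0])

# negative values discourage a sequence, positive values encourage it
sequence_bias = {token_ids("thousand"): -10.0, token_ids("shutoffs"): -10.0, token_ids("shutdowns"): 10.0}

inputs = tokenizer(["PG&E scheduled the blackouts in response to forecasts for high winds."], return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, sequence_bias=sequence_bias)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```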
transformers
22,167
closed
No checkpoint saved during training
### System Info - `transformers` version: 4.10.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.9.0 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction . ### Expected behavior There is no checkpoint saved during the training. My arguments are: --do_train --train_file xxxxx --output_dir xxxx --model_name_or_path bert-base-uncased --eval_steps 1 --num_train_epochs 5 --per_device_train_batch_size 24 --learning_rate 5e-5 --max_seq_length 16 --pooler_type cls --temp 0.05 --downscale_dim 64 --fp16 --num_hidden_layers 4 --keep_in_memory True --save_strategy "steps" --save_steps 20 Can anyone give me a hint on it? Many thanks for considering my request.
03-14-2023 23:24:51
03-14-2023 23:24:51
Hi @lxlxlxx thanks for raising this issue. Could you share which script you're running e.g. `run_mlm.py`? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
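For reference, a rough sketch of how the save-related flags from the report above map onto `TrainingArguments` when the `Trainer` is used directly (GPU-only flags such as `--fp16` are omitted so the snippet runs anywhere):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    do_train=True,
    num_train_epochs=5,
    per_device_train_batch_size=24,
    learning_rate=5e-5,
    save_strategy="steps",
    save_steps=20,
)
# with these values a checkpoint should be written to out/checkpoint-20, out/checkpoint-40, ...
print(args.save_strategy, args.save_steps)
```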
transformers
22,166
closed
Mismatch between CLIP fast and non-fast tokenizers
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.8.1 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a minimal working example to show the mismatch: ```python from transformers import AutoTokenizer tokenizer_fast = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=True) tokenizer = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=False) text = "You should've done this" print(tokenizer(text)) print(tokenizer_fast(text)) print(tokenizer(text) == tokenizer_fast(text)) # Outputs: # {'input_ids': [49406, 592, 1535, 262, 563, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} # {'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} # False ``` It appears to stem from the `'ve` token. ### Expected behavior The non-fast tokenization should match the fast tokenization (https://github.com/huggingface/tokenizers)
03-14-2023 21:55:23
03-14-2023 21:55:23
cc @ArthurZucker <|||||>Note: If you would like to get matching tokenizing before a fix goes in, installing `ftfy` first should do it. Initially looked to fix this specific issue around apostrophes, but it became apparent there were other potential formatting inconsistencies. For example, running the below also shows differences in things like `'ll` and `!!` tokenization: ```py from transformers import AutoTokenizer tokenizer_fast = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=True) tokenizer = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=False) text = "A\n'll !!to?'d''d of, can't." print(tokenizer(text)) print(tokenizer_fast(text)) print(tokenizer(text) == tokenizer_fast(text)) # Outputs: # {'input_ids': [49406, 320, 262, 865, 256, 256, 531, 286, 262, 323, 262, 262, 323, 539, 267, 753, 262, 339, 269, 49407], ...} # {'input_ids': [49406, 320, 1342, 748, 531, 13610, 323, 8445, 323, 539, 267, 753, 713, 269, 49407], ...} # False ``` I put up a first go at fixing this. It's ready for review but not merge until we pick a change to apply more broadly<|||||>Hey! I can't reproduce the issue, when I ran your code I got: ```python {'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} {'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} ```<|||||>I just updated to `transformers-4.27.2`, but it still produces the incorrect output. Are you sure you are running the non-fast tokenizer @ArthurZucker ? As stated above by @connor-henderson, it's probably because you have `ftfy` installed, which I assume will use its own basic tokenizer.<|||||>Hey Arthur and xenova, in my case uninstalling ftfy or commenting out [these import lines](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/clip/tokenization_clip.py#L313-L317) leads to repro, I believe since [this conditional](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/clip/tokenization_clip.py#L469-L470) determines whether the BasicTokenizer is used for CLIPTokenizer.<|||||>I made sure that I was using fast yes 😉 Though it is our goal to have the same output from fast and non-fast, I don't really know why there is this ftfy used here. But yes, this is most probably not used in the `fast` tokenization. This also means that the expected behaviour should probably be the one that normalized the text with `ftfy`. This is something that is going to be hard to port to tokenizer depending on what kind of normalization is going on. <|||||>@ArthurZucker I believe it is the opposite, the mismatch happens when ftfy is not installed. (@connor-henderson correct me if I misunderstood your posts).<|||||>> @ArthurZucker I believe it is the opposite, the mismatch happens when ftfy is not installed. (@connor-henderson correct me if I misunderstood your posts). Yes, this is correct. I don't have ftfy installed, and I get the mismatch.<|||||>@sgugger yes thanks that is what I was saying. I think this comes down to the expected behavior when using the BasicTokenizer generally. If it is supposed to match the fast tokenizer output I believe we have a bug. But if its not, and it's just expected to split naively on punctuation then I don't think we have a bug and I should close my PR<|||||>I think this PR is good for ppl who do not have `ftfy`! 
Thanks to both of you for pointing this out; I will be reviewing the PR!<|||||>Commenting to mark as not stale :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
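A quick way to check a given environment for the mismatch (for example before and after installing `ftfy`, or once the linked PR lands) is to diff the two tokenizers over a few tricky strings; a small sketch:

```python
from transformers import AutoTokenizer

checkpoint = "openai/clip-vit-base-patch16"
fast = AutoTokenizer.from_pretrained(checkpoint, use_fast=True)
slow = AutoTokenizer.from_pretrained(checkpoint, use_fast=False)

samples = ["You should've done this", "A\n'll !!to?'d''d of, can't.", "hello world"]
for text in samples:
    a, b = slow(text).input_ids, fast(text).input_ids
    # report only the strings where the two implementations disagree
    if a != b:
        print(f"mismatch on {text!r}:\n  slow: {a}\n  fast: {b}")
```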
transformers
22,165
closed
TypeError: zero_grad() got an unexpected keyword argument 'set_to_none'
### System Info Getting below error while runing Bert pretraining using Huggingface trainer with deepspeed 0.8.2 version and pytorch 2.0RC Traceback (most recent call last): File "pretrain_glue.py", line 124, in <module> result = trainer.train() File "/opt/conda/envs/env/lib/python3.8/site-packages/transformers/trainer.py", line 1631, in train return inner_training_loop( File "/opt/conda/envs/env/lib/python3.8/site-packages/transformers/trainer.py", line 1814, in _inner_training_loop model.zero_grad(set_to_none=True) TypeError: zero_grad() got an unexpected keyword argument 'set_to_none' @sgugger , could this be due to below change? Enforce same behavior as PyTorch 2.0 for older versions (https://github.com/huggingface/transformers/pull/22136) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run Bert pretrain with HF trainer and deepspeed 0.8.2 version on Pytorch 2.0 RC build ### Expected behavior run with no errors
03-14-2023 20:34:45
03-14-2023 20:34:45
Yes, this has already been fixed on main. You just need to pull from source :-)<|||||>Ah, thank you for the quick fix. :)
transformers
22,164
closed
Error when running pipeline with whisper and using the 'return_dict_in_generate=True' option
### System Info - `transformers` version: 4.26.1 - Platform: macOS-13.1-x86_64-i386-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sanchit-gandhi @Narsil When running a simple whisper pipeline, e.g., using the options 'return_dict_in_generate': True and 'output_scores': True, e.g., ``` from pathlib import Path from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig audio_path = 'xxx.wav' generate_kwargs = {'temperature': 1, 'max_length': 448, 'return_dict_in_generate': True, 'output_scores': True} pipe = pipeline( model="openai/whisper-small", chunk_length_s=10, framework="pt", batch_size=1 ) print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) ``` I am getting the following error: ``` Traceback (most recent call last): File "/Users/sofia/PycharmProjects/openAI-whisper/test4.py", line 39, in <module> print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 378, in __call__ return super().__call__(inputs, **kwargs) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1076, in __call__ return next( File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 611, in postprocess items = outputs[key].numpy() AttributeError: 'ModelOutput' object has no attribute 'numpy' ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Run the code ``` from pathlib import Path from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig audio_path = 'xxx.wav' generate_kwargs = {'temperature': 1, 'max_length': 448, 'return_dict_in_generate': True, 'output_scores': True} pipe = pipeline( model="openai/whisper-small", chunk_length_s=10, framework="pt", batch_size=1 ) print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) ``` ### Expected behavior I expect to get the text result accompanied with the timestamps and the prediction scores
03-14-2023 18:06:02
03-14-2023 18:06:02
cc @ArthurZucker <|||||>Hey! Thanks for reporting. This is normal as the `pipeline` does not support returning the usual `dictionary`. We should probably prevent this behaviour (raise an error when `return_dict_in_generate` is set in the pipeline) cc @Narsil this is a duplicate of another issue but I can't find it! edit: #21185 <|||||>Best recommendation in the mean time is to define a custom pipeline, where you process the inputs before feeding them to `super.preprocess`! <|||||>> Best recommendation in the mean time is to define a custom pipeline, where you process the inputs before feeding them to `super.preprocess`! Thanks for your reply, I now understand the issue. However, I am not sure how to preprocess the input to achieve this. I can see the output and the dictionary still contains the tokens (inside the ModelOutput): ``` {'tokens': ModelOutput([('sequences', tensor([[50258, 50342, 50358, 50364, 1044, 291, 337, 1976, 0, 50864, 50257]])), ('scores', (tensor([[2.3064, -inf, -inf, ..., 2.8053, 2.7866, 3.3406]]), tensor([[3.7724, -inf, -inf, ..., 3.1328, 3.6590, 3.8489]]), tensor([[ -inf, -inf, -inf, ..., -7.8979, -7.7944, -11.4352]]), tensor([[-5.0041, -inf, -inf, ..., -5.5928, -5.6329, -6.7607]]), tensor([[16.9060, -inf, -inf, ..., -inf, -inf, -inf]]), tensor([[ 4.7684, -inf, -inf, ..., -4.7718, -4.7031, -6.6440]]), tensor([[ 3.5967, -inf, -inf, ..., -0.2559, -0.4887, -1.7837]]), tensor([[ 1.7885, -inf, -inf, ..., -8.9040, -8.4750, -12.0667]]), tensor([[ -inf, -inf, -inf, ..., -15.8636, -15.3132, -18.1436]]), tensor([[ -inf, -inf, -inf, ..., 13.3971, 12.9880, 10.2999]])))]), 'stride': (160000, 0, 26667)} ``` and where it fails is when it tries to execute `outputs["tokens"].numpy()`. Would you mean maybe post process the output? <|||||>Hi @panagiotidi , thanks for raising this issue. Yes, in this case as the error is being raise in the `postprocess` method, this is the one you'd need to adapt. Generally for custom workflows, it's probably easier to start with lower-level API such as `AutoModel` to define your steps and then move to something like a custom pipeline. If all that you want to do automatic speech recognition with the audio input, removing `return_dict_in_generate` from the `generate_kwargs` will work i.e.: ```python from pathlib import Path from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig audio_path = 'xxx.wav' generate_kwargs = {'temperature': 1, 'max_length': 448, 'output_scores': True} pipe = pipeline( model="openai/whisper-small", chunk_length_s=10, framework="pt", batch_size=1 ) print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) ```<|||||>I am actually trying to implement the `--logprob_threshold` from the original paper of whisper as I would like to be able to experiment with it when transcribing. There is a relevant discussion [here](https://github.com/openai/whisper/discussions/654#discussioncomment-4510801), but as you said too, in order to implement in a pipeline, a custom implementation of post process is needed on the output results. Will you maybe include in later versions?<|||||>@panagiotidi I don't know of any plans to add this at the moment. As this is a specific generation case, it's not something that's likely to be included into a pipeline. If I've understood `--logprob_threshold`, then the desire is to stop generation if the average logprob is below a certain threshold. 
In this case, a custom [`Constraint` class](https://huggingface.co/docs/transformers/v4.27.1/en/internal/generation_utils#transformers.Constraint) could be implemented and passed in to the `generate_kwargs`. Questions about an implementation of this is probably best placed in the [forums](https://discuss.huggingface.co/). As mentioned above, when applying custom code, it is easier to work from the `AutoModel` level first e.g. [adapting the examples in the docs](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
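Along the lines of the `AutoModel`-level suggestion above, here is a sketch of how per-token scores (and hence an average log-probability similar to whisper's `--logprob_threshold`) can be obtained outside the pipeline. The silent audio array is a stand-in; in practice you would load real audio yourself:

```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# stand-in for real audio: 1 second of silence at 16 kHz
audio_array = torch.zeros(16000).numpy()
inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")

out = model.generate(inputs.input_features, return_dict_in_generate=True, output_scores=True)
text = processor.batch_decode(out.sequences, skip_special_tokens=True)[0]

# per-token log-probabilities of the generated sequence
scores = model.compute_transition_scores(out.sequences, out.scores, normalize_logits=True)
avg_logprob = scores.mean().item()  # rough analogue of the --logprob_threshold criterion
print(text, avg_logprob)
```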
transformers
22,163
closed
Revert "Enforce same behavior as PyTorch 2.0 for older versions"
Reverts huggingface/transformers#22136 As we discovered this was breaking the DeepSpeed integration (and thus potential other integrations wrapping the model), it's safer to revert this change for now.
03-14-2023 17:21:14
03-14-2023 17:21:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,162
closed
Run all tests by default
# What does this PR do? This PR makes the default value for the crosstests PT<>TF and PT<>FLAX true by default. This way when a user runs tests locally, all tests are run (the only exception being the hub staging tests, which require setting an env variable anyway to use moon-staging instead of moon-landing). In the CI however each job runs the same tests as before since the env variables are set at False by default.
03-14-2023 15:27:49
03-14-2023 15:27:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,161
closed
GPT Neox rotary embedding does not work with padding left
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-1097-aws-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker, @younesbelkada, @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", device_map="auto") f_not_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=False, return_tensors="pt")) f_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=True, pad_to_multiple_of=256, return_tensors="pt")) torch.testing.assert_allclose(f_not_padded.logits[:, -1], f_padded.logits[:, -1]) # AssertionError: Tensor-likes are not close! # Mismatched elements: 6057 / 50288 (12.0%) # Greatest absolute difference: 0.0003177821636199951 at index (0, 4649) (up to 1e-05 allowed) # Greatest relative difference: 1.5682868874196898 at index (0, 30410) (up to 0.0001 allowed) ``` The problem is exacerbated in bfloat16 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", device_map="auto", torch_dtype=torch.bfloat16) f_not_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=False, return_tensors="pt")) f_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=True, pad_to_multiple_of=256, return_tensors="pt")) torch.testing.assert_allclose(f_not_padded.logits[:, -1], f_padded.logits[:, -1]) # AssertionError: Tensor-likes are not equal! # Mismatched elements: 49417 / 50288 (98.3%) # Greatest absolute difference: 1.154541015625 at index (0, 50271) # Greatest relative difference: 2058.906976744186 at index (0, 29917) ``` ### Expected behavior padding left should have no influence on the resulting logits. While the differences do not look like much, it has a huge impact on generation.
03-14-2023 14:59:44
03-14-2023 14:59:44
Hey thanks for reporting! <|||||>It is possible that this is not the root cause but there is an issue with these lines: ```python offset = 0 if has_layer_past: offset = layer_past[0].shape[-2] seq_len += offset cos, sin = self.rotary_emb(value, seq_len=seq_len) query, key = apply_rotary_pos_emb(query_rot, key_rot, cos, sin, offset=offset) ``` `offset` and `seq_len` are not computed correctly when you have padding. On a sidenote, it is impossible to have a single value for `offset` as different sequences in the batch might have different length and therefore different offsets when padding left.<|||||>We use padding left extensively on the serving side as we have a dynamic batching logic that batches sequence of very different lengths together. While the pad==256 example above seems extreme in isolation, it is completely normal when serving. We sometimes even go higher in chat applications where a member of the batch has a very large history (> 1000 tokens) and other sequences only just started ( ~ 40 tokens). We also serve all the models in bfloat16 if available and we almost always use sampling which amplifies the logits issue even more.<|||||>Hey everyone! Yes, it is correct, it is pretty much the same issue as I reported [here](https://github.com/huggingface/transformers/pull/21853#issuecomment-1461028782) -- we should be passing `position_ids` all the way down to the attention layer, and compute the sequence length from it. We have an open PR to fix the same issue with GPT-J (#22069), I'll make sure it is ported to GPT NeoX when it is merged. We are currently ironing out `torch.fx` issues (adding the correct behavior makes the tensors dynamic, which blocks existing features)<|||||>Hi @OlivierDehaene, I'm actually in the middle of porting the fix from #22069 to GPT-Neox too, since I was also interested in that one (in parallel with other things including resolving this torch.fx issue). Also for reference there's a similar existing issue which went stale: https://github.com/huggingface/transformers/issues/18999<|||||>Hi @njhill! Nice thanks for working on this! For now I have a fix on my text-generation-inference fork as we have multiple neox in prod and I need a fix asap. It's sensibly the same to yours I think. ```python class RotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings, base=10000, device=None): super().__init__() inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) self.register_buffer("inv_freq", inv_freq) # Build here to make `torch.jit.trace` work. 
self.max_seq_len_cached = max_position_embeddings self.cos_cached = None self.sin_cached = None @staticmethod def rotate_half(x): """Rotates half the hidden dims of the input.""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., x.shape[-1] // 2 :] return torch.cat((-x2, x1), dim=-1) @staticmethod def _create_cos_sin(inv_freq, max_position_embeddings, dtype, device): t = torch.arange(max_position_embeddings, device=inv_freq.device, dtype=inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) return emb.cos().to(device).to(dtype), emb.sin().to(device).to(dtype) def forward(self, q, k, position_ids, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] if seq_len > self.max_seq_len_cached or self.cos_cached is None or self.sin_cached is None: if seq_len > self.max_seq_len_cached: self.max_seq_len_cached = seq_len self.cos_cached, self.sin_cached = self._create_cos_sin( self.inv_freq, self.max_seq_len_cached, q.dtype, q.device ) cos = self.cos_cached[position_ids].unsqueeze(1) sin = self.sin_cached[position_ids].unsqueeze(1) q_embed = (q * cos) + (rotate_half(q) * sin) k_embed = (k * cos) + (rotate_half(k) * sin) return q_embed, k_embed ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
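For illustration, a small sketch of the `position_ids` computation that such a fix relies on: positions are derived from the attention mask so that left padding does not shift the rotary phases (the model must of course accept `position_ids` and pass them down to the rotary embedding, which is exactly what the fixes above add):

```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1],   # left-padded short sequence
                               [1, 1, 1, 1, 1]])  # full-length sequence
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)  # padded positions get a dummy value
print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```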
transformers
22,160
closed
[i18n-it] Translating docs to it
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to guides - [] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)
03-14-2023 14:34:47
03-14-2023 14:34:47
Please stop opening these issues without filling in the template. This is spamming every maintainer of the library.
transformers
22,159
closed
Load optimizer state on CPU to avoid CUDA OOM
# What does this PR do? As reported in #22123, resuming from a checkpoint can make a user run out of GPU memory if the optimizer state is loaded directly onto the GPU. This PR loads it on CPU by default; it is then copied over to the proper device by PyTorch in `load_state_dict`. This might be a bit slower at checkpoint-loading time (so only once) but will benefit users training large models. Fixes #22123
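A minimal PyTorch sketch of the loading pattern this PR switches to (the file name is illustrative): the state dict is materialized on CPU, and `load_state_dict` then moves each tensor to its parameter's device.

```python
import torch

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.AdamW(model.parameters())
torch.save(optimizer.state_dict(), "optimizer.pt")

# load on CPU first instead of directly onto the GPU, so resuming does not spike GPU memory
state = torch.load("optimizer.pt", map_location="cpu")
optimizer.load_state_dict(state)
```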
03-14-2023 14:19:17
03-14-2023 14:19:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,158
closed
to_pil - don't rescale if int and in range 0-255
# What does this PR do? #21969 Introduced a bug where binary masks would have their values rescaled to between 0-255. This was because of [this part](https://github.com/huggingface/transformers/blob/4063fd9cba6b72ebfd5c663a307ab9d5ff1a153d/src/transformers/image_transforms.py#L161) of the logic check. The original assumption was that inputs with their values between 0-1 would be rescaled images with float pixels. However, binary masks aren't and shouldn't be rescaled. We now check first if the input is of type uint8. Then check if any precision is lost when converting to int and that the int values are in the valid range 0-255 and finally if float values are between 0-1. Fixes #22147 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
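To make the ordering concrete, here is a rough sketch of the decision logic described above (an illustration under the stated assumptions, not the library implementation):

```python
import numpy as np

def should_rescale(image: np.ndarray) -> bool:
    if image.dtype == np.uint8:
        return False  # already valid 8-bit pixel values
    if np.allclose(image, image.astype(int)) and image.min() >= 0 and image.max() <= 255:
        return False  # integer-valued arrays (e.g. binary masks) are left untouched
    if image.min() >= 0 and image.max() <= 1:
        return True   # float pixels in [0, 1] get scaled up to [0, 255]
    raise ValueError("Could not infer how to convert this array for PIL")

print(should_rescale(np.zeros((4, 4), dtype=np.uint8)))         # False
print(should_rescale(np.array([[0.0, 1.0], [1.0, 0.0]])))       # False: a binary mask
print(should_rescale(np.random.rand(4, 4).astype(np.float32)))  # True
```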
03-14-2023 14:05:52
03-14-2023 14:05:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,157
closed
LayoutLM model only able to classify individual words instead of entire sections
### Model description Model I am using (LayoutLM ...): Here, I would like to develop a custom resume parser model that can accurately predict the sections for *EDUCATION*, *SKILLS*, and *EXPERIENCE* based on the resume. I have fine-tuned the *LayoutLMv3* model on a custom dataset that is similar to the *FUNSD* dataset. Although the LayoutLM model can predict education keywords, it only does so at the word level. For instance, if the resume states "My education is in computer engineering from LD College Ahmedabad," the model will label "computer" and "engineering" as *EDUCATION*. However, I aim to have all classified words in a single section rather than in individual word sections. Therefore, here are some random screenshots of the LayoutLM model output. ![Screenshot from 2023-03-13 18-36-47](https://user-images.githubusercontent.com/105478351/225013023-a1ea58b0-4e26-49f2-9cd5-cdebc8365d55.png) And here, I would like the output to include box coordinates for the EDUCATION section as well as the SKILLS section, identified by their respective keywords. ![Screenshot from 2023-03-13 18-32-05](https://user-images.githubusercontent.com/105478351/225013056-31073b3f-605d-4728-8058-0300bb8fd977.png) Note: I have attempted to use the *Layout Parser* model with the *PublayNet* dataset. However, this model was unable to accurately predict and classify the sections for *EDUCATION*, *SKILLS,* *EXPERIENCE*, etc. If there are any other models that would be suitable for my use case, please kindly suggest them. *Thank you all for your help.* ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
03-14-2023 13:23:00
03-14-2023 13:23:00
Hi @keval2415 - thanks for opening an issue. Can you please use the [forum](https://discuss.huggingface.co/) for questions like this? We try to keep the GitHub issues reserved for bugs or feature requests.
transformers
22,156
closed
[i18n-it] Translating docs to it
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to Guide - [] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx) ## How-to guides - [] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx)
03-14-2023 11:32:15
03-14-2023 11:32:15
transformers
22,155
closed
Fix GPT2 position ids issues
# What does this PR do? Follow up PR of #21080
03-14-2023 10:51:50
03-14-2023 10:51:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22155). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,154
closed
data collator or tokenizer.pad has bug when add new features to data
### System Info transformers 4.26.1, mac m1, python 3.9.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') if 'new_feature' not in tokenizer.model_input_names: tokenizer.model_input_names.append('new_feature') samples = [ {'input_ids': torch.arange(3), 'new_feature': torch.arange(8)}, {'input_ids': torch.arange(5), 'new_feature': torch.arange(11)}, ] batch = tokenizer.pad(samples) ``` ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`new_feature` in this case) have excessive nesting (inputs type `list` where type `int` is expected). ### Expected behavior no error. batch['input_ids'].shape == (2, 5) batch['new_feature'].shape == (2, 11)
03-14-2023 10:41:51
03-14-2023 10:41:51
Hi @lanlanlan3 - thanks for opening this issue. The reason this error is being thrown is that `"new_feature"` won't be padded and therefore the tensors can't be concatenated to create a batch. This can be seen if the inputs passed are lists and the return type not specified: ```python >>> samples = [ ... {'input_ids': list(range(8)), 'new_feature': list(range(3))}, ... {'input_ids': list(range(11)), 'new_feature': list(range(5))}, ... ] >>> batch = tokenizer.pad(samples, max_length=12, padding='max_length', return_tensors=None) {'input_ids': [[0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]], 'new_feature': [[0, 1, 2], [0, 1, 2, 3, 4]]} ``` This occurs for a few reasons: * All input features are expected to be padded by the same amount per sample. The amount of padding needed is calculated based on the padding strategy and the length of the `input_ids`. For example, if `padding='max_length'`, then for sample 0, the padding to be added is calculated as `max_length - sample_length = 5 - 3 = 2` for all features (`input_ids` and `new_feature`). However `new_features` isn't padded at all because of the next point. * The padding behaviour for `"new_feature"` is undefined i.e. what should the sequence be padded with? You can see how this is controlled in the padding internals [here](https://github.com/huggingface/transformers/blob/ebdb185befaa821304d461ed6aa20a17e4dc3aa2/src/transformers/tokenization_utils_base.py#L3379). This behaviour from the tokenizer is expected. Note: `model_input_names` defines the expected inputs to the model during the forward pass. Therefore changing this will mean that the tokenizer outputs aren't in the expected format for a model in the transformers library. To modify it, it should be passed when creating the tokenizer, rather than modifying the class attribute directly: ```python tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_input_names=["input_ids", "new_feature"]) ``` If the outputs of the tokenizer are being passed to a custom model that ingests `input_ids` and `new_feature`, and the model expects them to be of different length, then I would suggest defining your own tokenizer class which subclasses `PreTrainedTokenizer` or `BertTokenizer`; or a custom data collator which performs the expected padding behaviour. <|||||>> Hi @lanlanlan3 - thanks for opening this issue. > > The reason this error is being thrown is that `"new_feature"` won't be padded and therefore the tensors can't be concatenated to create a batch. This can be seen if the inputs passed are lists and the return type not specified: > > ```python > >>> samples = [ > ... {'input_ids': list(range(8)), 'new_feature': list(range(3))}, > ... {'input_ids': list(range(11)), 'new_feature': list(range(5))}, > ... ] > >>> batch = tokenizer.pad(samples, max_length=12, padding='max_length', return_tensors=None) > {'input_ids': [[0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]], 'new_feature': [[0, 1, 2], [0, 1, 2, 3, 4]]} > ``` > > This occurs for a few reasons: > > * All input features are expected to be padded by the same amount per sample. The amount of padding needed is calculated based on the padding strategy and the length of the `input_ids`. For example, if `padding='max_length'`, then for sample 0, the padding to be added is calculated as `max_length - sample_length = 5 - 3 = 2` for all features (`input_ids` and `new_feature`). However `new_features` isn't padded at all because of the next point. 
> * The padding behaviour for `"new_feature"` is undefined i.e. what should the sequence be padded with? You can see how this is controlled in the padding internals [here](https://github.com/huggingface/transformers/blob/ebdb185befaa821304d461ed6aa20a17e4dc3aa2/src/transformers/tokenization_utils_base.py#L3379). > > This behaviour from the tokenizer is expected. > > Note: `model_input_names` defines the expected inputs to the model during the forward pass. Therefore changing this will mean that the tokenizer outputs aren't in the expected format for a model in the transformers library. To modify it, it should be passed when creating the tokenizer, rather than modifying the class attribute directly: > > ```python > tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_input_names=["input_ids", "new_feature"]) > ``` > > If the outputs of the tokenizer are being passed to a custom model that ingests `input_ids` and `new_feature`, and the model expects them to be of different length, then I would suggest defining your own tokenizer class which subclasses `PreTrainedTokenizer` or `BertTokenizer`; or a custom data collator which performs the expected padding behaviour. ![image](https://user-images.githubusercontent.com/28173281/225197675-c8bb0283-50ff-45a7-a7c8-6cfae645f468.png)
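Building on the custom data collator suggestion above, here is a sketch that pads each feature to its own maximum length (the padding values are arbitrary choices for illustration):

```python
import torch

def collate_with_new_feature(samples, pad_token_id=0, new_feature_pad_value=0):
    # Pads `input_ids` and `new_feature` independently to their own max lengths.
    def pad_stack(key, pad_value):
        max_len = max(len(s[key]) for s in samples)
        return torch.stack([
            torch.nn.functional.pad(torch.as_tensor(s[key]), (0, max_len - len(s[key])), value=pad_value)
            for s in samples
        ])
    return {
        "input_ids": pad_stack("input_ids", pad_token_id),
        "new_feature": pad_stack("new_feature", new_feature_pad_value),
    }

batch = collate_with_new_feature([
    {"input_ids": torch.arange(3), "new_feature": torch.arange(8)},
    {"input_ids": torch.arange(5), "new_feature": torch.arange(11)},
])
print(batch["input_ids"].shape, batch["new_feature"].shape)  # torch.Size([2, 5]) torch.Size([2, 11])
```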
transformers
22,153
closed
Specify metric aggregation strategy when evaluating on multiple validation datasets using `Trainer` class
### Feature request Specify metric aggregation strategy when evaluating on multiple validation datasets using `Trainer` class. This metric aggregation strategy will output a `metric: Dict[str, float]` by aggregating the metrics computed from the multiple validation sets. ### Motivation When evaluating on multiple validation datasets using `Trainer`, the metrics used to compute the best checkpoint is the last metric. However, the user will likely want an aggregation of metrics from the multiple validation datasets. https://github.com/huggingface/transformers/blob/ff8870350151091d3d8b2af4c1c0fa3ebcc1052a/src/transformers/trainer.py#L2224-L2235 ### Your contribution I would be happy to make a PR. My idea is as follows: 1. Collate all metrics from evaluation dataset 2. Aggregate the metrics based on a user-specified strategy (e.g. average a user-specified common metric between all evaluation dataset; here, I can leverage the TrainingArgument `metric_for_best_model `). This step should return a `Dict[str, float]` so as to be compatible with the `metric` type.
03-14-2023 10:29:58
03-14-2023 10:29:58
This is a bit too niche to be added to the Trainer; it might be best to write your own subclass for this.
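For anyone landing here, a rough sketch of such a subclass, assuming a single shared metric name ("accuracy") and an `eval_datasets` dict supplied by the user; this is an illustration of the subclassing suggestion above, not an officially supported pattern:

```python
import numpy as np
from transformers import Trainer

class MultiEvalTrainer(Trainer):
    def __init__(self, *args, eval_datasets=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.eval_datasets = eval_datasets or {}

    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        all_metrics, per_set = {}, []
        for name, dataset in self.eval_datasets.items():
            # evaluate each validation set with its own metric prefix
            metrics = super().evaluate(dataset, ignore_keys=ignore_keys, metric_key_prefix=f"eval_{name}")
            all_metrics.update(metrics)
            per_set.append(metrics[f"eval_{name}_accuracy"])
        # aggregated value that metric_for_best_model="accuracy" can point to
        all_metrics["eval_accuracy"] = float(np.mean(per_set))
        self.log(all_metrics)
        return all_metrics
```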
transformers
22,152
closed
Create MaskedImageCompletionOutput and fix ViT docs
# What does this PR do? Fixes the output class and docs of `ViTForMaskedImageModeling `which returns images of shape (batch_size, num_channels, height, width) whereas MaskedLMOutput logits are of shape (batch_size, seq_length, vocab_size). **Notes:** - We have no checkpoints for `ViTForMaskedImageModeling` and the docstrings use the pretrained ViTModel combined with random head parameters. I will open a separate PR for the other affected model - `DeiTForMaskedImageModeling`, for which no checkpoints are available either. - Swin has its own MaskedImageOutput class but only has base model checkpoints (trained on the masked image modeling task) and no task-specific checkpoints. - I'm planning to create a masked-image-completion pipeline and add Swin and ICT (once Sheon's PR is merged). CC: @sheonhan ## Before submitting - [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-14-2023 10:00:39
03-14-2023 10:00:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pinging @amyeroberts for the final approval<|||||>masked-image-completion pipeline sounds awesome 🙌
transformers
22,151
closed
Translation Italian: perf_train_cpu and perf_train_cpu_many
## What does this PR do? Italian translation of doc related to the preprocessing of :hugs: Transformers. * updated _toctree.yml * added perf_train_cpu.mdx * added perf_train_cpu_many.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [[#17459](https://www.linkedin.com/feed/hashtag/?keywords=%2317459)](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu and @MKhalusova @omarespejel
03-14-2023 09:43:45
03-14-2023 09:43:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,150
closed
Returning n-best hypotheses from Wav2Vec2ProcessorWithLM decoder
### Feature request Currently, the [Wav2Vec2ProcessorWithLM](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L67) decode function returns [only the best hypothesis](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L572). Shall we extend its functionality and make it return n-best hypotheses, logit_scores, lm_scores, word_offsets so that people could rescore these hypotheses with a larger LM. For example, take a look at [NeMo article](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html#neural-rescoring) regarding the rescoring of n-best hypotheses. ### Motivation I suppose many people use n-gram models during the shallow fusion stage, the n-grams models are a good fit during the beam search because they are fast. People perform the rescoring of the n-best hypotheses with a larger LM (using them during the decoding is too slow so it makes sense to apply them during the rescoring of n-best hypotheses that come out of the ASR system). They fuse the score which comes out of the ASR with the perplexity-like score from the LM. If this external model is trained on the domain data it will drastically improve the WER of the resulting model. ### Your contribution If it sounds like a good feature to you, that can be potentially adopted let me know and I'll prepare the PR 😃
03-14-2023 08:46:49
03-14-2023 08:46:49
cc @sanchit-gandhi <|||||>> please, tell me your opinion on this feature :)
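A rough sketch of the n-best rescoring workflow this request describes, going through the underlying pyctcdecode decoder directly; the checkpoint name and the random audio are just examples, and `score_with_large_lm` is a hypothetical placeholder for an external LM scorer:
```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

speech = np.random.randn(16000)  # placeholder for one second of real 16 kHz audio
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode_beams comes from the underlying pyctcdecode decoder; each beam is
# (text, last_lm_state, word_frames, logit_score, lm_score)
beams = processor.decoder.decode_beams(logits[0].cpu().numpy(), beam_width=25)
n_best = [(text, logit_score) for text, _, _, logit_score, _ in beams[:10]]

def score_with_large_lm(text):
    # placeholder for an external neural LM used for rescoring
    return 0.0

best_text, _ = max(n_best, key=lambda h: h[1] + score_with_large_lm(h[0]))
```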
transformers
22,149
closed
Failed to dump torchscript model for GPT2
### System Info python version, 3.7 transformers version, 4.26.1 ### Who can help? @ArthurZucker, @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` model_inputs = dict( input_ids=torch.zeros((1, 1024), dtype=torch.long).cuda(), attention_mask=torch.ones((1, 1024), dtype=torch.long).cuda()) model = GPT2LMHeadModel.from_pretrained(args.model, torchscript=True).eval().cuda() def dict_test(example_inputs: Dict[str, torch.Tensor]): return model(input_ids=example_inputs['input_ids'], attention_mask=example_inputs['attention_mask']) model_scripted = torch.jit.trace(dict_test, model_inputs) torch.jit.save(model_scripted, "traced_bert.pt") ``` I used the above code to generate a GPT2 TorchScript model and got the following error: `RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient` when execution reaches ``` "./transformers/models/gpt2/modeling_gpt2.py", line 830, in forward inputs_embeds = self.wte(input_ids) ``` ### Expected behavior Generate a GPT2 TorchScript model.
03-14-2023 07:39:48
03-14-2023 07:39:48
Hey! Thanks for reporting. This is indeed a bug, will see what we can do to fix that!<|||||>Hi @zhuango, I think your problem is more related to how you trace the model rather than the transformers library itself. Since you're tracing a function (not the model itself), JIT trace knows nothing about model parameters, but instead sees them as unnamed tensors that take part in the forward pass calculations. As the origins of these tensors are unknown, it cannot build an autograd chain for them, but since those tensors have autograd enabled, it shows this error. So, I see the following ways you could solve this: 1. Disable autograd for all model parameters before tracing: ``` model.requires_grad_(False) ``` 2. Transform your `dict_test `function into a model that wraps the original model and trace it (this way JIT will discover model parameters and corresponding tensors and will be able to use autograd for them): ``` class DictModel(torch.nn.Module): def __init__(self, model): super().__init__() self.model = model def forward(self, inputs: Dict[str, torch.Tensor]): return self.model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask']) dict_model = DictModel(model) model_scripted = torch.jit.trace(dict_model, inputs) ``` 3. Just trace the model itself sending input parameters as a tuple instead of a dict (but I guess you intentionally want to use a dict to make the resulting torchscript usage easier?): ``` torch.jit.trace(model, ...) ``` @ArthurZucker, let me know if you think this needs any additions to the library itself or documentation?<|||||>Hi @vvmnnnkv, thanks a lot. That works for me.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,148
closed
Update 2 doctest expected values for torch 2.0.0
# What does this PR do? 2 doctests need to update their expected values with torch 2.0.0. (same reason as in #21975)
03-14-2023 06:45:35
03-14-2023 06:45:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,147
closed
OneFormerProcessor、MaskFormerImageProcessor will cause errors if segmentation_maps only have elements 0 and 1
### System Info transformers-4.26.0 does not have this bug, but transformers-4.27.0.dev0 does. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation, OneFormerImageProcessor, OneFormerConfig from transformers import Mask2FormerImageProcessor, Mask2FormerForUniversalSegmentation from PIL import Image import requests import torch import numpy as np import matplotlib processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny",num_text=134,do_reduce_labels=True,) image_np=np.random.randint(0,255,(3,512,512)) #segmentation_maps only have elements 0 and 1 segmentation_maps = torch.randint(0, 2, (image_np.shape[1], image_np.shape[2]), dtype=torch.long) inst2class={1: 4} raw_inputs=processor.image_processor([image_np], task_inputs=["panoptic"], segmentation_maps=[segmentation_maps], return_tensors="pt", instance_id_to_semantic_id=inst2class, do_reduce_labels=True, ignore_index=None) ``` #ERROR ``` E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py:419: FutureWarning: The `reduce_labels` argument is deprecated and will be removed in v4.27. Please use `do_reduce_labels` instead. warnings.warn( Traceback (most recent call last): File "E:\condaenv\yaogan\lib\site-packages\IPython\core\interactiveshell.py", line 3460, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-ed9733992fe8>", line 23, in <module> raw_inputs=processor.image_processor([image_np], File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 524, in __call__ return self.preprocess(images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, **kwargs) File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 708, in preprocess encoded_inputs = self.encode_inputs( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 962, in encode_inputs masks, classes = self.convert_segmentation_map_to_binary_masks( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 516, in convert_segmentation_map_to_binary_masks return convert_segmentation_map_to_binary_masks( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 288, in convert_segmentation_map_to_binary_masks class_id = instance_id_to_semantic_id[label + 1 if reduce_labels else label] KeyError: 255 ``` This bug is caused by the **resize** function of OneFormerProcessor, which converts segmentation_maps to PIL.Image and then back to np.ndarray. After **resize**, segmentation_maps have elements 0 and 255, so the bug arises. ### Expected behavior Fix this bug before releasing 4.27.0 as the stable version; transformers-4.26.0 does not have this bug.
03-14-2023 05:49:05
03-14-2023 05:49:05
cc @amyeroberts @alaradirik
transformers
22,146
closed
Missing parameter settings in BLIP 2
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.19.0-31-generic-x86_64-with-glibc2.36 - Python version: 3.10.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 2.0.0.dev20230209+cu118 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following code is working but cannot process parameters like nucleus sampling, length penalty or temperature as provided in the original project from Salesforce. (to test out at https://huggingface.co/spaces/Salesforce/BLIP2) ``` from transformers import Blip2Processor,AutoProcessor, Blip2ForConditionalGeneration processor3 = AutoProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map={'':torch.cuda.current_device()}) with torch.device("cuda"): model3 = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map={'':torch.cuda.current_device()}) raw_image = Image.open('UIDimgsages/x.jpg').convert('RGB') inputs = processor3(raw_image, return_tensors="pt").to(device, torch.float16) out = model3.generate(**inputs, max_length=64, min_length=12) blip2_output = processor3.decode(out[0], skip_special_tokens=True) print(blip2_output) ``` ### Expected behavior It should be possible to adjust all parameters which are given in the original BLIP 2 project. @ArthurZucker @amyeroberts Best regards Marc
03-14-2023 01:40:49
03-14-2023 01:40:49
Hey! Did you try playing with `generation_config` ? All the arguments that you are looking for can either be setup inside, or provided in the `generate` kwargs. Tempertature and penalty length are both availble 😉 not sure about nucleus sampling, but what you are looking for is probably [here](https://huggingface.co/docs/transformers/internal/generation_utils#utilities-for-generation) or [here](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.generate). Tell me if you can't find what you were looking for! <|||||>@ArthurZucker , that sounds wonderful! I have no idea why I missed this at least a dozen times. :) I will try it out later today. Thank you very much!<|||||>It's pretty hard for us to debug if there's no error message being given. :( Also, BLIP-2 should support all arguments of the `generate` method, and there's no need to use the `with torch.device("cuda")` context manager, as this might break the code. The `device_map` argument of the `from_pretrained` method will take care of placing everything on the appropriate device. Refer to the example code snippets shown at the bottom of the model cards like [this one](https://huggingface.co/Salesforce/blip2-flan-t5-xxl) on the hub regarding usage in 8bit.<|||||>Thank you, @NielsRogge , I tried the example code as the very first one but had the same problem. To confirm (myself) I tried it again. Without setting a minimum length the inference is so fast that I wouldn't mind about the performance issue. But when adding a minimum length then this bottleneck is really annoying. I can confirm that the cpu usage, while inference, is exactly 100% for the inference job. Either 1 cpu thread (from 24) is at 100% or one is at 66% and a second one at 33%. It is caused by the 8 bit setting. Can't anyone confirm (or refute) my observations?<|||||>Oh! I think I answered to the wrong comment or in the wrong thread. Or you answered in the wrong thread, @NielsRogge ! 😄 May be you refered to my other thread https://github.com/huggingface/transformers/issues/22011 ?<|||||>@Marcophono2 did you figure out nice settings to use? I also switched from BLIP codebase to using transformers version and the generated captions are not as good. There is a lot of repeating. I've tried with default, contrastive search, multinomial sampling, beam search, and diverse beam search and still haven't found settings that give consistent captions like the old BLIP library. <|||||>@pharmapsychotic Wow, Mr Clip-Interrogator! I love your tool and use it very often! Unfortunatelly I didn't find a solution for better control over BLIP2. Also I switched back from transformers to the native codebase since I realized that opt2.7b is working as good as the flan-t5-xxl (for me at least) and I am able to put it into my 4090 vram without needing a 8 bit conversion. The inference time is much shorter now, about 0.6 seconds, if using standard length. And now I have some more control over the settings, excluded length + senseful output. Meanwhile I think there is no really solution for it. The captions of the training sets are simply too small. The only thing I could imagine is to ask certain questions in a second step depending on the (short) standard output of BLIP2. Another "workaround" that I use meanwhile is to analyse an image additionally with CLIP related to pre-defined points of interest. Using feature extraction is a mighty tool for a lot of things here. 
For example to estimate the age of a person I use feature extraction + classification like `cls_namesA = ["age of 1 year","age of 2 years","age of 3 years","age of 4 years","age of 5 years","age of 6 years","age of 7 years","age of 8 years","age of 9 years","age of 10 years","age of 11 years","age of 12 years","age of 13 years","age of 14 years","age of 15 years","age of 16 years","age of 17 years","age of 18 years","age of 19 years","age of 20 years","age of 21 years","age of 22 years","age of 23 years","age of 24 years","age of 25 years","age of 26 years","age of 27 years","age of 28 years","age of 29 years","age of 30 years","age of 31 years","age of 32 years","age of 33 years","age of 34 years","age of 35 years","age of 36 years","age of 37 years","age of 38 years","age of 39 years","age of 40 years","age of 41 years","age of 42 years","age of 43 years","age of 44 years","age of 45 years","age of 46 years","age of 47 years","age of 48 years","age of 49 years","age of 50 years","age of 51 years","age of 52 years","age of 53 years","age of 54 years","age of 55 years","age of 56 years","age of 57 years","age of 58 years","age of 59 years","age of 60 years","age of 61 years","age of 62 years","age of 63 years","age of 64 years","age of 65 years","age of 66 years","age of 67 years","age of 68 years","age of 69 years","age of 70 years","age of 71 years","age of 72 years","age of 73 years","age of 74 years","age of 75 years","age of 76 years","age of 77 years","age of 78 years","age of 79 years","age of 80 years","age of 81 years","age of 82 years","age of 83 years","age of 84 years","age of 85 years","age of 86 years","age of 87 years","age of 88 years","age of 89 years","age of 90 years","age of 91 years","age of 92 years","age of 93 years","age of 94 years","age of 95 years","age of 96 years","age of 97 years","age of 98 years","age of 99 years","age of 100 years","age of 101 years","age of 102 years","age of 103 years"]` with a filtering and second classification in a second step. That works extremly fast and well! Also for other points of interest. I found out that ViT-B-32 brings the best results. `modelC, vis_processors2, txt_processors2 = load_model_and_preprocess("clip_feature_extractor", model_type="ViT-B-32", is_eval=True, device=device)` Best regards Marc <|||||>Thanks for reporting, we are looking into why this is the case. cc @gante <|||||>Hi, I wonder how should I do if I would like to generate multiple captions for each image? For example, we could use "use_nucleus_sampling" in Lavis version of BLIP2 to accomplish that, but I haven't found a way in hugging face version of BLIP2. generated_text = model.generate( {"image": image}, use_nucleus_sampling=True, num_captions=20 )<|||||>Oh yes one reason why results weren't the same was because you might have used different generation settings. Note that if you do `model.generate(**inputs)`, greedy decoding is used by default (which is the most simple form of generating text by taking the token with the highest probability at each time step). To match the settings in the BLIP-2 repo, which uses beam search by default as seen [here](https://github.com/salesforce/LAVIS/blob/5ee63d688ba4cebff63acee04adaef2dee9af207/lavis/models/blip2_models/blip2_opt.py#L149), you can do `model.generate(**inputs, num_beams=5, max_new_tokens=30, repetition_penalty=1.0, length_penalty=1.0, temperature=1)`. 
To use nucleus sampling, you can do `model.generate(**inputs, do_sample=True, top_p=0.9)`<|||||>I've had really good success with BLIP2 since it came out a couple months ago, and now am rebuilding my notebooks on transformers. However, being new to transformers, it would be nice having `num_captions` natively available, as it is this feature that makes captioning powerful on my end.<|||||>Hi @rodrigo-barraza this is supported, just pass in `num_return_sequences` as argument to the `generate()` method.<|||||>> Hi @rodrigo-barraza this is supported, just pass in `num_return_sequences` as argument to the `generate()` method. Oh wow, amazing. Not sure how I missed that. Thanks a bunch! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
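A consolidated sketch of the generation settings discussed in the comments above (beam search matching the quoted LAVIS defaults, then nucleus sampling with several captions per image); the checkpoint and image URL are only examples:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, Blip2ForConditionalGeneration

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

# beam search, matching the LAVIS defaults quoted above
out = model.generate(**inputs, num_beams=5, max_new_tokens=30, repetition_penalty=1.0, length_penalty=1.0)
print(processor.batch_decode(out, skip_special_tokens=True))

# nucleus sampling, returning several candidate captions per image
out = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=30, num_return_sequences=3)
print(processor.batch_decode(out, skip_special_tokens=True))
```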
transformers
22,145
closed
Update BridgeTowerForContrastiveLearning
# What does this PR do? 1. Use return_loss in BridgeTowerForContrastiveLearning 2. Update example in BridgeTowerForContrastiveLearning 3. Handles @amyeroberts suggestion from https://github.com/huggingface/transformers/pull/21964 to use smaller vocab_size for BridgeTowerTester ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @sgugger can you please help review this fix.
03-13-2023 23:32:55
03-13-2023 23:32:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank @ydshieh and @amyeroberts for your suggestions. In the latest commit, we addressed all of your feedbacks except one that regards to tests inspired from Clip's tests. @ydshieh Regarding tests, yes, you are right, it will require more works. We plan to have another PR to improve tests (following your suggestion) soon, yet this is not the main purpose of this PR. We would love to have you help us working on this, we truly appreciate your offer to help. It will be great if you can help to merge this PR without restructuring tests, and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks<|||||>Thank @amyeroberts for approving this PR. @ydshieh We have updated the 2 positions that you most recently suggested to change in the latest commit. Could you please review, approve, and merge the PR? We are looking forward to having this PR merged soon. Thanks a lot. <|||||>Thank you again, @tileintel and @abhiwand, for the work! Merge now 🚀 ! <|||||>> and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks Hi @tileintel. If you are willing and have time to work on the tests part, it would be really great. But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT?<|||||>> > and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks > > Hi @tileintel. If you are willing and have time to work on the tests part, it would be really great. But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT? @ydshieh thanks for pointing this out. We would really appreciate if you can work on restructuring tests. Thank you!<|||||>> > > and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks > > > > > > Hi @tileintel. If you are willing and have time to work on the tests part, it would be really great. But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT? > > @ydshieh thanks for pointing this out. We would really appreciate if you can work on restructuring tests. Thank you! Sure, thanks for letting me know 😊
transformers
22,144
closed
[trainer] add `--optim adamw_torch_fused` for pt-2.0+
This PR implement the discussion of https://github.com/huggingface/transformers/issues/22141 to 1. add support for the fused version of torch's AdamW. via `--optim adamw_torch_fused` 2. due it being too new, untested and a known bug the fix of which didn't make it into pt-2.0.0 - I did not make `--optim adamw_torch_fused` the default for pt>=2.0 - but prepared a place holder to do that for pt-2.1 instead. 3. added an assert for `--fp16` as apparently it's buggy with fp16/AMP https://github.com/huggingface/transformers/issues/22141#issuecomment-1467013132 fixed in https://github.com/pytorch/pytorch/issues/95781 but didn't make it into pt-2.0 cut off - this should be fixed in pt-2.0.1. But should already work with pt-2.1.0 nightly - except I think it's broken still (reported on pytroch slack). **Bottom line: for pt-2.0 `--optim adamw_torch_fused` will become available for any use except `--fp16` which will automatically be re-enabled upon pt-2.0.1 release, which probably will happen a month later. And we want to give `--optim adamw_torch_fused` time before making it a default.** ## Quality and Speed Comparison Benchmarks: ``` PYTHONPATH=src CUDA_VISIBLE_DEVICES=0 python scripts/benchmark/trainer-benchmark.py --base-cmd ' \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \ --do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \ --max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \ --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \ --source_prefix "translate English to Romanian: " --warmup_steps 50 \ --max_train_samples 20000 --dataloader_num_workers 2 --fp16 \ ' --target-metric-key train_samples_per_second --repeat-times 1 --variations '--optim adamw_torch_fused|--optim adamw_torch|--optim adamw_apex_fused' --report-metric-keys train_loss --base-variation '--optim adamw_torch' ``` and then adding `--bf16` and `--fp16` for the non-fp32 benchmarks. ### bf16 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 382.06 | 9 | 2.22 | | --optim adamw_torch | 350.22 | 0 | 2.22 | | --optim adamw_apex_fused | 386.81 | 10 | 2.22 | ### fp16 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 389.41 | 0 | 2.66 | | --optim adamw_torch | 389.37 | 0 | 2.55 | | --optim adamw_apex_fused | 399.27 | 3 | 2.53 | it's easy to see fp16 is broken - bad loss ### fp32 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 107.98 | 3 | 2.21 | | --optim adamw_torch | 105.14 | 0 | 2.21 | | --optim adamw_apex_fused | 108.20 | 3 | 2.21 | *** Setup: ``` Datetime : 2023-03-13 15:17:47 Software: transformers: 4.27.0.dev0 torch : 2.1.0.dev20230312+cu117 cuda : 11.7 python : 3.8.16 Hardware: 1 GPUs : NVIDIA A100 80GB PCIe, 79.21GB ``` Fixes: https://github.com/huggingface/transformers/issues/22141
03-13-2023 21:32:56
03-13-2023 21:32:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Arg, indeed let's wait a bit to make it the default then and leave the default untouched for now.<|||||>I made additional fixes, please see the OP for details. Of course, I'm open to suggestions to handle these multiple nuances differently.<|||||>It gives me the following error: `ValueError: --optim adamw_torch_fused with --fp16 requires PyTorch>2.0`<|||||>That's correct. The new fused version is broken for fp16/amp in pt-2.0. It's fixed in pt-nightly and will be fully available in 2.0.1 and/or 2.1.0.
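A minimal sketch of how the new option is selected through `TrainingArguments`; the output directory and batch size are placeholders, PyTorch >= 2.0 is assumed, and per the discussion above combining it with `--fp16` only works once the upstream fix is released:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",          # placeholder
    per_device_train_batch_size=32,   # placeholder
    bf16=True,
    optim="adamw_torch_fused",
)
```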
transformers
22,143
closed
[trainer] bug in resume and gas>1
https://github.com/huggingface/transformers/pull/22098 fixed the issue with GAS>1 at the epoch boundary. The same bug will still happen at the resume boundary, since `total_batched_samples` is currently reset to 0. So we need to save `total_batched_samples` and restore it from the saved value on resume.
03-13-2023 19:59:03
03-13-2023 19:59:03
Actually thought more about it this morning. The gradients accumulated before the save will be lost, so even if we save the `total_batched_samples` variable, we won't be able to resume training with the same gradients (they will be 0 instead of whatever was accumulated before the checkpoint). So I think leaving the situation as is is okay, there is a tiny bit of training lost but it shouldn't impact convergence. And we should document somewhere that we do not guarantee checkpoints will not yield the exact same model using `save_strategy="epoch"` in conjunction with gradient accumulation.<|||||>oh, I wrongly assumed that they were saved. Yes, then it makes sense. There will be no miscalculation then, just some very minor intermediary results loss. I think it's all good. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,142
closed
fix url post pt-2.0 release
change: https://pytorch.org/docs/2.0/generated/torch.compile.html?highlight=torch+compile#torch.compile to: https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile once the latter doc appears post pt-2.0 release for the trainer code here: https://github.com/huggingface/transformers/pull/22140 Actually it looks like all the good stuff is at https://pytorch.org/docs/master/dynamo/index.html - but again be wary of `/master`
03-13-2023 19:45:44
03-13-2023 19:45:44
transformers
22,141
closed
[Trainer] fused torch `AdamW` is added in 2.0
### Feature request There is a faster PyTorch version of AdamW: the fused one. It was added in Feb-23 and will be part of pt-2.0. `AdamW(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8…, fused=True).` There is also `Adam` (not `AdamW`), which has had a fused version since pt-1.13, but we don't expose this one. Now the question is this: should we add `--optim adamw_fused_torch` and allow it only for pt-2.0+ or silently switch `--optim adamw_torch` to the fused version when pt-2.0+ is used? cc: @sgugger
03-13-2023 18:54:25
03-13-2023 18:54:25
Good question! It might also revive the question of switching the optimizer from the HF implementation to the PyTorch one. Maybe we could add `adamw_fused_torch` as an option and then use for the default value of optim: - `adamw_hf` on PyTorch < 2.0 (as before) - `adamw_fused_torch` on PyTorch >= 2.0 so that users get the nice speed-up What do you think?<|||||>You know me, I'm all for progress so I'd vote for your proposal +1. The problem is that `adam_hf` != `adamw_*torch` algorithmically, so I will let you decide if you're OK with such a change of the default. I guess since `--adamw_hf` is still available - it's only a matter of communication to the community of the change of the default. In a way perhaps pt-2.0 release allows us to change things as well. ----------- Now practically let's implement `adamw_fused_torch`, then I can benchmark that it is faster while keeping loss not worse than `adamw_torch` and if all goes well then make it the default. Of course, it can be the same PR. or in 2 PRs I can work on it, unless you prefer to do it. <|||||>Discussed with Lysandre and he's also fine changing the default with the PyTorch 2.0 release (we will highlight it in the release notes as a breaking change so users are aware). I think users upgrading to PyTorch 2.0 will expect some differences anyway. Also note that we have used the PyTorch AdamW in all the Accelerate example and no one raised an issue of convergence problems or differences with the Trainer examples in the past year. As for the practicalities, go ahead with a PR if you want to do it. We just need to have it in main by tomorrow evening for the release :-) <|||||>I would like to add from the PyTorch side that fused AdamW is still in its nascent stage and has had recent fixes regarding grad scaling interaction on float16s which unfortunately were too recent to be included in PT 2.0 (https://github.com/pytorch/pytorch/pull/95847). If most hf models will not be using auto-mixed precision, this may not be an issue, but I did want to call out the risk and add a +1 for the safer "add as an option for now so people can enroll for faster implementations, but don't make fused adamw the default yet". <|||||>oh, thank you for the heads up, @janeyx99 - most models are using AMP. can the fix be pushed into 2.0.1? so should we not allow its use if AMP is used? edit: further discussion with Jane - only fp16/AMP is affected. --------------------- OK, so @sgugger - we have to recall the plan on making it the default. I already implemented the default change in https://github.com/huggingface/transformers/pull/22144 so roll back and make an issue to switch to it in pt-2.0.1 or pt-2.1.0? but perhaps it's a good idea to let it steep for a few months anyway. and need to deal with f16/AMP too<|||||>The issue I'm talking about is https://github.com/pytorch/pytorch/issues/95781, and the fix has already landed last week and will go into whatever next release we have. And the repro given in the issue was used with fp16/amp and interacts with grad scaler. I agree it may be wise to let the implementation bake in, but it would be great to get people trying it out to harden its implementation.<|||||>Thanks for chiming in @janeyx99 . Let's wait a bit to change the default then!<|||||>ok, so https://github.com/huggingface/transformers/pull/22144 adds `--optim adamw_torch_fused` - but will assert on fp16/AMP w/ pt-2.0.0 and is already programmed to allow it for pt-2.0.1. 
Making it a default will happen at pt-2.1.0 release or later.<|||||>I confirmed that the fp16 issue has been solved in today's nightly, the last fix was here https://github.com/pytorch/pytorch/pull/97415, discussed here https://github.com/pytorch/pytorch/issues/96755 As you can see the loss now matches for fp16/amp: | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 387.10 | 3 | 2.66 | | --optim adamw_torch | 377.61 | 0 | 2.66 | | --optim adamw_apex_fused | 389.49 | 3 | 2.66 | so our future proofing (allowing `--fp16` with `--adamw_torch_fused`) should work once pytorch makes a new release.
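For reference, a small stand-alone sketch of the fused flag discussed in this thread, using a toy module; the fused implementation assumes the parameters live on a CUDA device:
```python
import torch

model = torch.nn.Linear(10, 10).cuda()  # fused=True requires CUDA parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8, fused=True)

loss = model(torch.randn(4, 10, device="cuda")).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad(set_to_none=True)
```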
transformers
22,140
closed
Remove backend check for torch.compile
# What does this PR do? Since many of the choices in the list of backends for `torch.compile` do not work, this PR removes any check on the backend selected and lets PyTorch itself error if it is not happy. This also cleans up the integration a bit and marks everything as experimental.
03-13-2023 18:49:10
03-13-2023 18:49:10
_The documentation is not available anymore as the PR was closed or merged._
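For context, a minimal sketch of the underlying PyTorch call that the Trainer integration wraps; `bert-base-uncased` and the `inductor` backend are just examples:
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
# the backend string is now passed straight through; an unsupported choice
# errors inside PyTorch rather than in the Trainer
compiled_model = torch.compile(model, backend="inductor")
```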
transformers
22,139
closed
Update configuration_align.py (projected_dim=640)
Updated projected_dim=640 in the argument section of the AlignConfig class.
03-13-2023 17:47:16
03-13-2023 17:47:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,138
closed
Fix doc link for MGP-STR
# What does this PR do? This fixes the link to the doc.
03-13-2023 15:09:14
03-13-2023 15:09:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,137
closed
Return attention_mask in FeatureExtractionPipeline output
### Feature request Return `attention_mask` as one output of the FeatureExtractionPipeline so that padding token embeddings can be ignored. ### Motivation **Who can help?** @Narsil When using the `FeatureExtractionPipeline` to generate sentence embeddings, the input to the pipeline processes a raw sentence with a tokenizer. The output of the pipeline is a tensor of shape `[1, seq_len, hidden_dim]`. If the input is padded, `seq_len` is equal to the `max_length` of the tokenizer or longest seq in the batch. However, when performing mean pooling of individual word embeddings to obtain the sentence embedding, one may want to use `attention_mask` in order to ignore the padding token embeddings (see the mean pooling example below). But, FeatureExtractionPipeline does not return `attention_mask` as part of its output. ```python #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9) return sum_embeddings / sum_mask ``` ### Your contribution I can submit a pull request to the issue if it sounds good to you!
03-13-2023 15:04:03
03-13-2023 15:04:03
This doesn't seem like a use-case for the pipeline though. Since you want access to the process inputs, you should just used the tokenizer and the model directly.<|||||>Your comment makes sense. As my goal aligns with the pipeline's main functionality, I think I will subclass `FeatureExtractionPipeline` and make small modifications to achieve my goal. Feel free to close the issue. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
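A minimal sketch of the suggested approach, using the tokenizer and model directly so the attention mask stays available for the mean pooling shown in the issue; the sentence-transformers checkpoint is only an example:
```python
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # all token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

encoded = tokenizer(["a first sentence", "a second, much longer sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
sentence_embeddings = mean_pooling(output, encoded["attention_mask"])
```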
transformers
22,136
closed
Enforce same behavior as PyTorch 2.0 for older versions
# What does this PR do? The default of `set_to_none` will change in PyTorch 2.0 because it is slightly better in terms of memory consumption. This PR uses it for all versions of PyTorch in the Trainer.
03-13-2023 14:24:30
03-13-2023 14:24:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>hmm, that didn't work on DS side - perhaps they have an old pytorch - checking ``` 2023-03-13T21:25:45.7750922Z Traceback (most recent call last): 2023-03-13T21:25:45.7751317Z File "/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/examples/pytorch/translation/run_translation.py", line 664, in <module> 2023-03-13T21:25:45.7751409Z main() 2023-03-13T21:25:45.7751794Z File "/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/examples/pytorch/translation/run_translation.py", line 581, in main 2023-03-13T21:25:45.7751970Z train_result = trainer.train(resume_from_checkpoint=checkpoint) 2023-03-13T21:25:45.7752299Z File "/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/src/transformers/trainer.py", line 1631, in train 2023-03-13T21:25:45.7752421Z return inner_training_loop( 2023-03-13T21:25:45.7752790Z File "/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/src/transformers/trainer.py", line 1814, in _inner_training_loop 2023-03-13T21:25:45.7752913Z model.zero_grad(set_to_none=True) 2023-03-13T21:25:45.7753166Z TypeError: zero_grad() got an unexpected keyword argument 'set_to_none' ```<|||||>They are using torch==1.8.2 and it fails, which means that this change will break for any user with older pytorch versions - let me find out when `set_to_none` was added. Using this as a test: ``` python -c 'import torch; m=torch.nn.Linear(1,1); m.zero_grad(set_to_none=True)' ```
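One possible way to guard the call on older PyTorch versions whose `Module.zero_grad` does not accept the kwarg, shown here as an illustrative sketch rather than the exact fix that was merged:
```python
import inspect
import torch

model = torch.nn.Linear(1, 1)

# only pass the kwarg if this torch version's Module.zero_grad accepts it
if "set_to_none" in inspect.signature(model.zero_grad).parameters:
    model.zero_grad(set_to_none=True)
else:
    model.zero_grad()
```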
transformers
22,135
closed
Prepare daily CI for torch 2.0.0
# What does this PR do? ~~⚠️⚠️ Don't merge before I run it (again) ⚠️⚠️~~ This PR changes docker files / workflow files to use the upcoming torch `2.0.0`.
03-13-2023 14:22:31
03-13-2023 14:22:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,133
closed
[`Whisper`] add `get_input_embeddings` to `WhisperForAudioClassification`
# What does this PR do? Fixes: https://github.com/huggingface/peft/issues/173 Fixes: https://github.com/huggingface/transformers/issues/22131 This PR adds a `get_input_embeddings` method to `WhisperForAudioClassification` and `WhisperForConditionalGeneration` to avoid issues such as the one reported [here](https://github.com/huggingface/peft/issues/173). In my understanding, the `get_input_embeddings` method should return the first module that converts the input to the first hidden states, and not necessarily an `nn.Embedding` layer. cc @ArthurZucker @sgugger
03-13-2023 13:32:44
03-13-2023 13:32:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes I can confirm the script provided by the user: ```python from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer from transformers import WhisperProcessor from transformers import WhisperForAudioClassification # Select CUDA device index import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name_or_path = "openai/whisper-small" feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path) tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path) processor = WhisperProcessor.from_pretrained(model_name_or_path) model = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto") from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model) ``` work perfect now with this fix!<|||||>I am a bit surprised by the fix, as it is not an embedding layer, this kind of break the usages we have for `get_input_embeddings`, which are for example `_resize_token_embeddings` and `tie_weights` which are both incompatible with the Whisper encoder as it does not have an embedding layer. So not really sure this is the way to go. Also, the `test_model_common_attributes` has to be updated ```python # WhisperEncoder has no inputs_embeds and thus the `get_input_embeddings` fn is not implemented def test_model_common_attributes(self): pass ``` as well as : ```python # input embeds is meaningless for an encoder-only acoustic model def test_inputs_embeds(self): pass ``` in whisper tests <|||||>Even if it's not text, the layer converts an input to a hidden size, so it's kind of like an embedding. Not sure if there is a better name for this function but it doesn't shock me to use this for Whisper, esp since peft needs a common API across models. As for the potential problems in `_resize_token_embeddings` and `tie_weights` I would wait for a user to actually raise an issue on this before giving them more thought. We can always override and add specific error messages. <|||||>@ArthurZucker Added the common tests and there was an issue when running whisper without `decoder_input_ids` and `decoder_embeds` instead that I fixed. Can you please double check ? Thanks
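A tiny usage sketch of the new accessor; the checkpoint is just an example:
```python
from transformers import WhisperForAudioClassification

model = WhisperForAudioClassification.from_pretrained("openai/whisper-small")
# per the PR description, this returns the first module that maps the input
# features to hidden states rather than an nn.Embedding layer
print(model.get_input_embeddings())
```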
transformers
22,132
closed
Zero-shot image classification task guide
This PR adds the inference task guide for zero-shot image classification. It adds examples of inference with a pipeline and manual inference.
03-13-2023 13:23:40
03-13-2023 13:23:40
Related PR with images https://huggingface.co/datasets/huggingface/documentation-images/discussions/57<|||||>_The documentation is not available anymore as the PR was closed or merged._
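A minimal pipeline sketch for this task; the CLIP checkpoint, image URL and candidate labels are placeholders:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
predictions = classifier(
    "http://images.cocodataset.org/val2017/000000039769.jpg",  # example image URL
    candidate_labels=["a photo of a cat", "a photo of a dog", "a photo of a car"],
)
print(predictions)
```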
transformers
22,131
closed
WhisperForAudioClassification can't be prepared for int8 training
### System Info System: Google Colab Pro latest version of transformers installed through: !pip install -q git+https://github.com/huggingface/transformers.git@main ### Who can help? The code that I used is: ``` from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer from transformers import WhisperProcessor from transformers import WhisperForAudioClassification # Select CUDA device index import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name_or_path = "openai/whisper-small" feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path) tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path) processor = WhisperProcessor.from_pretrained(model_name_or_path) model = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto" , num_labels=2, label2id=label2id, id2label=id2label) from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model) ``` and the error I got is : ![image](https://user-images.githubusercontent.com/68870951/224712923-41d626a7-e326-4db4-aca4-9fda26ee0992.png) @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Open colab 2. Install latest version of transformers and peft ``` !pip install -q git+https://github.com/huggingface/transformers.git@main !pip install git+https://github.com/huggingface/peft.git@main ``` 3. use the code ``` from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer from transformers import WhisperProcessor from transformers import WhisperForAudioClassification # Select CUDA device index import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name_or_path = "openai/whisper-small" feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path) tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path) processor = WhisperProcessor.from_pretrained(model_name_or_path) model = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto" , num_labels=2, label2id=label2id, id2label=id2label) from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model) ``` ### Expected behavior Expected to get a model which is prepared for int 8 training
03-13-2023 13:20:08
03-13-2023 13:20:08
cc @younesbelkada @pacman100 <|||||>https://github.com/huggingface/transformers/pull/22133 should fix the issue
transformers
22,130
closed
Fix gradient checkpointing bug in LongT5
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 13:19:53
03-13-2023 13:19:53
No problem! Yeah I added it because I thought it was incorrectly omitted from LongT5Stack. Should it be removed?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I think your changes are fine and it was something we forgot to add on our side at the first place !
transformers
22,129
closed
Fix gradient checkpointing bug in xmod
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 13:07:04
03-13-2023 13:07:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,128
closed
Fix gradient checkpointing bug in xlm_roberta_xl
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 13:04:53
03-13-2023 13:04:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,127
closed
Fix gradient checkpointing bug in xglm
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 12:53:46
03-13-2023 12:53:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,126
closed
Fix gradient checkpointing bug in trocr
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 12:51:36
03-13-2023 12:51:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,125
closed
Fix gradient checkpointing bug in Trajectory Transformer
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-13-2023 12:49:25
03-13-2023 12:49:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,124
closed
[`Blip2`] skip accelerate test
# What does this PR do? This PR skips a test that is currently failing: https://github.com/huggingface/transformers/actions/runs/4395315343/jobs/7697061404 The tiny BLIP2 model uses T5 as a text decoder, which itself has a parameter named `shared` that is tied with 3 other parameters of the model: `encoder.embed_tokens`, `decoder.embed_tokens`, `lm_head`. In some very specific use cases (small model + small `max_memory`), the test hits some corner cases as `accelerate` does not support handling multiple tied weights yet: https://github.com/huggingface/accelerate/blob/37831808444e089a182f66713935d27c39a0cf2c/src/accelerate/utils/modeling.py#L232 & https://github.com/huggingface/accelerate/blob/37831808444e089a182f66713935d27c39a0cf2c/src/accelerate/utils/modeling.py#L566 Note that a similar test is also currently being skipped for T5: https://github.com/huggingface/transformers/blob/102b5ff4a813eea848bb82ff2f451e0f6b17b30c/tests/models/t5/test_modeling_t5.py#L689 As this use case is a corner case and less likely to happen (most BLIP2 models are large), and fixing this would require a lot of work on `accelerate`, let's skip this test as we did for T5. cc @sgugger @ydshieh
03-13-2023 12:04:10
03-13-2023 12:04:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>I am trained by @sgugger 's words and my prediction of the response to this PR is Yes ``` the test doesn't make sense for tiny models and triggers some undefined behaviors. Probably better to just skip it at this stage. ``` and ``` We should have slow integration tests instead (on a regular-size model) ``` (but I am not sure if it's required to do this in the same PR, or just something we should change for all such tests in another PR)
transformers
22,123
closed
CUDA OOM when loading optimizer state dict
### System Info I am finetuning GPT NEO 1.3B with 1 GPU and 24GB VRAM. From scratch, model weights load and train fine with <1GB of memory to spare. When loading from checkpoint, I run into issues loading optimizer states. https://github.com/huggingface/transformers/blob/04bfac83b793b757e7b33188f88eebe21ac65ef7/src/transformers/trainer.py#L2434-L2436 If I hard-code `map_location='cpu'`, the OOM goes away and I am able to resume training. After some reading, this was caused by the optimizer state dict being associated with 'gpu:0' at the time of checkpoint saving. I also found another ticket that added the `map_location=self.args.device` logic in the first place because the PR author hit a CPU OOM. I was wondering if the interface needs to be changed in order to accommodate both use cases. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See above. Otherwise, use run_clm.py from the repo without deepspeed. ### Expected behavior Resuming from a checkpoint should fit in the same VRAM footprint as finetuning from scratch.
03-13-2023 10:09:04
03-13-2023 10:09:04
cc @sgugger <|||||>I tried to find the reference to the issue mentioning a CPU OOM but couldn't find it. Do you have a link handy? We could always load the weights on the CPU then move them, which would be a tiny bit slower, but this just happens once. With models getting bigger and bigger it probably makes sense to have this as a default behavior though.<|||||>Sorry, I lost track of the issue with the CPU OOM but it was related to GPU memory in aggregate (deepspeed iirc) being greater than CPU memory. https://github.com/huggingface/transformers/issues/3730#issuecomment-629563466 is also relevant. My understanding is that `self.args.device` is where the model lives, which is usually `gpu` or `tpu`. This puts optimizer state_dict loading onto a device with limited memory as well. Is there any reason not to always put this on the CPU? My understanding is that if the model is trained using a GPU, the states will be implicitly copied over during optimization, which avoids the setup OOM.
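A minimal sketch of the workaround described in this issue, loading the optimizer state on CPU first; the checkpoint path is illustrative and the toy model stands in for the real training setup:
```python
import torch

model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.AdamW(model.parameters())

# load the saved state on CPU first; load_state_dict then moves state tensors
# to the devices of the corresponding parameters
state = torch.load("output/checkpoint-500/optimizer.pt", map_location="cpu")
optimizer.load_state_dict(state)
```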
transformers
22,122
closed
device_map='auto' doesn't use MPS backend on Apple M2
With the following program: ``` import os import time import readline import textwrap os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" os.environ["HF_ENDPOINT"] = "https://huggingface.co" os.environ["ACCELERATE_USE_MPS_DEVICE"] = "True" import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig from accelerate import init_empty_weights, load_checkpoint_and_dispatch, Accelerator def main(): print('Pytorch version', torch.__version__) if torch.backends.mps.is_available(): active_device = torch.device('mps') elif torch.cuda.is_available(): active_device = torch.device('cuda', 0) else: active_device = torch.device('cpu') accelerator = Accelerator() print('Accelerator device: ', accelerator.device) checkpoint = "bigscience/bloom" tm_start = time.time() tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained( checkpoint, device_map="auto", offload_folder="offload", offload_state_dict=True, ) tm_end = time.time() print(f'Loaded in {tm_end - tm_start} seconds.') while True: prompt = input('Request to LLM: ') tm_start = time.time() inputs = tokenizer.encode(prompt, return_tensors="pt").to(active_device) tm_end = time.time() print(f'Encoded in {tm_end - tm_start} seconds.') tm_start = time.time() outputs = model.generate( inputs, max_new_tokens=2048, pad_token_id=tokenizer.eos_token_id, repetition_penalty=1.2) tm_end = time.time() print(f'Generated in {tm_end - tm_start} seconds.') tm_start = time.time() response = tokenizer.decode(outputs[0]) tm_end = time.time() print(f'Decoded in {tm_end - tm_start} seconds.') print("\n".join(textwrap.wrap(response, width=120))) if __name__ == '__main__': main() ``` the cpu backend is used by transformers/accelerate, even though it prints `Accelerator device: mps`. I know this because it's slow (below NVMe bandwidth) and the following is printed: ``` /Users/serge/PycharmProjects/macLLM/venv/lib/python3.9/site-packages/transformers/generation/utils.py:1359: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on mps, whereas the model is on cpu. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cpu') before running `.generate()`. warnings.warn( ``` Environment: transformers v4.26.1 accelerate v0.17.0 PyTorch v1.13.1 MacOS 13.2.1 (22D68) Python 3.9.6
03-13-2023 07:58:31
03-13-2023 07:58:31
MPS devices are indeed not supported with `device_map="auto"` yet. As a workaround you should just move your model to that device manually.<|||||>> MPS devices are indeed not supported with `device_map="auto"` yet. As a workaround you should just move your model to that device manually. How to move the model to that device manually? Will I lose CPU and disk offload in that case?<|||||>Yes, CPU and disk offload are not supported with the MPS device either for now. To move your model to the MPS device, you just do `model = model.to("mps")`<|||||>Manually moving a model to MPS does not seem to work. Below is a minimal example: ``` Python 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ] Type 'copyright', 'credits' or 'license' for more information IPython 8.11.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: from transformers import T5ForConditionalGeneration, AutoTokenizer In [2]: tokenizer = AutoTokenizer.from_pretrained('t5-small', model_max_length=512) ...: model = T5ForConditionalGeneration.from_pretrained('t5-small', device_map='auto') In [3]: model.device Out[3]: device(type='cpu') In [4]: input_string = 'translate English to German: The house is wonderful."' ...: inputs = tokenizer(input_string, return_tensors='pt').input_ids ...: outputs = model.generate(inputs, max_length=200) ...: print(tokenizer.decode(outputs[0])) <pad> Das Haus ist wunderbar."</s> In [5]: model = model.to('mps') In [6]: model.device Out[6]: device(type='mps', index=0) In [7]: inputs = inputs.to('mps') ...: outputs = model.generate(inputs, max_length=200) ...: print(tokenizer.decode(outputs[0])) RuntimeError: Placeholder storage has not been allocated on MPS device! ``` Transformers version: 4.27.1 Accelerate version: 0.17.1 Torch version: 2.0.0 MacOS 13.2.1 (22D68)<|||||>Yes you need to load it without the `device_map="auto"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am on M2 MAX CHIP MACOS that has 12 CPU, 38 GPU. I am having issue with ever modification of this code snippet. Would you please tell me how I can correct it? from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct", trust_remote_code=True) model = model.to('mps') tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, # device = torch.device('mps'), # device_map="auto", )<|||||>> Hi, I am on M2 MAX CHIP MACOS that has 12 CPU, 38 GPU. I am having issue with ever modification of this code snippet. Would you please tell me how I can correct it? 
> > from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch > > model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct", trust_remote_code=True) model = model.to('mps') > > tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, # device = torch.device('mps'), # device_map="auto", ) I am also hitting this problem.<|||||>Any solution yet?<|||||>Should the issue at least stay open as a feature request? This would be very nice to have. <|||||>This is solved in the latest version of Accelerate (cc @SunMarc )<|||||>> This is solved in the latest version of Accelerate (cc @SunMarc ) @sgugger Is this fix included in the latest https://github.com/huggingface/transformers/releases/tag/v4.30.2 release?<|||||>It's in Accelerate, not Transformers. It will be in the version of Accelerate released today.
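For reference, a minimal sketch of the workaround described above — load without `device_map="auto"`, then move the model and the inputs to MPS manually. It assumes an Apple-silicon machine with the MPS backend available; `t5-small` is just an illustrative checkpoint small enough to fit on the device.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

assert torch.backends.mps.is_available()

tokenizer = AutoTokenizer.from_pretrained("t5-small")
# no device_map="auto": CPU/disk offload is not used, so the whole model must fit on the device
model = T5ForConditionalGeneration.from_pretrained("t5-small").to("mps")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").to("mps")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```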
transformers
22,121
closed
Can not init BertTokenizerFast
### System Info Linux python3.7 tokenizers 0.12.1 transformers 4.26.1 @ArthurZucker ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` >>> from transformers import BertTokenizerFast >>> BertTokenizerFast() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/anaconda3//python3.7/site-packages/transformers/utils/dummy_tokenizers_objects.py", line 31, in __init__ requires_backends(self, ["tokenizers"]) File "/home/anaconda3/python3.7/site-packages/transformers/utils/import_utils.py", line 935, in requires_backends raise ImportError("".join(failed)) ImportError: BertTokenizerFast requires the 🤗 Tokenizers library but it was not found in your environment. You can install it with: pip install tokenizers In a notebook or a colab, you can install it by executing a cell with !pip install tokenizers ``` ### Expected behavior work normally.
03-13-2023 07:32:29
03-13-2023 07:32:29
Not sure I understand, to use `TokenizerFast` you need the tokenizers library. Either use: ```python >>> from transformers import BertTokenizer >>> BertTokenizer ``` Or run `pip install tokenizers`. This is not an issue but expected.<|||||>What is the tokenizers library? And how do I install it? @ArthurZucker I have run `pip install tokenizers` to install tokenizers<|||||>The `tokenizers` library is available [here](https://github.com/huggingface/tokenizers), it implements the backend of fast tokenizers in Rust. If it is installed you should be able to import without any issues! Make sure it was installed in the environment you are using. <|||||>@ArthurZucker ``` [root@localhost home]# pip list |grep tokenizers tokenizers 0.13.2 ``` ``` [root@localhost home]# python -c "import tokenizers" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'tokenizers' ``` Why @ArthurZucker <|||||>Sorry, it looks like it was a problem with my conda env
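Since the root cause here turned out to be an environment mismatch, a quick sanity check is to confirm which interpreter is running and that `tokenizers` is importable from it before loading the fast tokenizer. The checkpoint below is only illustrative.

```python
import sys
print(sys.executable)  # should point at the same environment where `pip install tokenizers` ran

import tokenizers      # raises ModuleNotFoundError if the active environment is the wrong one
print(tokenizers.__version__)

from transformers import BertTokenizerFast
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
print(tok.is_fast)     # True when the Rust backend is actually in use
```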
transformers
22,120
closed
`No such file or directory` when setting `cache_dir`
### System Info - `transformers` version: 4.26.1 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Note that `cache_model` directory exists**: ``` from transformers import CLIPProcessor, CLIPModel processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", cache_dir="cache_model") inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=None, return_tensors="np", padding=True) print(inputs) ``` ### Expected behavior in recent versions of `transformers`, the code snippet above produces the following error: ``` FileNotFoundError: [Errno 2] No such file or directory: 'cache_model/models--openai--clip-vit-base-patch32/snapshots/e6a30b603a447e251fdaca1c3056b2a16cdfebeb/preprocessor_config.json' ``` whereas with `transformers==4.20.1`, the snippet successfully runs
03-13-2023 05:04:13
03-13-2023 05:04:13
I just tried your code sample and it runs without issue on both main and 4.26.1. Are you sure `"cache_model"` is in a folder you have write access to?<|||||>Hi @sgugger, you are correct, I just made a fresh virtualenv and my snippet indeed does work. However, it doesn't work in my dev environment, nor in my project's CI, so I suspect there's an issue with a dependency somewhere? I dug into what is happening inside `cache_model` in both environments, and found a clue: In my new virtualenv: ``` lrwxr-xr-x 1 richard staff 136B Mar 14 14:38 preprocessor_config.json -> /Users/richard/Projects/huggingface_bug/cache_model/models--openai--clip-vit-base-patch32/blobs/5a12a1eb250987a4eee0e3e7d7338c4b22724be1 ``` in my dev environment: ``` lrwxr-xr-x 1 richard staff 96B Mar 14 15:05 preprocessor_config.json -> cache_model/models--openai--clip-vit-base-patch32/blobs/5a12a1eb250987a4eee0e3e7d7338c4b22724be1 ``` you can see that the former creates a symlink to an absolute path, whereas the latter uses a relative path, which may explain why it cannot find the actual file. Do you have any ideas what could be causing this behaviour?<|||||>Might be a different version of `huggingface_hub`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
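Until the symlink difference is tracked down (the thread suspects a `huggingface_hub` version difference, so upgrading that package is the other thing to try), a hedged workaround — assuming the relative-path symlink really is the culprit — is to always pass an absolute cache directory:

```python
import os
from transformers import CLIPProcessor

# resolving the path up front sidesteps ambiguity about where a relative symlink resolves from
cache_dir = os.path.abspath("cache_model")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", cache_dir=cache_dir)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=None,
                   return_tensors="np", padding=True)
print(inputs.keys())
```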
transformers
22,119
closed
Trainer removes columns before transform is called
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a text dataset and am attempting to apply a transform to tokenize the contents. I'm using: [with_transform()](https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.with_transform) for this and it works fine: the transform removes the `text` column and adds the `input_ids` and `attention_mask` columns. The problem is when combining this with the `Trainer`, it runs `_remove_unused_columns()` _before_ calling the transform, which has the effect of removing the whole dataset, and I get an error as it tries to read the first batch: ``` IndexError: Invalid key: 664 is out of bounds for size 0 ``` ### Expected behavior I should be able to combine `Dataset.with_transform()` and `Trainer`.
03-13-2023 00:44:52
03-13-2023 00:44:52
Ah, I've re-read through the parameter lists for everything and found `remove_unused_columns=False` in `TrainingArguments`. Setting this resolves the issue, so I guess this won't be considered a bug. I think there's room for improvement in the UX though, perhaps a warning "After removing unused columns, there were no columns left, this is probably not what you meant to do, right?" Like `if set(dataset.column_names) == set(ignored_columns)`... <|||||>We could add such a warning yes. Do you want to take a stab at a PR?<|||||>Sorry, I've got a full plate at the moment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
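A minimal sketch of the resolution above — setting `remove_unused_columns=False` so the raw `text` column survives until the on-the-fly transform runs. The dataset and checkpoint here are purely illustrative.

```python
from datasets import Dataset
from transformers import AutoTokenizer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = Dataset.from_dict({"text": ["hello world", "goodbye world"], "label": [0, 1]})

def tokenize(batch):
    out = tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)
    out["labels"] = batch["label"]
    return out

train_ds = raw.with_transform(tokenize)

args = TrainingArguments(
    output_dir="out",
    # keep the raw columns: the Trainer would otherwise drop "text" before the transform sees it
    remove_unused_columns=False,
)
# Trainer(model=..., args=args, train_dataset=train_ds) then trains as usual
```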
transformers
22,118
closed
ImportError: cannot import name 'AlignModel' from 'transformers'
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AlignModel, AlignProcessor ### Expected behavior The model should be imported without an ImportError
03-12-2023 20:35:32
03-12-2023 20:35:32
The model is not in Transformers 4.26.1, you need to [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) to get access to it.<|||||>Thanks!
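A hedged sketch of the fix: after installing from source (`pip install git+https://github.com/huggingface/transformers`), the import resolves. The checkpoint name below is the one ALIGN is believed to have been released under (worth verifying on the Hub):

```python
from transformers import AlignModel, AlignProcessor

model = AlignModel.from_pretrained("kakaobrain/align-base")
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
```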
transformers
22,117
closed
wav2vec2 with lm on persian doesn't seem to work
### System Info transformers version : 4.26.1 Platform: colab Python version: 3.9.16 PyTorch version (GPU?): 1.13.1+cu116 (False) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: No (NA) Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have built a wav2vec2-with-LM model following this blog post [https://huggingface.co/blog/wav2vec2-with-ngram](https://huggingface.co/blog/wav2vec2-with-ngram). After building it I tested it on my own data and compared it with wav2vec2 without the LM, but I can't see any improvement in the transcription. result with lm : ``` 'سالاموروزوخیر خدمتو دوستان عزیز امروز می خوایم با موادی که به صورت ' 'رایعج در درمانهای دندانپزشکی استفاده میشه آشنا بشیم خب اولین موادها ' 'اسید اچ ها هستن استیج ها در ترم های با کامپوزیت استفاده میشن و به ' 'صورت روتین معمولا اسید فسفری که سی و هفت درصد رو استفاده می کنند' ``` result without lm: ``` 'سالام موروز وخیر خدمتو دوستان ازیز امروز می خوایم با موادی که به صورت رایعج ' 'در درمان های دندانپزشکی استفاده میشه آشنا بشیم خب اولین مواد ها قسید اج ها ' 'استن اسیلش ها در ترمی هایی با کامپوزیت استفاده میشن و به صورت روتین معمولا ' 'اسید فسفری که سی و هفت درصد رو استفاده می کنن' ``` I also tried to apply the fixes mentioned in this [issue](https://github.com/huggingface/transformers/issues/15392) but that also doesn't fix the problem. Notebook to reproduce the model: [https://colab.research.google.com/drive/1Y_ESjlLd3cbhpmSvpLuR-rWiQTXdd3kq?usp=share_link](https://colab.research.google.com/drive/1Y_ESjlLd3cbhpmSvpLuR-rWiQTXdd3kq?usp=share_link ) I also wanted to push the processor to the Hub but I got some errors, so I uploaded the files to Google Drive: [https://drive.google.com/drive/folders/1SyEIsd1ZQPBPdDrD97XhdDJs3xhMFNrJ?usp=share_link](https://drive.google.com/drive/folders/1SyEIsd1ZQPBPdDrD97XhdDJs3xhMFNrJ?usp=share_link) ### Expected behavior at least i expect that lm would fix unigram errors
03-12-2023 16:34:08
03-12-2023 16:34:08
cc @sanchit-gandhi <|||||>any help?! @sanchit-gandhi <|||||>Hey @tntchack, Note the adding a LM does not always improve model performance it really depends on what data you train your LM-ngram on. While the LM-ngram will not make it impossible for the model to generate spelling errors / unigram errors it should greatly improve spelling errors. If it does not it is a sign that your LM has not been trained very well. A couple of tips from my side: - 1.) Could you try to also contact the Arabic forum to see if someone can help there? https://discuss.huggingface.co/t/arabic-nlp-introductions/3715 - 2.) We have some strong Wav2Vec2-LMs in Arabic on the Hub, e.g. here: https://huggingface.co/kingabzpro/wav2vec2-large-xlsr-300-arabic . Could you try evaluating your model with the LM from this repo instead and see if it improves your perf? You can just load the processor from this repo and the model from your repo<|||||>Hi @patrickvonplaten thanks for your response. I will have a look on those links and will try Arabic LM with my model<|||||>Hey @patrickvonplaten, quick questions regarding N-gram: 1. Does N-gram language model for wav2vec2 works upto order of N or sticks to fixed order of N? 2. After the language model is created using the `kenlm` package there's a `unigrams.txt` file inside the directory. Does that files is used by the N-gram model to find appropriate weights while doing the beam search using ctc-decoder?<|||||>$n$-grams typically stay fixed to order $n$ (see https://web.stanford.edu/~jurafsky/slp3/3.pdf). The $n$-gram language model implicitly uses the [pyctcdecode package](https://github.com/kensho-technologies/pyctcdecode) under the hood - the `unigrams.txt` file is used to hold unigram info (see [pyctcdecode/language_model.py#L362-L369](https://github.com/kensho-technologies/pyctcdecode/blob/afecb67622c1395b85b6a55d2902a780c0c927d4/pyctcdecode/language_model.py#L362-L369)).
transformers
22,116
closed
Add pr_checks.mdx Italian translation (#17459)
## What does this PR do? Italian translation of doc related to the checks on a PR of :hugs: Transformers. * updated _toctree.yml * added pr_checks.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @omarespejel @sgugger
03-12-2023 16:15:46
03-12-2023 16:15:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>@omarespejel @sgugger for me is ok. Thanks @alexcalabrese
transformers
22,115
closed
Added big_models.mdx italian translation #17600
## What does this PR do? Italian translation of doc related to the preprocessing of :hugs: Transformers. * updated _toctree.yml * added big_models.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @omarespejel @sgugger
03-12-2023 15:39:46
03-12-2023 15:39:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,114
closed
FastTokenizer for LLaMa
### Feature request FastTokenizer support for LLaMa sentencepiece tokenizer. ### Motivation The offset_mapping is only available in FastTokenizer; it would be useful if there were support for this. ### Your contribution I have tried using an existing sentencepiece-based model as a replacement. However, the HF conversion code means we are missing byte fallback support: ``` The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers ``` This means out-of-vocabulary tokens are simply mapped to <unk> instead of using the byte mapping inside the vocab.
03-12-2023 15:14:09
03-12-2023 15:14:09
Let's maybe wait for the LLaMa PR to be merged first?<|||||>it is fixed in tokenizers: https://github.com/huggingface/tokenizers/pull/1183<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
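To illustrate the motivation (offset mappings are only exposed by the Rust-backed fast tokenizers), here is a small sketch using an unrelated fast tokenizer, since the LLaMa one was not yet available at the time of this thread; the checkpoint is illustrative only.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
enc = tok("FastTokenizer support for LLaMa", return_offsets_mapping=True)

# each token comes with its (start, end) character offsets into the original string
print(list(zip(enc.tokens(), enc["offset_mapping"])))
```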
transformers
22,113
closed
Make GPT2ForSequenceClassification computationally efficient
### Feature request Compute the logits using only the hidden state for the last position of the input sequence. ### Motivation The current GPT2ForSequenceClassification module computes logits using all hidden states, so its computational cost is proportional to the length of the input sequence. https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/models/gpt2/modeling_gpt2.py#L1384 If we compute the logits using only the hidden state for the last position of the input sequence, the cost is not proportional to the length. This applies not only to GPT2ForSequenceClassification, but also to other XXXForSequenceClassification models. ### Your contribution The following are the code changes that I suggest: https://github.com/Kyeongpil/transformers/commit/f97f6e38f444522a55f236b37ca70b4e35096e12
03-12-2023 13:16:33
03-12-2023 13:16:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
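A self-contained sketch of the proposed change (all names and shapes are illustrative): selecting the last non-pad hidden state before the classification head gives the same pooled logits while projecting one position per sequence instead of all of them.

```python
import torch
import torch.nn as nn

batch, seq_len, hidden, num_labels = 2, 128, 768, 3
hidden_states = torch.randn(batch, seq_len, hidden)
score = nn.Linear(hidden, num_labels, bias=False)
last_token_idx = torch.tensor([127, 63])  # index of the last non-pad token per sequence

# current behaviour: project every position, then pick the last one
pooled_current = score(hidden_states)[torch.arange(batch), last_token_idx]

# proposed behaviour: pick the last position first, then project
pooled_proposed = score(hidden_states[torch.arange(batch), last_token_idx])

assert torch.allclose(pooled_current, pooled_proposed, atol=1e-5)
```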
transformers
22,112
closed
Time spent on engine.step() increased strangely
### System Info I'm using Deepspeed's zero3 with optimizer offload. Time spent on step() increased from ~100ms to 10,000+ ms after a few steps. The CPU memory in occupied ~350G (500G in total). - `transformers` version: 4.26.1 - Platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: True ### Who can help? @sgugger @stas ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. My code ```python from transformers.deepspeed import HfDeepSpeedConfig from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer from transformers.models.codegen.modeling_codegen import CodeGenMLP import argparse import torch import time, datetime import deepspeed from deepspeed.accelerator import get_accelerator from torch.utils.data import Dataset from transformers.activations import ClippedGELUActivation, LinearActivation from lion_pytorch import Lion from datasets import load_dataset import os, sys from transformers import Trainer, TrainingArguments, HfArgumentParser from transformers.integrations import WandbCallback class MyDataset(Dataset): def __init__(self, data, tknz): super().__init__() self.data = data self.tknz = tknz def __len__(self): return len(self.data) def __getitem__(self, idx): tknz_text = self.tknz( self.data[idx]['text'], max_length=args.seq_len, padding='max_length', truncation=True, ) return { 'input_ids': tknz_text['input_ids'], 'attention_mask': tknz_text['attention_mask'], 'labels': tknz_text['input_ids'] } def collate_fn(batch, tknz): tknz_batch = tknz.pad( batch, padding=True, max_length=args.seq_len, pad_to_multiple_of=8, return_tensors='pt' ) return { 'input_ids': tknz_batch['input_ids'], 'attention_mask': tknz_batch['attention_mask'], 'labels': tknz_batch['input_ids'] } def train(): print(f"[{datetime.datetime.today()}] Loading model.") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono", use_cache=False) tknz = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") tknz.pad_token = tknz.eos_token print(f"[{datetime.datetime.today()}] Loading dataset.") dataset = load_dataset("NeelNanda/pile-10k")['train'].select(range(args.data_size)) dataset = MyDataset(dataset, tknz) print(f"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.") trainer = Trainer( model=model, args=training_args[0], data_collator=lambda batch: collate_fn(batch, tknz), train_dataset=dataset, tokenizer=tknz, callbacks=[WandbCallback()], ) print(f"[{datetime.datetime.today()}] Entering training loop.") trainer.train() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=-1) parser.add_argument('--project', type=str, default="my_project") parser.add_argument('--name', type=str, default="my_exps") parser.add_argument('--data_size', type=int, default=100) parser.add_argument('--seq_len', type=int, default=300) parser.add_argument("--training_args_file", type=str, default="config/training_args.yml") args = parser.parse_args() training_args = 
HfArgumentParser(TrainingArguments).parse_yaml_file(args.training_args_file) train() ``` 2. My script to run the Python file ```bash port=$(shuf -i25000-30000 -n1) WANDB_MODE=disabled \ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ deepspeed --master_port "$port" train_ds_zero3.py \ --seq_len 100 ``` 3. My config files - training_args.yml ```yaml output_dir: ./output do_train: true per_device_train_batch_size: 1 gradient_accumulation_steps: 1 num_train_epochs: 3 log_level: info fp16: true gradient_checkpointing: true remove_unused_columns: false #deepspeed: ./config/ds_zero3.json report_to: wandb run_name: ds_zero3_opt_offload_0311 deepspeed: config/ds_zero3_opt_offload.json ``` - ds_zero3_opt_offload.json ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": true } ``` 4. Time spent on step <img width="1440" alt="image" src="https://user-images.githubusercontent.com/39761308/224525584-f91586c5-4e04-4601-bdd6-569d35405aa0.png"> ### Expected behavior The CPU memory is occupied ~350G and I have 500G in total, so the occupation is not that high. I'm confused why the step() get so slow after that certain step. I hope the step() will be as quick as the first few steps (<100ms). Thank you for your kindly help.
03-12-2023 05:13:46
03-12-2023 05:13:46
cc @stas00 <|||||>- The first few steps lead to an OVERFLOW so optimizer didn't run and thus was fast. it then adjusted the scaling factor each step until it reached one that didn't lead to an overflow and thus it did the first optimizer step. - then you can see from the warning that your setup is misconfigured - you're trying to load too much into your GPU memory and all the optimizations are disabled since there is no gpu memory and it has to do a lot more work to be optimal. As you're already at bs=1 and `gradient_checkpointing=true`, the next thing to do is to either add more gpus or use gpus with more memory (I have no idea which gpus you're using) or enable `offload_param` (but not sure if you have enough cpu memory remain for offloading params): You can follow the guidelines here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#how-to-choose-which-zero-stage-and-offloads-to-use-for-best-performance but most likely the model you picked is too large for the hardware setup you have chosen.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,111
closed
error when using from indobenchmark
### System Info ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobart-v2") ``` the error `ValueError: Tokenizer class IndoNLGTokenizer does not exist or is not currently imported.` any help guys? ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobart-v2") ``` ### Expected behavior Can used
03-12-2023 05:03:41
03-12-2023 05:03:41
Hey! The `tokenizer_class` that was set in the configuration.json is wrong as the `IndoNLGTokenizer` does not exist in transformers. You should try to ask the author of the model on the community tab how to use it, or try to use: ```python from transformers import MBartTokenizer tokenizer = MBartTokenizer.from_pretrained("indobenchmark/indobart-v2") ``` as it appears that the model is an MBartModel.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,110
closed
Overestimated number of training epochs in Trainer
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Under certain circumstances, given `max_steps` and dataloader size non-divisible by `gradient_accumulation_steps`, the number of epochs printed during model training can be overestimated, even if `dataloader_drop_last` is set to False. On an example run with the following inputs, Trainer calculated 100 training epochs instead of 87. ```bash python run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 1 \ --do_train \ --output_dir /tmp/test-clm \ --max_train_samples=148 \ --gradient_accumulation_steps=32 \ --overwrite_output_dir \ --max_steps=200 \ --logging_steps=10 \ --dataloader_drop_last=False ``` ``` [INFO|trainer.py:1650] 2023-03-11 15:21:27,133 >> ***** Running training ***** [INFO|trainer.py:1651] 2023-03-11 15:21:27,133 >> Num examples = 148 [INFO|trainer.py:1652] 2023-03-11 15:21:27,133 >> Num Epochs = 100 [INFO|trainer.py:1653] 2023-03-11 15:21:27,133 >> Instantaneous batch size per device = 2 [INFO|trainer.py:1654] 2023-03-11 15:21:27,133 >> Total train batch size (w. parallel, distributed & accumulation) = 64 [INFO|trainer.py:1655] 2023-03-11 15:21:27,133 >> Gradient Accumulation steps = 32 [INFO|trainer.py:1656] 2023-03-11 15:21:27,133 >> Total optimization steps = 200 [INFO|trainer.py:1657] 2023-03-11 15:21:27,133 >> Number of trainable parameters = 124439808 ``` I believe this happens due to the computation [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1654) and consequently [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1659). ### Expected behavior Expected the estimated number of epochs to be closer to the actual number of epochs. Perhaps in that case `num_train_epochs` can be computed as: ``` update_steps_per_epoch = len_dataloader / args.gradient_accumulation_steps num_train_epochs = math.ceil(args.max_steps / update_steps_per_epoch) ``` Thank you in advance!
03-11-2023 17:59:29
03-11-2023 17:59:29
Thanks for the report! Would you like to suggest a PR with your fix?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
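Plugging the numbers from the report into both formulas shows where the 100 vs. 87 gap comes from (a sketch, assuming a single GPU so the dataloader has ceil(148 / 2) = 74 batches):

```python
import math

len_dataloader = math.ceil(148 / 2)   # 74 batches of size 2
grad_accum = 32
max_steps = 200

# current Trainer logic: integer division drops the partial accumulation step
steps_per_epoch_floor = max(len_dataloader // grad_accum, 1)        # 2
print(math.ceil(max_steps / steps_per_epoch_floor))                 # -> 100

# proposed fix: keep the fractional number of update steps per epoch
steps_per_epoch_exact = len_dataloader / grad_accum                 # 2.3125
print(math.ceil(max_steps / steps_per_epoch_exact))                 # -> 87
```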
transformers
22,109
open
Add tensor flow whisper model for audio classification
# What does this PR do? Adding support for audio classification within TensorFlow whisper model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21777 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-11-2023 17:02:47
03-11-2023 17:02:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22109). All of your documentation changes will be reflected on that endpoint.<|||||>I just had a few questions on how to proceed with adding the TensorFlow Whisper model, just to make sure I'm on the right track. (1) Just so that I am clear on what the task is asking for, I need to recreate what is being done in PR #21754, except in TensorFlow. So, more specifically recreate the WhisperForAudioClassification class in TensorFlow, within the modeling_tf_whisper.py file. (2) I see that there are a lot of additional lines of code within PR #21754 in various files that seem to be "registering" that the Whisper model now supports audio classification. Would I have to add any lines of code similar to this within my PR? Is there any documentation I can take a look at to learn more about this? (or anything that would help me understand more about this task in general) @sanchit-gandhi <|||||>Hi @adit299 Thanks for opening this PR - excited to have this implemented in TF! Regarding your questions: 1) Yes, exactly. 2) Yes, the other (equivalent TF) additions will also need to be added. Some of the additions in #21754 are automatically generated e.g. those in `dummy_pt_objects.py`. There's an in-depth guide to adding TensorFlow models [here](https://huggingface.co/docs/transformers/v4.27.1/en/add_tensorflow_model) which should cover the process. Let us know if there's anything missing or unclear. <|||||>Super cool @adit299! Feel free to ping us if you have any more questions / queries! More than happy to help with the integration here!<|||||> Hello, Just wanted to check in and provide an update. I have finished adding the TFWhisperForAudioClassification class within the modeling_tf_whisper.py file. One question regarding this: (1) Within the modeling_tf_auto.py file I don't see any OrderedDict named TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES (or any OrderedDict that is equivalent to the MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES present within the modeling_auto.py file). I was wondering where the TFWhisperForAudioClassification class should go within the modeling_tf_auto.py file. I will continue work on developing the model tester, and will post any issues I run into here. @sanchit-gandhi <|||||>@adit299 - that's great news on the update! For the auto mapping, if the tensorflow equivalent `TF_MODEL_FOR_XXX` doesn't exist, then it can be added to `modeling_tf_auto.py`. This means this is the first audio classification model to be added for TensorFlow 🔥🔥🔥<|||||>Recently, we merged TensorFlow Wav2Vec2 For Sequence Classification: https://github.com/huggingface/transformers/pull/22073 You could propagate the modelling code changes form this PR onto Whisper as a quick way of getting this working @adit299 (as we do for the PyTorch code)<|||||>By propagate, do you mean just looking at that PR and using the code written for that task as help for this current task? If so, I have already been doing that. If you are referring to some other procedure please do let me know about this as I am not aware. That would certainly help! Questions I had: (1) I noticed that within the Pytorch implementation of the whisper tests, it refers to a class `GenerationTesterMixin` which does not seem to have a similarly named Tensorflow equivalent. Would I have to add this class? I am also confused about what these classes are doing (ex. 
what is TFModelTesterMixin doing, etc.), so any clarification you can provide is appreciated! https://github.com/huggingface/transformers/blob/d204aea7314217fa8b47e7418ead0d9973f50ccd/tests/models/whisper/test_modeling_tf_whisper.py#L926 (2) I was having trouble with translating the test_encoder_outputs method in TensorFlow. Mainly these lines: https://github.com/huggingface/transformers/blob/d204aea7314217fa8b47e7418ead0d9973f50ccd/tests/models/whisper/test_modeling_tf_whisper.py#L963-L966 Again, a bit confused about what `model.to(torch_device)` is doing. I will look into this a bit more, but again any clarifications about what this method is doing would help. Thanks again for the speedy responses! @sanchit-gandhi @amyeroberts <|||||>@adit299 By propagate, we mean apply the equivalent changes from the Wav2Vec2 PR to this PR - it won't be a direct copy-paste, but there will be large proportions in common. It's sounds like this is what you're doing, which is great :) With respect to your questions: 1) GenerationTesterMixin Yes, I don't think this class exists yet and you wouldn't have to add this class as part of this PR. Is there anything that should be added for the TF model tests @gante ? In terms of what these classes are doing, the mixin classes group together related functionality e.g. common tests that should be added to all models. For example, [TFModelTesterMixin](https://github.com/huggingface/transformers/blob/57ffd8ab4c833e26b2288769f6031f94870a102c/tests/test_modeling_tf_common.py#L164) contains tests for the TensorFlow models. This way we can create other classes using a composition of mixins. 2) .to and .eval methods `model.to(...)` is a pytorch specific method. See docs here: https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=#torch.nn.Module.to. It's moving the model onto the specified torch device. `model.eval()` is also a PyTorch method: https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=#torch.nn.Module.to.<|||||>@amyeroberts there is no generation-specific test mixin for TF. `TFModelTesterMixin` has some basic generate checks :)<|||||>Looks cool already @adit299! Let us know if you need a hand with the integration or when you'd like a PR review 🤗<|||||>Thanks for the follow-up @sanchit-gandhi. Currently, I am debugging some of the test failures that I am getting. I also see that 7 more tests within TFModelTesterMixin are failing, but I thought I would resolve the tests failing within the TFWhisperEncoderModelTest class first before moving on to that. This is the error occuring when test_encoder_outputs is run: ``` self = <tests.models.whisper.test_modeling_tf_whisper.TFWhisperEncoderModelTest testMethod=test_encoder_outputs> def test_encoder_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) inputs = copy.deepcopy(self._prepare_for_class(inputs_dict, model_class)) > with tf.stop_gradient: E AttributeError: __enter__ tests/models/whisper/test_modeling_tf_whisper.py:975: AttributeError ``` I believe this error is occuring since TensorFlow's stop_gradient implementation has no __enter__ method defined (https://stackoverflow.com/questions/51427729/python-error-attributeerror-enter). I figured this is the closest equivalent to torch.no_grad, used in the PyTorch implementation which is why I used it. 
If you could let me know a little bit more about what this method is testing and how it works, I think I will be able to solve the error. On I sidenote, I also see the methods freeze_encoder, get_input_embeddings, and set_input_embeddings within the Pytorch implementation. Would I have to implement these as well? What are these methods doing? @amyeroberts <|||||>@adit299 Yes, these methods should also be implemented for the TF model. You can look at similar TF implementations to see how this was done e.g. [here for freezing a module](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#LL1463C4-L1463C4). <|||||>I would say probably we don't need freezing since this is only relevant for fine-tuning, and we don't have a seq2seq ASR fine-tuning script in TF (related https://github.com/huggingface/transformers/pull/22109#discussion_r1194040076)<|||||>Hey @adit299 - feel free to comment here when this PR is ready for review and we can take a look! Seems to be close to completion<|||||>Hey @sanchit-gandhi, apologies for the delay! Yes, this PR is ready for review. I haven't had much luck in getting some tests to pass however. I appreciate any help you guys can provide by taking a look. <|||||>@adit299 Unfortunately, diving into people's PRs to debug isn't something we can do as it's just not a scalable solution with a repo of this size. If you need help from us, then please share a detailed description of the issue, what you've tried already and ideally highlighting any relevant pieces of code. <|||||>Understandable, @amyeroberts . There are 5 tests failing right now. Here is all the information requested (to the best of my knowledge): **FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_compile_tf_model** Error - ``` E TypeError: Exception encountered when calling layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification). E E call() got an unexpected keyword argument 'decoder_input_ids' E E Call arguments received by layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification): E • input_features={'input_features': 'tf.Tensor(shape=(2, 80, 59), dtype=float32)', 'decoder_input_ids': 'tf.Tensor(shape=(1, 2), dtype=int32)'} E • head_mask=None E • encoder_outputs=None E • labels=None E • output_attentions=None E • output_hidden_states=None E • return_dict=None ../../../src/transformers/modeling_tf_utils.py:434: TypeError ``` What I tried - I suspected it had something to do with: https://github.com/adit299/transformers/blob/3d3c7d4213e08d69254edb9c04ac28b3dfbd40f4/tests/test_modeling_tf_common.py#L739C4-L819 But that doesn't seem to be the case. Maybe the Whisper decoder is being mistakenly invoked? I am just not sure. **FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_hidden_states_output - AssertionError: Lists differ: [30, 16] != [60, 16]** Error - ``` ../../test_modeling_tf_common.py:1002: in check_hidden_states_output self.assertListEqual( E AssertionError: Lists differ: [30, 16] != [60, 16] E E First differing element 0: E 30 E 60 E E - [30, 16] E ? ^ E E + [60, 16] E ? ^ ``` The assertion failing is: ``` self.assertListEqual( list(hidden_states[0].shape[-2:]), [self.model_tester.seq_length, self.model_tester.hidden_size], ) ``` What I tried - Not sure about this one. 
**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_pt_tf_model_equivalence - AttributeError: tf_whisper_encoder_17.conv1.weight not found in PyTorch model** Error - ``` E AttributeError: tf_whisper_encoder_17.conv1.weight not found in PyTorch model ../../../src/transformers/modeling_tf_pytorch_utils.py:322: AttributeError ``` What I tried - Not sure about this one as well **FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_resize_token_embeddings - NotImplementedError** Error - `../../../src/transformers/modeling_tf_utils.py:1343: NotImplementedError` What I tried - I think this one is out of my control **FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_save_load - TypeError: Exception encountered when calling layer 'tf_whisper_for_audio_classification_20' (type TFWhisperForAudioClassification** What I tried - connected to the first error, solving that should solve this. Please do let me know if any other clarification is needed! Apologies for the long post! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @adit299, thanks for giving more details about debugging the tests and apologies for the delay in my response. I suggest looking through the [artefacts](https://app.circleci.com/pipelines/github/huggingface/transformers/66686/workflows/4a5167ab-3f48-4107-8037-046b9e22c37f/jobs/830952/artifacts) from the CI run, specifically [failure_long.txt](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/584530217/bbf68a84-eca3-4a58-b435-ccf6fe76ee5e/0/~/transformers/reports/tests_tf/failures_long.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEOCHM3RUWE%2F20230714%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230714T175646Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJL%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIGcglqOGo%2Fe%2F5mwCVxVr%2F8nn4rzUcn4laRIRVkurIB6aAiEA44PuRr%2FWN1FN9Gf0apo5P4mNPaaCINszHTJTGYQS4BIqqwIIGxADGgwwNDU0NjY4MDY1NTYiDP76zxcgkVLBzjO82CqIAvyVau8QZFaLVnQdP%2FeJu9IBQaqvdkg4zBQZJOv0ZMDaUJUrnMWxpJ20n%2BEBFrs7BrKmSZgJ6imY6gcr%2FSJhEtOpVRvgqbz%2FwoRyQXlD9Ob2D0hPuOilJdab9t7un3XGVwqX8UXs2ui4IM0SNWGsVNhVq9%2B5zCLZQqKiJKeeMfBZPvzzL4W2Z5d8Gf%2FEZIQYCTF9GPD9W7YYJt1sL08GUOITLBUdk7Noplt8kMyqbKY%2F5JdsSG6%2B2xdYuLt%2BLyfCcKfOSGwE0FvB4s2opeg7FRcX%2FLzniNM8zWqVxCwDwnxfeXKwT0mzLAu8WVSaddbp%2FwWbCvEeSuWAGY7csHgkaeC3uD7gA1mY0TCwlsalBjqdAcM0csGm7aIw7slQZJqakKelu4UgawGQbET3HBBG%2FBCInREsvUkcEgRMgwrX%2FQYHPumFhgkeDaGhD7qs4IRZcKppDgPlB8AJxi7LxyriYwKdA0XgVkWgpo3ZMwspHQ%2Fqx%2F159F7cdMOmj3xxJY9JhRV0RdcyMBb%2Bj57eq9fYRL3OkCNoFtWTTnOSs2xB2%2BENfZMIVO%2BEQf1xXe6%2BaXM%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=11d02f07765c93cc347a5aa2ee23a984701b3f8ad5dd446f7a825450947f2c78) as they will give you a more detailed error message an trackback to help figure out the issues. **test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_compile_tf_model** I think your suspicions are correct. You'll need to add a new branch in the if/else logic to create the correct inputs for this model. 
**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_hidden_states_output** In this case it seems the sequence length of the hidden states doesn't match what's expected. I would create a model using the test config and check its architecture and the hidden states outputs when passed a dummy input. **test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_pt_tf_model_equivalence** It looks like a weight is in the TF model and not in the PT model. I'd check the params in each model - looking at `tf_model.trainable_weights` and `pt_model.state_dict()` to see if you can identify if this is a case of a weight not being loaded, or a name not properly matched. If you create the TF whisper model with pytorch weights, do you get any warnings about weights being randomly initialized? **test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_resize_token_embeddings - NotImplementedError** This is raised because the model doesn't have a `get_input_embeddings` method implemented. **test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_save_load** From the CI artefacts, it looks like this is failing because of `decoder_input_ids` being in the input
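A hedged sketch of the name-diffing step suggested above, run on the existing PyTorch reference model; the TF side would be compared via `{w.name for w in tf_model.weights}` once the class from this PR builds.

```python
from transformers import WhisperConfig, WhisperForAudioClassification

config = WhisperConfig()  # tiny-ish defaults are enough just to inspect parameter names
pt_model = WhisperForAudioClassification(config)

pt_names = sorted(pt_model.state_dict().keys())
print([n for n in pt_names if "conv1" in n])  # e.g. ['encoder.conv1.bias', 'encoder.conv1.weight']
```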
transformers
22,108
closed
Trainer: let generate pick its inputs
# What does this PR do? Trainer's `predict_with_generate` seems to have been designed for an older version of `.generate()`, where manual selection of the inputs was needed. The current version of `.generate()` can do it on its own. This is particularly relevant for multimodal models, which can take more than one modality as input. As such, this PR removes the `.generate()` input selection logic from Trainer. This PR is a requirement for Amazon's [MM-CoT](https://github.com/amazon-science/mm-cot).
03-11-2023 16:45:10
03-11-2023 16:45:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,107
closed
CLIPTokenizer problems in from pretrained on version 4.25.1
### System Info WARNING:tensorflow:From D:\Stable Diffusion\SD New\stable-diffusion-webui\venv\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2023-03-11 10:21:20.969146: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.25.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.7 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> Yes - Using distributed or parallel set-up in script?: <fill in> I don't know ### Who can help? @amyeroberts @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is what I placed in the Python shell. ```pycon Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import transformers >>> from transformers import CLIPTokenizer >>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\Stable Diffusion\SD New\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer. ``` Other than that, I'm also getting the same issue in the same place just with a longer stack trace in the AUTOMATIC1111 webui and the kohya scripts. Also, in case this helps, "models--openai--clip-vit-large-patch14" does get created during the running of the scripts, but it is empty. Edit: With the webui, when I set the version to 4.19.2 it works, except for the dreambooth extension. Didn't test with the kohya script and version 4.19.2. ### Expected behavior I would expect that the Tokenizer would load properly and that the scripts I was running would work.
03-11-2023 15:43:32
03-11-2023 15:43:32
Hey! As you mention, you are creating an empty directory with the same name. The priority is given to local folders, which explains why you are having this issue. Either delete the empty folder or save a tokenizer inside it 😉 <|||||>I have tried to delete the empty folder, but it just keeps coming back, even when just running the code I mentioned before that I placed into the console. I also don't know how to put a tokenizer into this folder that can't be deleted. Could you please help with that?<|||||>To put the tokenizer in the folder run: ```python tokenizer.save_pretrained("path_to_folder") ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am so sorry that I haven't replied in so long. Work IRL took over my life. Also, transformers 4.19.2 worked well enough because I could just use colab for the khoya scripts. The issue is that the tokenizer won't even be created in the first place. I again tried putting this in the python shell in the webui's virtual env: ```pycon Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import transformers >>> from transformers import CLIPTokenizer >>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\Stable Diffusion\SD 3\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.tokenizer.save_pretrained("path_to_folder") ``` tokenizer doesn't even get created, so I can't use ```tokenizer.save_pretrained("path_to_folder")``` on it. I also tried loading 'openai/clip-vit-base-patch16' and it got the same error. Is there another way to save the transformer snapshot into that folder?<|||||>That very strange, I can't reproduce your error at all. If you open any colab, the script that you share will work. Quick fixes are probably: `pip install --upgrade transformers`. <|||||>I set the transformers version to 4.28.1 and still got the issue. I also tried version 4.26.0. The latest version, 4.28.1, did not work. Something that I thought was weird was where the files were downloaded to. With version 4.19.2, the files download fine. There are a lot of weirdly named files in ```.cache\huggingface\transformers``` one file with a seemingly random extension and random name, followed by a file with the same name, including the extension, plus ```.json```. They all have really small file sizes. ```.cache\clip``` has the ```ViT-L-14.pt``` file in it. It has the file size I would expect. In versions 4.28.1, 4.26.0, and 4.25.1, the file tries to save in ```.cache\huggingface\hub\models--openai--clip-vit-large-patch14```. The docs say that it should be saved in the transformers folder in the huggingface directory. 
This is what I found weird. The docs didn't match up with what I was seeing. I also couldn't change the ```.cache``` directory using the environment variable ```HF_HOME```. Maybe I'm using the wrong hf version? Edit: I actually think I didn't press apply, so maybe that's why the cache location didn't change.
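On the cache-location point at the end of the thread: `HF_HOME` is read when the libraries are imported, so a hedged sketch is to set it in the process environment before any `transformers`/`huggingface_hub` import (or set it system-wide and restart the shell); the Windows path below is just an example.

```python
import os
os.environ["HF_HOME"] = r"D:\hf-cache"   # must be set before the imports below

from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tok.tokenize("a photo of a cat"))
```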
transformers
22,106
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
03-11-2023 14:08:39
03-11-2023 14:08:39
transformers
22,105
open
[WIP] Refactor Deberta/Deberta-v2
# What does this PR do? Refactor both Deberta and DebertaV2 to make them more compatible with the overall transformers library. Should fix a bunch of issues related to torch-scripting with Deberta: - #15216 - #15673 - #16456 - #18659 - #21300 - #20815 - #12436 - #18674 - help support the Prefix_Tuning PEFT approach
03-11-2023 10:43:48
03-11-2023 10:43:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22105). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @ArthurZucker any updates on this? ETA for when it will be merged into main?<|||||>Hey! Just got back from holidays, this should be my main focus in the coming days! <|||||>Sorry! Seem like I had to postpone this! If anyone want to take over feel free to do it, otherwise will be my priority once https://github.com/huggingface/transformers/pull/23909 is merge!<|||||>Regarding the `z_steps` in `DebertaV2Model`: I think this code is relevant for the [enhanced mask decoder of the generator model](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/masked_language_model.py#L51) ```python if attention_mask.dim() <= 2: extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) att_mask = extended_attention_mask.byte() attention_mask = att_mask * att_mask.squeeze(-2).unsqueeze(-1) elif attention_mask.dim() == 3: attention_mask = attention_mask.unsqueeze(1) target_mask = target_ids > 0 hidden_states = encoder_layers[-2] if not self.position_biased_input: layers = [encoder.layer[-1] for _ in range(2)] z_states += hidden_states query_states = z_states query_mask = attention_mask outputs = [] rel_embeddings = encoder.get_rel_embedding() for layer in layers: # TODO: pass relative pos ids output = layer(hidden_states, query_mask, return_att=False, query_states=query_states, relative_pos=relative_pos, rel_embeddings=rel_embeddings) query_states = output outputs.append(query_states) else: outputs = [encoder_layers[-1]] ``` As far as I can tell, they hardcoded z_steps to 2 here. Although it should still be left as 0 for the discriminator. Adding the z_steps to the config seems like a good idea. `z_states` represents [the position embeddings](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/masked_language_model.py#L111), which are non-zero if `position_biased_input` is set to `True`. They are passed from the [embedding layer](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/bert.py#L269). So in order to properly implement this, I think we need to return the position embeddings here: ```python class DebertaV2Embeddings(nn.Module): def forward(self, input_ids=None, token_type_ids=None, position_ids=None, mask=None, inputs_embeds=None): ... return embeddings, position_embeddings ``` and implement the `z_steps` like this: ```python class DebertaV2Model(DebertaV2PreTrainedModel): def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutput]: ... embedding_output, position_embedding_output = self.embeddings( input_ids=input_ids, token_type_ids=token_type_ids, position_ids=position_ids, mask=attention_mask, inputs_embeds=inputs_embeds, ) ... 
if self.z_steps > 0: hidden_states = encoded_layers[-2] layers = [self.encoder.layer[-1] for _ in range(self.z_steps)] position_embedding_output += hidden_states query_states = position_embedding_output query_mask = self.encoder.get_attention_mask(attention_mask) rel_embeddings = self.encoder.get_rel_embedding() rel_pos = self.encoder.get_rel_pos(embedding_output) for layer in layers: query_states = layer( hidden_states, query_mask, output_attentions=False, query_states=query_states, relative_pos=rel_pos, rel_embeddings=rel_embeddings, ) encoded_layers = encoded_layers + (query_states,) ```
transformers
22,104
closed
[Time-Series] time series patching, PatchTST
This PR added a time series patching - PatchTST Fixes https://github.com/huggingface/transformers/issues/22075 @kashif Kashif impl in gluonTS: https://github.com/awslabs/gluonts/pull/2748
03-11-2023 08:31:54
03-11-2023 08:31:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>comment<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>comment<|||||>PatchTST is in Gluon, thanks to @kashif . Closing here :) https://github.com/awslabs/gluonts/pull/2748
transformers
22,103
closed
FLAVA not doing a forward pass
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> **N.B. I do have PyTorch installed, I'm not sure why the tool can't find it:** ``` python -c "import torch; print(torch.__version__)" 2.1.0.dev20230310 ``` ### Who can help? @apsdehal ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior ([also a Colab notebook doing this](https://colab.research.google.com/drive/12f_1jtgJXvk-LT49pWCyUToF6ew70Cel?usp=sharing)): 1. Get a datapoint for a forward pass (`fetch_images` is in the notebook above): ``` pmd = datasets.load_dataset("facebook/pmd", "wit", use_auth_token=True, streaming=True) pmd_train_head = pmd['train'].take(2) pmd_train_head_with_images = pmd_train_head.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": 20}) datapoint = next(iter(pmd_train_head_with_images)) ``` 2. Process the input: ``` from transformers import FlavaProcessor, FlavaForPreTraining processor = FlavaProcessor.from_pretrained("facebook/flava-full") inputs = processor( text=[datapoint['text']], images=[datapoint['image']], return_tensors="pt", padding="max_length", max_length=77, return_codebook_pixels=True, return_image_mask=True, return_attention_mask=True, return_token_type_ids=True, return_special_tokens_mask=True, ) inputs.bool_masked_pos ``` 3. Mask the text input for MLM: ``` from transformers import DataCollatorForLanguageModeling, AutoTokenizer data_collator = DataCollatorForLanguageModeling(processor.tokenizer, mlm=True, mlm_probability=0.4, return_tensors="pt") inputs['input_ids'], inputs['input_ids_masked'] = data_collator.torch_mask_tokens(inputs=inputs['input_ids'], special_tokens_mask=inputs['special_tokens_mask']) del inputs['special_tokens_mask'] ``` 4. Do a forward pass: ``` model = FlavaForPreTraining.from_pretrained("facebook/flava-full") outputs = model(**inputs) loss = outputs.loss print(f"loss: {loss}") ``` ### Expected behavior I would expect the forward pass to not throw errors. 
### Actual behavior ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-14-b821d73f49e9> in <module> 1 model = FlavaForPreTraining.from_pretrained("facebook/flava-full") 2 ----> 3 outputs = model(**inputs) --------------------------------------------------------------------------- /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, input_ids_masked, pixel_values, codebook_pixel_values, attention_mask, token_type_ids, bool_masked_pos, position_ids, image_attention_mask, skip_unmasked_multimodal_encoder, mlm_labels, mim_labels, itm_labels, output_attentions, output_hidden_states, return_dict, return_loss) 1857 ) 1858 -> 1859 flava_masked_output = self.flava( 1860 input_ids=input_ids_masked, 1861 pixel_values=pixel_values, /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, pixel_values, attention_mask, token_type_ids, bool_masked_pos, position_ids, image_attention_mask, skip_multimodal_encoder, output_attentions, output_hidden_states, return_dict) 1403 text_output = None 1404 if input_ids is not None: -> 1405 text_output = self.text_model( 1406 input_ids=input_ids, 1407 attention_mask=attention_mask, /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, output_attentions, output_hidden_states, return_dict) 1061 ) 1062 -> 1063 embedding_output = self.embeddings( 1064 input_ids=input_ids, 1065 token_type_ids=token_type_ids, /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, token_type_ids, position_ids) 417 token_type_ids = torch.zeros(input_shape, 
dtype=torch.long, device=self.position_ids.device) 418 --> 419 inputs_embeds = self.word_embeddings(input_ids) 420 token_type_embeddings = self.token_type_embeddings(token_type_ids) 421 /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 158 159 def forward(self, input: Tensor) -> Tensor: --> 160 return F.embedding( 161 input, self.weight, self.padding_idx, self.max_norm, 162 self.norm_type, self.scale_grad_by_freq, self.sparse) /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2208 # remove once script supports set_grad_enabled 2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2211 2212 IndexError: index out of range in self ```
03-11-2023 07:46:36
03-11-2023 07:46:36
Hi @amariucaitheodor . Thank you for reporting the issue! Could you also copy-paste the error (traceback) you got to your above PR description? Thanks.<|||||>I tried the colab and found the issue. Specifically, the code which is used for calculating input_ids and input_ids_masked is incorrect as the `torch_mask_tokens` [function](https://github.com/huggingface/transformers/blob/a096eaca6554173ecd4c016eb2b10b8e0b2cb245/src/transformers/data/data_collator.py#L748) returns modified input_ids with masking and the corresponding labels. Since the loss is only calculated on the masked tokens, other tokens are set to -100 in the labels. This causes an "index out of range" error down the line in the embeddings' forward.<|||||>Thank you for the reply! I had noticed the same problem. What is then the *correct* way of calculating `input_ids_masked`? The code doesn't work with `DataCollatorForLanguageModeling` for the reasons mentioned above, and there is no other example for doing this.<|||||>Thank you @amariucaitheodor for providing the error log, and thanks @apsdehal for sharing your finding. I will take a look on this issue. But @apsdehal , don't hesitate to share if you have any idea regarding the correct solution ❤️ <|||||>Hello! After looking into the issue with the notebook, here is my finding: - `data_collator.torch_mask_tokens(inputs=inputs['input_ids'], ....)` return two items - the first item is the input ids being masked - the second item indicates: - if a place has value `-100`: it means that places is not masked - otherwise, it gives the original value of that place in `inputs` - The `FlavaForPreTraining` model expect `input_ids_masked` to be the masked inputs, which is the first item prepared above. See https://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/src/transformers/models/flava/modeling_flava.py#L803-L805 - However, in the notebook, you do ```python inputs['input_ids'], inputs['input_ids_masked'] = data_collator.torch_mask_tokens(...) ``` which cause `inputs['input_ids_masked']` to be the 2nd item return ed by `torch_mask_tokens` which is incorrect. In particularly, it contains `-100`, which causes the error. Furthermore, `inputs['input_ids']` is also the wrong value, but it doesn't cause the program to crash. **The solution is just to prepare the correct inputs for the model**: ```python inputs['input_ids_masked'], _ = data_collator.torch_mask_tokens( inputs=inputs['input_ids'], special_tokens_mask=inputs['special_tokens_mask'] ) ``` With this change, I get `loss: 7.162976264953613`. Let me know if you have further question 🤗 <|||||>@ydshieh I don't think this is also correct as `torch_mask_tokens` masks the `input_ids` in place so you will have to clone the `input_ids` before passing them to it.<|||||>@apsdehal Thanks a lot, nice catch! You are 100% correct. @amariucaitheodor Please see this comment too!<|||||>As it turns out that this is not an issue in modeling code in `transformers`, but the wrong preparation of model inputs, I move forward to close the issue. @amariucaitheodor If you still have issues, you can post on [Hugging Face Forums](https://discuss.huggingface.co/). However, if you find other issue(s) you believe that is/are in modeling code, feel free to continue to leave comments here.
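Pulling the fixes from this thread together, a minimal sketch of the corrected input preparation — it reuses the `data_collator` and `inputs` objects from the notebook, and the `mlm_labels` line is an extra illustration (the second value returned by `torch_mask_tokens` is exactly the label tensor `FlavaForPreTraining` expects):

```python
# torch_mask_tokens modifies its input in place, so work on a clone
input_ids_masked, mlm_labels = data_collator.torch_mask_tokens(
    inputs=inputs["input_ids"].clone(),
    special_tokens_mask=inputs["special_tokens_mask"],
)

inputs["input_ids_masked"] = input_ids_masked  # masked ids fed to the masked branch
inputs["mlm_labels"] = mlm_labels              # -100 everywhere except the masked positions
del inputs["special_tokens_mask"]

# inputs can now be passed as model(**inputs)
```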
transformers
22,102
closed
[neptune] fix checkpoint bug with relative out_dir
# What does this PR do? It takes care of the following: * Updates to NeptuneCallback aligned with Neptune 1.0 release : https://docs.neptune.ai/setup/neptune-client_1-0_release_changes/ * It also fixes the case where we silently don't log the model_checkpoints when `output_dir` argument has a relative path of the form `../models`. Ref to the relevant lines of code: https://github.com/huggingface/transformers/blob/3be0e6e4a367dadb453ac31dad46fb665dc28b42/src/transformers/integrations.py#L1352-L1354 https://github.com/huggingface/transformers/blob/3be0e6e4a367dadb453ac31dad46fb665dc28b42/src/transformers/integrations.py#L1296-L1305 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-11-2023 07:32:43
03-11-2023 07:32:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Thanks for taking a look. Will update the PR description in sometime (probably today or early tomorrow) to have more information. Will ping once that is done.<|||||>@sgugger have updated the PR description. Let me know if that gives enough context. Thank you :)<|||||>@sgugger Thanks for the review. @AleksanderWWW is out for this week so we will update the PR once he is back (as he has more context on update for latest neptune version support). Thank you :)!<|||||>Thanks! I believe now all the failing checks will be solved once you rebase your PR on the main branch.<|||||>> Thanks! I believe now all the failing checks will be solved once you rebase your PR on the main branch. @sgugger Thank you for your support
transformers
22,101
closed
[Benchmark] HF Trainer optimizers (Mar-2023)
This is a rerun of Adam torch vs. apex vs HF vs adafactor [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005219385), [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263) but added BNB's 8bit Adam optimizer and probably the software has improved/changed since 14 months as well. note: [8-bit Optimizer](https://github.com/TimDettmers/bitsandbytes) Actually this time it was run on a desktop PCIe 80GB A100 - so not the same hardware as the previous benchmark which was an SXM 40GB A100. I'm using the specially written [HF Trainer benchmarking tool](https://github.com/huggingface/transformers/blob/main/scripts/benchmark/trainer-benchmark.py) that I developed specifically to make such benchmarks trivial to run and automatically get report tables. So I'm running: ``` CUDA_VISIBLE_DEVICES=0 python scripts/benchmark/trainer-benchmark.py --base-cmd ' \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \ --do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \ --max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \ --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \ --source_prefix "translate English to Romanian: " --warmup_steps 50 \ --max_train_samples 20000 --dataloader_num_workers 2 \ ' --target-metric-key train_samples_per_second --repeat-times 1 --variations '--optim adamw_torch|--optim adamw_bnb_8bit|--optim adamw_hf|--optim adafactor|--optim adamw_apex_fused' --report-metric-keys train_loss --base-variation '--optim adamw_torch' ``` You can see that I'm telling the tool to compare 5 optimizers: `adamw_torch`, `adamw_bnb_8bit`, `adamw_hf`, `adafactor`, `adamw_apex_fused`. **Memory usage wise we have per parameter:** - 2 bytes: `adamw_bnb_8bit` - 4 bytes: `adafactor` - 8 bytes: `adamw_torch`, `adamw_hf`, `adamw_apex_fused` *** Setup When publishing benchmarks it's crucial to log the versions that were used while running those, so here we go: ``` Datetime : 2023-03-10 20:55:38 Software: transformers: 4.27.0.dev0 torch : 1.13.1 cuda : 11.7 python : 3.8.15 Hardware: 1 GPUs : NVIDIA A100 80GB PCIe, 79.21GB ``` *** Results Last year's benchmark showed that the speed ups percentage was about the same between fp16/bf16/fp32. Let's see what this year brings plus a new optimizer. ### FP32 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 102.77 | 0 | 2.21 | | --optim adamw_bnb_8bit | 104.99 | 2 | 2.15 | | --optim adamw_hf | 103.64 | 1 | 2.21 | | --optim adafactor | 97.22 | -5 | 2.21 | | --optim adamw_apex_fused | 106.12 | 3 | 2.21 | Observations: - The results are very different from the previous year's benchmark. While Adafactor is still the slowest, the rest are are pretty close by. - Very surprisingly the quantized 8-bit BNB Adam optimizer is faster than pytorch's 8-byte Adam optimizer! While it uses 1/4th of the memory of the latter! And its loss is even better! 
### BF16 (added `--bf16` to the base command line) | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 323.18 | 0 | 2.22 | | --optim adamw_bnb_8bit | 348.29 | 8 | 2.16 | | --optim adamw_hf | 333.07 | 3 | 2.22 | | --optim adafactor | 274.36 | -15 | 2.22 | | --optim adamw_apex_fused | 359.46 | 11 | 2.22 | Observations: - Again BNB beats every other optimizer at loss, while being only second to apex in speed. ### FP16 (added `--fp16` to the base command line) | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 370.09 | 0 | 2.55 | | --optim adamw_bnb_8bit | 383.21 | 4 | 2.45 | | --optim adamw_hf | 373.66 | 1 | 2.55 | | --optim adafactor | 356.84 | -4 | 2.53 | | --optim adamw_apex_fused | 380.50 | 3 | 2.55 | Observations: - Here BNB even managed to beat apex. But since I run each only once it's possible that re-running multiple times might show a slightly different outcome. - Somehow BF16 appears to be slower than fp16 but it gives a much better loss (same loss as fp32). I wonder why?! ### new addition! `--adamw_torch_fused` edit: we added `--adamw_torch_fused` to HF Trainer, which runs almost as fast as `--adamw_apex_fused` - this option requires `torch>=2.0` for fp32 and bf16, and `torch>2.0` for fp16 as some bugs were fixed in `torch==2.0` e.g. here is fp16 comparison: | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 387.10 | 3 | 2.66 | | --optim adamw_torch | 377.61 | 0 | 2.66 | | --optim adamw_apex_fused | 389.49 | 3 | 2.66 |
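For anyone reproducing these numbers outside the benchmark harness: the optimizer is selected with a single string on `TrainingArguments`. A minimal sketch (model/data setup omitted; the output path is a placeholder):

```python
from transformers import TrainingArguments

# Optimizer state per parameter: ~2 bytes for adamw_bnb_8bit, ~4 for adafactor,
# ~8 for adamw_torch / adamw_hf / adamw_apex_fused.
args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=32,
    bf16=True,
    optim="adamw_bnb_8bit",  # or "adamw_torch", "adamw_apex_fused", "adafactor", "adamw_torch_fused"
)
```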
03-11-2023 05:06:50
03-11-2023 05:06:50
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Could you add some Lion benchmarks?<|||||>It's not in the HF Trainer's arsenal of optimizers, if you'd like to make a PR to integrate it then it can be done.
transformers
22,100
closed
Transformers cannot recognise `config.json` even though it is in model directory
I ran this code: ``` import os os.environ['TRANSFORMERS_CACHE'] = 'G:\.cache' from transformers import AutoModelForCausalLM, AutoTokenizer, PretrainedConfig #config = PretrainedConfig('G:\.cache\models--gptjchatbot_model\config.json') tried this but doesn't work well model = AutoModelForCausalLM.from_pretrained("models--gptjchatbot_model") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." ) input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.8, top_p=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] ``` for one of my finetuned custom models and I got this debug prompt (and I'm running a virtual python environment) ``` Traceback (most recent call last): File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 620, in _get_config_dict resolved_config_file = cached_file( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file resolved_file = hf_hub_download( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn validate_repo_id(arg_value) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\huggingface_hub\utils\_validators.py", line 173, in validate_repo_id raise HFValidationError(f"Cannot have -- or .. in repo_id: '{repo_id}'.") huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'models--gptjchatbot_model'. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\gptjchatbot.py", line 8, in <module> model = AutoModelForCausalLM.from_pretrained("models--gptjchatbot_model") File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 434, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 852, in from_pretrained config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 565, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 641, in _get_config_dict raise EnvironmentError( OSError: Can't load the configuration of 'models--gptjchatbot_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models--gptjchatbot_model' is the correct path to a directory containing a config.json file ``` I ran an Anaconda environment on a separate drive and went through the work of changing the cache directory because I have no space on `C:`. Is there a way for config.json to be recognized. It is in the actual folder and I even tried making a subdirectory called `config`. Please help. Thanks.
03-11-2023 04:38:11
03-11-2023 04:38:11
`model = AutoModelForCausalLM.from_pretrained("models--gptjchatbot_model")` cannot work, you need to specify the whole path to the folder on your local steup.<|||||>Modified line 8 `model = AutoModelForCausalLM.from_pretrained("G:\.cache\models--gptjchatbot_model\")` got this error ``` File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\gptjchatbot.py", line 8 model = AutoModelForCausalLM.from_pretrained("G:\.cache\models--gptjchatbot_model\") ^ SyntaxError: unterminated string literal (detected at line 8) ```<|||||>@constantinethegr8 This is a python syntax error resulting from the `\` at the end of the path. The following should work: `model = AutoModelForCausalLM.from_pretrained("G:\.cache\models--gptjchatbot_model")`<|||||>I used this line: ``` model = AutoModelForCausalLM.from_pretrained("G:\.cache\models--gpt4chan_model") ``` like you said but got this new error about dependecies in the conda environment ``` Traceback (most recent call last): File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\gptjchatbot.py.py", line 8, in <module> model = AutoModelForCausalLM.from_pretrained("G:\.cache\models--gptjchatbot_model") File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 434, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 874, in from_pretrained return config_class.from_dict(config_dict, **unused_kwargs) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 688, in from_dict config = cls(**config_dict) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\gptj\configuration_gptj.py", line 139, in __init__ super().__init__( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 332, in __init__ import torch File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\torch\__init__.py", line 128, in <module> raise err OSError: [WinError 182] The operating system cannot run %1. Error loading "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\torch\lib\shm.dll" or one of its dependencies. ```<|||||>@constantinethegr8 The new error being raised is showing that `torch` cannot be imported, which isn't a transformers issue. I suggest trying to reinstall torch in your environment. You can check whether torch is importable and its version by running `python -c "import torch; print(torch.__version__)"` in the terminal. <|||||>so i should not `import torch as pytorch`?<|||||>and I got torch version 1.13.1<|||||>Hi @constantinethegr8, > so i should not `import torch as pytorch`? `import x as y` is just a renaming of the module `x` to `y` in the scope. Although is generally advised against importing with non-canonical names, there's nothing stopping you and I doubt it is the cause of the issue here. > and I got torch version 1.13.1 OK, this means torch is install and in your path. As you mentioned, there's likely some misconfiguration in the conda environment between the dependencies. This isn't a transformers issue. I would recommend creating a new conda environment or searching online to see if there's other people who have encountered the same error message and the solutions they have found. <|||||>Thank you. 
Should I avoid using a conda environment then as I have used regular python with transformers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
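For reference, a small sketch of the path handling that caused the earlier `SyntaxError`: on Windows a trailing backslash escapes the closing quote, so a raw string with no trailing slash (or `pathlib`/forward slashes) avoids it. The folder path here is just the one from this thread:

```python
from pathlib import Path
from transformers import AutoModelForCausalLM

model_dir = Path(r"G:\.cache\models--gptjchatbot_model")  # raw string, no trailing backslash
model = AutoModelForCausalLM.from_pretrained(model_dir)
```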
transformers
22,099
closed
[deepspeed docs] Activation Checkpointing
Adds a section on Activation Checkpointing. Even though we don't support the Deepspeed Activation Checkpointing API, this documents it anyway and clarifies what's what, to help users make the right choices (and not file Issues ;)
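As a rough illustration of the distinction the new section draws (a sketch, not the documented example itself): HF models do activation/gradient checkpointing through `torch.utils.checkpoint`, enabled on the HF side, while DeepSpeed's `activation_checkpointing` config block only configures `deepspeed.checkpointing`, which HF modeling code does not call directly.

```python
from transformers import TrainingArguments

# Supported path: HF gradient checkpointing, enabled via the Trainer args
# (equivalent to calling model.gradient_checkpointing_enable()).
args = TrainingArguments(
    output_dir="output_dir",
    gradient_checkpointing=True,
    deepspeed="ds_config.json",
)

# DeepSpeed's own API: this block configures deepspeed.checkpointing, which only
# takes effect if the modeling code calls that API - HF models do not, so it is a no-op here.
ds_activation_checkpointing = {
    "activation_checkpointing": {
        "partition_activations": True,
        "cpu_checkpointing": True,
        "contiguous_memory_optimization": False,
        "synchronize_checkpoint_boundary": False,
    }
}
```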
03-11-2023 04:27:40
03-11-2023 04:27:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,098
closed
[trainer] fix bug in grad accum with multiple epochs
Please see https://github.com/huggingface/transformers/issues/22082 for the analysis printout of the problem. Basically, we have a bug in the grad accum machinery when `steps_in_epoch % gradient_accumulation_steps != 0`: the optimizer step is gated on the per-epoch check `(step + 1) % gradient_accumulation_steps`, so the leftover steps at the end of an epoch don't trigger a step, and when we cross the epoch boundary we end up accumulating more than `gradient_accumulation_steps` micro-batches in that iteration. I proposed a fix using a total step counter - please feel free to suggest a different fix. I left the debug prints in so you can validate the situation yourself; I will remove them when we're happy. Fixes: https://github.com/huggingface/transformers/issues/22082
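A tiny self-contained sketch of the failure mode (not the Trainer code itself): with `gradient_accumulation_steps=4` and `steps_in_epoch=10`, the per-epoch check lets the two leftover micro-batches of epoch 0 spill into the first window of epoch 1, which then accumulates 6 instead of 4.

```python
gradient_accumulation_steps = 4
steps_in_epoch = 10

accumulated = 0
for epoch in range(2):
    for step in range(steps_in_epoch):
        accumulated += 1
        if (step + 1) % gradient_accumulation_steps == 0:  # per-epoch check (the buggy condition)
            print(f"epoch {epoch}: optimizer step after {accumulated} micro-batches")
            accumulated = 0
# epoch 0 prints 4, 4 - then the 2 leftovers carry over and epoch 1's first window prints 6.
# A global step counter keeps every window at exactly 4.
```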
03-11-2023 00:31:41
03-11-2023 00:31:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,097
closed
t5 remove data dependency
# What does this PR do? Prevents clamping logic from being skipped by torch.onnx tracer by moving data-dependent inf check into fp16-specific training code. Unclamped inf results in NaN being returned during loss calculation and eventually will crash with the following error: ```bash Traceback (most recent call last): File "/home/prathikrao/optimum/examples/onnxruntime/training/translation/run_translation.py", line 680, in <module> main() File "/home/prathikrao/optimum/examples/onnxruntime/training/translation/run_translation.py", line 588, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 373, in train return inner_training_loop( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 658, in _inner_training_loop self.deepspeed.step() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2169, in step self._take_model_step(lr_kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2071, in _take_model_step self.optimizer.step() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1759, in step self._update_scale(self.overflow) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2016, in _update_scale self.loss_scaler.update_scale(has_overflow) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 156, in update_scale raise Exception( Exception: Current loss scale already at minimum - cannot decrease scale anymore. Exiting run. ``` ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada
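A hedged sketch of the kind of change described (not necessarily the exact diff): the data-dependent Python branch on `torch.isinf(...).any()` is what the ONNX tracer bakes in, so the clamp is gated only on the dtype and the inf check is folded into the traced graph via `torch.where`.

```python
import torch

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # Gate only on dtype (static); keep the inf check inside the graph (dynamic).
    if hidden_states.dtype == torch.float16:
        clamp_value = torch.where(
            torch.isinf(hidden_states).any(),
            torch.finfo(hidden_states.dtype).max - 1000,
            torch.finfo(hidden_states.dtype).max,
        )
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```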
03-10-2023 22:35:25
03-10-2023 22:35:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>Cool! Thanks for this contribution! Pretty sure that this can also be applied to `SwitchTransformers` (as it implements the similar procedure) and ~`MT5`~ LongT5<|||||>Let's maybe address this in a follow up PR no? Btw this PR includes the changes for `mt5` (EDIT: you meant `LongT5`)<|||||>@prathikr Would you mind changing the two other models in this PR or would you prefer we followup in a separate PR?<|||||>@sgugger I think a separate PR would be best. Thank you
transformers
22,096
closed
Use torch.TensorDicts: The output of tokenizers.batch_encode_plus/__call__ could be made to inherit from torch TensorDicts
### Feature request Tensor dicts have recently come out as a way to manipulate dicts of tensors in a way that is analogous to pandas, e.g. to make it easy to work on columns of tensors that share some property and a batch dimension (or set of dimensions): https://pytorch.org/rl/tensordict/ An obvious application of this is the output of `tokenizer.batch_encode_plus`. ### Motivation Being able to do a bunch of things on all the subtensors at once would be cool, like `.to`, `.cat`, etc. Having a common interface with tensordict could be fun. ### Your contribution I don't have the bandwidth to handle this myself.
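To make the request concrete, a rough sketch of what this would enable today by wrapping the tokenizer output manually — it assumes the separate `tensordict` package is installed, and the `bert-base-uncased` checkpoint is purely for illustration:

```python
import torch
from tensordict import TensorDict
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["hello world", "tensor dicts are handy"]

encoding = tokenizer(texts, padding=True, return_tensors="pt")
batch = TensorDict(dict(encoding), batch_size=[len(texts)])

# operate on every field at once
batch = batch.to("cuda") if torch.cuda.is_available() else batch
first = batch[0]  # indexes input_ids, attention_mask, ... together
print(first["input_ids"])
```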
03-10-2023 22:20:31
03-10-2023 22:20:31
The result of the tokenizer calls can already interact with the `to` method (note that batch_encode_plus will be deprecated sometime soon) but I agree it could be interesting to look at this! The main challenge I see is that it's not packaged in PyTorch main so would require an extra dep...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,095
closed
Fix big model inference for T5 models in float16
# What does this PR do? This PR fixes big model inference for large T5 models. The problem is that T5 models have some weights kept in float32, which interferes with the computation of `infer_auto_device_map`. Accelerate adds the functionality to deal with this in [this PR](https://github.com/huggingface/accelerate/pull/1179), and a patch release will be out soon with the fix in a release. When this is done, this PR can be merged so the fix can be used. With this I can do ```py from transformers import T5ForConditionalGeneration, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained('google/flan-ul2') model = T5ForConditionalGeneration.from_pretrained('google/flan-ul2', device_map = 'auto', torch_dtype=torch.float16) input_string = 'Answer the following question by reasoning step by step. I start with 10 bananas. A monkey eats three of them, and then gives me an avocado. How many bananas do I have left?' inputs = tokenizer(input_string, return_tensors = 'pt').to('cuda:0') outputs = model.generate(inputs['input_ids'], max_length = 200) print(tokenizer.decode(outputs[0])) ``` whereas before this went OOM.
03-10-2023 21:53:43
03-10-2023 21:53:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,094
closed
SearchSummarizationPipeline - 'object has no attribute' error
I am facing the below error while initializing SearchSummarizationPipeline. I am using latest haystack version and python 3.10. ``` pipe3 = SearchSummarizationPipeline(summarizer=summarizer,retriever=retriever,generate_single_summary=False,return_in_answer_format=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\wing\AppData\Local\Programs\Python\Python310\lib\site-packages\haystack\pipelines\standard_pipelines.py", line 424, in __init__ self.pipeline.add_node(component=summarizer, name="Summarizer", inputs=["Retriever"]) File "C:\Users\wing\AppData\Local\Programs\Python\Python310\lib\site-packages\haystack\pipelines\base.py", line 424, in add_node component_definitions[name] = component._component_config AttributeError: 'SummarizationPipeline' object has no attribute '_component_config' ```
03-10-2023 21:23:04
03-10-2023 21:23:04
transformers
22,093
closed
Revert "[GPT2] Propose fix for #21080"
Reverts huggingface/transformers#21853 A few PT/TF and PT/Flax cross tests started to fail after #21853. Revert that PR for now. We need to look what went wrong (and why it is not reported in the PR CI) ``` FAILED tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_pt_tf_model_equivalence - AssertionError: 3.225274 not less than or equal to 1e-05 : outputs.last_hidden_state: Difference between torch and tf is 3.225274085998535 (>= 1e-05). FAILED tests/models/encoder_decoder/test_modeling_tf_encoder_decoder.py::TFGPT2EncoderDecoderModelTest::test_pt_tf_model_equivalence - AssertionError: 0.3753911 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.3753910958766937 (>= 1e-05). FAILED tests/models/vision_encoder_decoder/test_modeling_tf_vision_encoder_decoder.py::TFViT2GPT2EncoderDecoderModelTest::test_pt_tf_model_equivalence - AssertionError: 0.42845678 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.42845678329467773 (>= 1e-05). ``` A job run with failed tests is [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59606/workflows/fd5cb028-f3df-4ac6-83ae-b0cc68396af8/jobs/728991)
03-10-2023 20:56:21
03-10-2023 20:56:21
Tested locally - all good now. Will merge once CI is green too (already discussed offline)
transformers
22,092
closed
New metrics and different Loss in TF version of Segformer
### Feature request I am playing with the awesome [Segformer finetuning example on the Keras website](https://keras.io/examples/vision/segformer/) made by @sayakpaul that relies on HF Transformers. In this example, no loss function or metrics are specified in `model.compile()`. I would like to be able to add metrics (e.g., IoU, Dice, etc.) and potentially change the loss for the Segformer model. When I tried to make these additions, the compile step failed. (From reading the Segformer paper and original code, it seems like all metrics and losses need some form of masking?) Any advice or info on how to implement these changes would be awesome (and I apologize in advance if I have missed the relevant docs - I did look!). (Based on comms with @sayakpaul, I am also cc:ing @Rocketknight1.) ### Motivation Track various metrics during the fine-tuning of the Segformer model. ### Your contribution I think once I understand the solution steps I would be able to determine if I could contribute.
03-10-2023 20:45:34
03-10-2023 20:45:34
Interesting! Can you document exactly what error you got with the compile step and what code you ran to cause them?<|||||>Hi @Rocketknight1 I misremembered- the error is not after `model.compile()` - compiling a model with a different loss function, added metrics, a custom loss, or custom metrics all compile w/ no error. The errors appear with `model.fit()` . So far I have tried to fit a model with a range of things : - a custom loss (my own version of Dice Loss) - added metrics (`tf.keras.metrics.MeanIoU()` and/or a (custom) Dice metric) - using KLDivergence loss (`tf.keras.losses.KLDivergence()`) All produce errors during `model.fit()`, and all produce their own sets of errors.. All of them seem to me to be some type of tensorshape issue, but the Tracebacks are all different. To make sure its not just me, my colleague @dbuscombe-usgs has also tried, and also reported similar issues (with different datasets, different number of classes, different TF versions, different machines, etc.). I can provide a reference dataset and the scripts I am working with, if needed...<|||||>Yes please! Ideally if you could give us some minimal code that reproduces the issue, that would make it much easier for us to track it down. Also, sorry for the delay in replying here - I was away on St. Patrick's Day so I'm only getting to my GitHub backlog now!<|||||>I have done it myself.  Lol to much green beer! Sent from Yahoo Mail for iPhone On Monday, March 20, 2023, 2:40 PM, Matt ***@***.***> wrote: Yes please! Ideally if you could give us some minimal code that reproduces the issue, that would make it much easier for us to track it down. Also, sorry for the delay in replying here - I was away on St. Patrick's Day so I'm only getting to my GitHub backlog now! — Reply to this email directly, view it on GitHub, or unsubscribe. You are receiving this because you are subscribed to this thread.Message ID: ***@***.***> <|||||>Hi @Rocketknight1 , sorry for the delay. Attached below is some code and example image & label pairs all zipped up. Let me know if you prefer another format/delivery mechanism ``` |- TFSegFormerExample.py L ExampleData | - images L labels ``` on L165 of the code is the compile step, and different versions of the model can be commented/uncommented to see the various error codes: L168 is the base case, where no loss function is defined - this works L171 defines SparseCatLoss - this does not work L174 defines KLD loss - this does not work L171 defines no loss but uses meanIoU as a metric - this does not work (These look like tensorshape issues to me, and typically i would debug it by looking at the last layer shape of `model.summary()`.. but the output of `model.summary()` for this model is not super expressive for this model, I'm not quite sure why - but maybe that is a whole different question) [TFSegformerExample.zip](https://github.com/huggingface/transformers/files/11056233/TFSegformerExample.zip) <|||||>Ah, I see! The issue here is caused by some specific behaviour of the SegFormer models when using inputs of this resolution. The model outputs are actually at a lower resolution than the inputs - you can check this by manually passing in a batch. The output logits come out at 128x128, whereas the input is 512x512. This results in the loss computation failing because the logit and label tensors can't be aligned with each other. 
If you use the model's [internal loss computation](https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/segformer/modeling_tf_segformer.py#L793-L811) by not passing any loss argument to `compile()`, then logits are upscaled before applying the cross-entropy loss and training works correctly. If you want to use your own custom loss function you'll have to do something similar. I'm not sure exactly why the output resolution for SegFormer is different from the input resolution, but it's not a bug in the Hugging Face TensorFlow implementation because the original model and our PyTorch implementation do this as well. @sayakpaul do you know why the model does that?<|||||>thx for that code highlight @Rocketknight1 , super helpful and i understand it now -- i would need a similar upsampling routine. related Q to finding the output resolution - is there a reason that `summary()` does not provide info on all the layers/internal architecture of the model? <|||||>> @sayakpaul do you know why the model does that? It's very likely because of how the model is designed and how it accumulates the multiple-resolution features and decodes them into a segmentation map. @NielsRogge might have better inputs. > i would need a similar upsampling routine. You can check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) that has this. ```py def compute_metrics(eval_pred): logits, labels = eval_pred # logits are of shape (batch_size, num_labels, height, width), so # we first transpose them to (batch_size, height, width, num_labels) logits = tf.transpose(logits, perm=[0, 2, 3, 1]) # scale the logits to the size of the label logits_resized = tf.image.resize( logits, size=tf.shape(labels)[1:], method="bilinear", ) # compute the prediction labels and compute the metric pred_labels = tf.argmax(logits_resized, axis=-1) metrics = metric.compute( predictions=pred_labels, references=labels, num_labels=num_labels, ignore_index=-1, reduce_labels=image_processor.do_reduce_labels, ) # add per category metrics as individual key-value pairs per_category_accuracy = metrics.pop("per_category_accuracy").tolist() per_category_iou = metrics.pop("per_category_iou").tolist() metrics.update( {f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)} ) metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) return {"val_" + k: v for k, v in metrics.items()} ``` > related Q to finding the output resolution - is there a reason that summary() does not provide info on all the layers/internal architecture of the model? That is because we wrap everything as layers, and that has a limitation like this one. We do this to support cross-loading from PyTorch (because of variable naming). @Rocketknight1 might have more to add to this. <|||||>Yeah, refactoring our TF models to make `summary()` more usable is absolutely on the list! Unfortunately it's quite a big list, but it's definitely there.<|||||>Awesome, thanks so much for all the helpful info @Rocketknight1 & @sayakpaul . I can close this issue now as i understand the landscape much better and it seems the requested feature is already on your list! thanks again - i really appreciate it!
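To close the loop on the custom-loss question, a hedged sketch of a loss that does the same upsampling as the model's internal loss before it is passed to `compile()` — the shapes follow the `compute_metrics` snippet quoted above, and ignore-index handling for void pixels is omitted for brevity:

```python
import tensorflow as tf

def upsampled_sparse_ce(labels, logits):
    # logits: (batch, num_labels, H/4, W/4) -> (batch, H/4, W/4, num_labels)
    logits = tf.transpose(logits, perm=[0, 2, 3, 1])
    # bring the logits up to the label resolution before computing the loss
    logits = tf.image.resize(logits, size=tf.shape(labels)[1:3], method="bilinear")
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction="none"
    )
    return tf.reduce_mean(loss_fn(labels, logits))

# model.compile(optimizer="adam", loss=upsampled_sparse_ce)
```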
transformers
22,091
closed
flan-t5-xl and flan-t5-xxl model deployment on Sagemaker fails on deploying from HuggingFace Hub
### System Info Code to replicate from model hub - https://huggingface.co/google/flan-t5-large/tree/main -> Deploy -> Amazon SageMaker endpoint -> AWS ``` from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':'google/flan-t5-xl', 'HF_TASK':'text2text-generation' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type ) predictor.predict({ 'inputs': "The answer to the universe is" }) ``` The endpoint invocation fails with below error - `2023-03-10 19:15:14,508 [INFO ] W-google__flan-t5-xl-5-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory /.sagemaker/mms/models/google__flan-t5-xl or `from_tf` and `from_flax` set to False.` This is possibly because if you look at files under "Files and Versions"( [Link](https://huggingface.co/google/flan-t5-xl/tree/main)) the model has been split up into multiple (pytorch_model-00001-of-00002.bin files because of the size) and the out of the box solution is looking for one pytorch_model.bin file and failing ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Provided code in the issue description ### Expected behavior Should work out of the box on SageMaker deployment
03-10-2023 19:49:08
03-10-2023 19:49:08
@philschmid could you please help here? I've gone though your workaround [here](https://www.philschmid.de/deploy-t5-11b)<|||||>You need to install a more recent version of Transformers, 4.17.0 won't support sharded checkpoints.<|||||>@rags1357 you can check out this blog post: [Deploy FLAN-T5 XXL on Amazon SageMaker](https://www.philschmid.de/deploy-flan-t5-sagemaker) <|||||>Thank you @sgugger and @philschmid , will try it with the updated Transformers version<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
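For completeness, a hedged sketch of the fix discussed above — the only real change is requesting a more recent Hugging Face DLC; the exact `transformers_version`/`pytorch_version`/`py_version` combination must be one SageMaker actually provides, and a GPU instance is a more realistic choice for an XL/XXL model:

```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

hub = {
    "HF_MODEL_ID": "google/flan-t5-xl",
    "HF_TASK": "text2text-generation",
}

huggingface_model = HuggingFaceModel(
    transformers_version="4.26.0",  # sharded checkpoints need transformers >= 4.18
    pytorch_version="1.13.1",
    py_version="py39",
    env=hub,
    role=role,
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)
```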
transformers
22,090
closed
Add TF port of BLIP
Work in progress right now, will update this when it's closer to being ready!
03-10-2023 17:10:42
03-10-2023 17:10:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>The TF port is mostly complete now and tests are passing locally - I just need to go around updating docs and auto classes and so on. The main code should be ready for review!<|||||>Looks like there are many comments to address for now. Please ping me again when it's ready for second review!<|||||>Got through a lot of the comments today, but I have a couple of other things to do - will try to finish them tomorrow!<|||||>The last remaining big issue is that some of the pt-tf equivalence tests fail when weights don't match up between models. This is caused by the cross-attention weights not being built, presumably because those layers aren't being called in the forward pass. I'm working on figuring out why and resolving that!<|||||>The issue seems to be that in all of our other models, cross-attention layers are only added when `config.add_cross_attention` is True, but in the case of BLIP it only checks `config.is_decoder`. As a result, the PyTorch models often initialize cross-attention layers that aren't used, which causes weight mismatch issues for us in crossloading tests, because TF only creates weights on first use.<|||||>It's coming, don't worry! This cross-attention behaviour is just very odd and I'm trying to track it down first<|||||>Hi all! I've addressed all comments and local tests look good. The remaining issues are: - Converting checkpoints so the tests don't need `from_pt` - Maybe adding more auto classes I'm not sure about the auto classes, though - they're missing in the original PT version of the model as well, so this didn't seem like the right PR to add them. <|||||>cc @sgugger - I think this is ready for a final review at last!<|||||>Got it, I'll figure out some way to re-enable those tests, or override them with versions that do work!<|||||>@sgugger this should be ready for review with all comments addressed! The failing test is in an unrelated model<|||||>@sgugger Sorry for the confusion - that equivalence test is present in both the `test_modeling_tf_blip` and `test_modeling_blip` file. Do we want to keep it in both?<|||||>Yes we do.<|||||>Going to leave the `pt-to-tf` changes in this PR rather than making a separate one, since they're needed for proper BLIP conversion!
transformers
22,089
closed
[`Gpt-neo-x`] Fix gpt neo-x multi gpu training
# What does this PR do? This PR attempts to solve some issues users may encounter when training gpt-neo-x on multiple GPUs. Related: https://github.com/lvwerra/trl/pull/210 This PR might not be needed, so I'm putting it up as a draft for now
03-10-2023 14:22:14
03-10-2023 14:22:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, that's also what I thought after opening the PR. I'll find a workaround to set everything on the correct device
transformers
22,088
closed
Key error during Training
We are trying to do text summarization using a BERT2BERT model, but we are facing the error below: The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) [/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py](https://localhost:8080/#) in get_loc(self, key, method, tolerance) 3361 return self._engine.get_loc(casted_key) 3362 except KeyError as err: -> 3363 raise KeyError(key) from err 3364 3365 if is_scalar(key) and isna(key) and not self.hasnans: KeyError: 6868 Link to the Google Colab notebook: ![error img](https://user-images.githubusercontent.com/90444415/224336228-f39d1e76-3542-4028-9eb4-c0aded3bc762.png) [https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing](url)
03-10-2023 14:05:31
03-10-2023 14:05:31
Without seeing the code you run or the whole traceback, there is nothing we can do to help you.<|||||>The code is at the link below; you can access it: [https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing](url)<|||||>> Without seeing the code you run or the whole traceback, there is nothing we can do to help you. The code is at the link below; you can access it: [https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing](https://github.com/huggingface/transformers/issues/url)<|||||>The link does not point to anything.<|||||>Sorry about that. Please use the link below: https://github.com/kashalakarthik/BERT-Hugging-face (copy and paste the link into your browser if it does not work). The links to download the dataset are also mentioned in the repository.<|||||>You are not passing a `dataset` to your Trainer but a pandas dataframe, so this can't work. The Trainer only accepts PyTorch `Dataset` objects.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
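A minimal sketch of the suggested fix: wrap the pandas dataframe in a `datasets.Dataset` and tokenize it before passing it to the `Trainer`. The column names and checkpoint below are hypothetical placeholders, not taken from the linked notebook.

```python
# Sketch with hypothetical column names ("article", "summary") and checkpoint.
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer

train_df = pd.DataFrame({"article": ["a long news article ..."], "summary": ["a short summary"]})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    model_inputs = tokenizer(batch["article"], max_length=512, truncation=True, padding="max_length")
    labels = tokenizer(batch["summary"], max_length=128, truncation=True, padding="max_length")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = Dataset.from_pandas(train_df)
train_dataset = train_dataset.map(preprocess, batched=True, remove_columns=train_dataset.column_names)
# trainer = Seq2SeqTrainer(..., train_dataset=train_dataset, ...)  # instead of the raw dataframe
```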
transformers
22,087
closed
Add AutoModelForZeroShotImageClassification
# What does this PR do? Adds `AutoModelForZeroShotImageClassification` and `TFAutoModelForZeroShotImageClassification` to transformers. CC @MKhalusova, who will be adding a task guide in a separate PR ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-10-2023 13:44:45
03-10-2023 13:44:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>The CI tests are failing due to unrelated issues, I will rebase the PR to the main branch once they are fixed.
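A usage sketch of the auto class added in this PR (the CLIP checkpoint, image URL and candidate labels are only illustrative):

```python
# Illustrative sketch of the new AutoModelForZeroShotImageClassification class.
import requests
import torch
from PIL import Image
from transformers import AutoModelForZeroShotImageClassification, AutoProcessor

checkpoint = "openai/clip-vit-base-patch32"  # example checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a photo of two cats", "a photo of a dog"]

inputs = processor(text=candidate_labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```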
transformers
22,086
closed
GPT-J specific half precision on CPU note
This PR adds a note to the GPT-J model doc indicating that the half precision on CPU example is specific to the model, and doesn't generally apply. Related to https://github.com/huggingface/transformers/issues/21989
03-10-2023 13:33:02
03-10-2023 13:33:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>My bad, I misunderstood the interaction in the issue. Tried the example, and it does indeed not work. I'll do another pass.<|||||>Did another take. It should now be clear that half-precision works on CUDA devices only. Btw, not sure if this is relevant but I tried this example without explicitly sending to device on a GPU, and it threw the same error.
transformers
22,085
closed
Some problems when using vit model
### Feature request I am a novice in AI. Recently, I have been learning the ViT model. There are many tutorials about NLP in the documentation. I want to know how to load a ViT model downloaded from the Hugging Face Hub, and how to use the ImageNet dataset to train and test the ViT model. I can't find a relevant tutorial. My question is really very basic. If you can answer my question with some simple code, I will be grateful. ### Motivation Better use of the transformers library. ### Your contribution No contribution at present.
03-10-2023 13:04:25
03-10-2023 13:04:25
Better use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
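Since the question is about loading and using ViT, here is a minimal inference sketch with a public checkpoint (fine-tuning on ImageNet would additionally require the `Trainer` and an image dataset, which the forums and task guides cover):

```python
# Minimal inference sketch with a publicly available ViT checkpoint;
# assumes a reasonably recent transformers release (AutoImageProcessor).
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTForImageClassification

checkpoint = "google/vit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```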
transformers
22,084
closed
input_ids_seq_length is always 1
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.0 (gpu) - Jax version: 0.3.13 - JaxLib version: 0.3.10 - Using GPU in script?: yes I am trying to generate output that is equal in length to the input (partially to avoid hallucinations and repetitions). In src/transformers/generation/utils.py I read how input length is determined: If self.config.is_encoder_decoder (which is the case for me), input_ids_seq_length calculates the length of the input ids coming from _prepare_decoder_input_ids_for_generation, which makes a tensor with dimension (batch_size, 1) filled with start_tokens. This means the input_ids_seq_length is always 1, making it useless for determining the input length (and determining the output length based on that). ### Who can help? @sgugger @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The problem arises in a script of my own, but this example also highlights it: (the task I am working on is not summarization but grammar correction, that's why I want the input length to be equal to the output length) ``` from transformers import AutoTokenizer, T5ForConditionalGeneration, GenerationConfig tokenizer = AutoTokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") config = GenerationConfig(max_new_tokens=0) input_ids = tokenizer("summarize: My friends are cool but they eat too many carbs.", return_tensors="pt").input_ids outputs = model.generate(input_ids, generation_config=config) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Expected behavior I would expect the output length to be determined by the input length + max_new_tokens: generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length This is the case, but input_ids_seq_length is (incorrectly) always 1, making the output length independent of the input and equal to max_new_tokens+1.
03-10-2023 12:00:15
03-10-2023 12:00:15
cc @gante <|||||>Hey @ChrisSpraaklab 👋 In both types of models, `input_ids_seq_length` is relative to the output of the model, which is different for encoder-decoder (does not contain the prompt) and decoder-only models (contains the prompt). I agree that we might benefit from a rework there, for clarity :) In any case, let's sort out your immediate issue! As the argument indicates, `max_new_tokens` will make the model generate up to `max_new_tokens` new tokens. As such, if you want to generate an output equal to the input, you'll have to set `max_new_tokens=input_ids.shape[1]`. Also, bear in mind that encoder-decoder models ALWAYS start the output with a BOS token. As such, the length of the output will be the length of the input + 1.<|||||>@gante Thanks for your quick response. However, what I mean is that when input_ids_seq_length is set to input_ids.shape[-1], this value is always equal to 1 (as it comes from _prepare_decoder_input_ids_for_generation). ``` # 5. Prepare `input_ids` which will be used for auto-regressive generation if self.config.is_encoder_decoder: input_ids = self._prepare_decoder_input_ids_for_generation( batch_size, decoder_start_token_id=generation_config.decoder_start_token_id, bos_token_id=generation_config.bos_token_id, model_kwargs=model_kwargs, device=inputs_tensor.device, ) else: input_ids = inputs_tensor if model_input_name == "input_ids" else model_kwargs.pop("input_ids") # 6. Prepare `max_length` depending on other stopping criteria. input_ids_seq_length = input_ids.shape[-1] has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None if has_default_max_length and generation_config.max_new_tokens is None: warnings.warn( f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. " "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we" " recommend using `max_new_tokens` to control the maximum length of the generation.", UserWarning, ) elif generation_config.max_new_tokens is not None: generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length if not has_default_max_length: logger.warn( f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(=" f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. " "Please refer to the documentation for more information. " "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)", UserWarning, ) ``` In my understanding, doing as you suggested would make this line equivalent to 1+1, as `max_new_tokens=input_ids.shape[1]` (equal to 1) and `input_ids_seq_length = input_ids.shape[-1]` (equal to 1) ``` generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length ```<|||||>@ChrisSpraaklab inside generate, in encoder-decoder models like T5, `input_ids` is related to the decoder input ids. They are not the same as the `input_ids` you feed to `.generate()`, which will be used inside the encoder. 
Sadly, because `.generate()` is used with many types of models, we have this naming clash :) Have you tried running ```py from transformers import AutoTokenizer, T5ForConditionalGeneration, GenerationConfig tokenizer = AutoTokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("summarize: My friends are cool but they eat too many carbs.", return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=input_ids.shape[1]) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ?<|||||>Thanks! Your solution does indeed produce the result I was looking for. I was just quite confused about the naming convention and documentation around max_new_tokens. I was under the impression that its value would be added to the length of in the input of the encoder, not the decoder. However, I now understand why it doesn't behave as I expected it to. <|||||>So... despite that we input a token sequence `input_ids` in the `generate()` function, the length of this is irrelevant in the encoder-decoder model, and the `max_new_tokens` in `generate()` only refers to the length of the decoder input, which, because of BOS, is [always 1](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1274) in our case. Yes, this is somewhat confusing indeed. Are there ways to motivate `generate()` to be more concise, but still run until EOS is generated, e.g., by setting a prior on the EOS? <|||||>Hey @davidavdav -- yeah, you can try using Beam Search (i.e. `num_beams>1`) and pass a NEGATIVE [`length_penalty`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.length_penalty). This will nudge the output towards shorter outputs!<|||||>BTW, if you come across better variable names, by all means, please suggest them :) We have so many features on our to-do list (including better docs) that every little help is precious!<|||||>Ah thanks, @gante---I do appreciate the difficulty of choosing sensible parameter/variable names, the number of times I am refactoring names back and forth in my own code is quite scary!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
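Putting the thread's advice into one hedged sketch: tie `max_new_tokens` to the encoder input length, or use beam search with a negative `length_penalty` to favour shorter outputs that still end on EOS (the exact values are illustrative):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("summarize: My friends are cool but they eat too many carbs.",
                      return_tensors="pt").input_ids

# 1) Cap new tokens at the encoder input length (the decoder output also includes a leading start token).
out = model.generate(input_ids, max_new_tokens=input_ids.shape[1])

# 2) Nudge beam search towards shorter outputs with a negative length penalty.
out_short = model.generate(input_ids, num_beams=4, length_penalty=-1.0, max_new_tokens=64)

print(tokenizer.decode(out[0], skip_special_tokens=True))
print(tokenizer.decode(out_short[0], skip_special_tokens=True))
```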
transformers
22,083
closed
[Safetensors] Add explicit flag to from pretrained
## Give the user more control when loading safetensors It would be very helpful if we could give the user more control over loading safetensors checkpoints by adding a `use_safetensors` flag to `from_pretrained`. At the moment, `safetensors` weights are **always** loaded when `safetensors` is installed and are **silently** not loaded if `safetensors` is installed, **but** the model has no `safetensors` weights. By giving the user the option to pass `use_safetensors=True/False` we enable two new use cases: 1.) If a user only wants to load models from safetensors checkpoints, they can now pass `use_safetensors=True`, which will lead to an error if no safetensors checkpoints are available => this can then e.g. give the user the guarantee that no pickle is used when loading checkpoints 2.) If a user doesn't want to load `safetensors` checkpoints but doesn't want to uninstall the library either, they can now pass `use_safetensors=False`, which will never load safetensors checkpoints. This is super helpful for testing as well. Also, this feature would unblock this PR in `diffusers`: https://github.com/huggingface/diffusers/pull/2123
03-10-2023 11:33:38
03-10-2023 11:33:38
_The documentation is not available anymore as the PR was closed or merged._
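A sketch of the API this PR proposes; the argument name matches the description above, and the behaviour shown is the intended one rather than a guarantee for any particular released version:

```python
from transformers import AutoModel

# Proposed: fail loudly if no safetensors weights exist for the checkpoint,
# guaranteeing that no pickle-based .bin file is ever loaded.
model = AutoModel.from_pretrained("bert-base-uncased", use_safetensors=True)

# Proposed: ignore safetensors weights even when the library is installed,
# e.g. to exercise the .bin loading path in tests.
model = AutoModel.from_pretrained("bert-base-uncased", use_safetensors=False)
```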
transformers
22,082
closed
Inconsistent training steps between Trainer and DeepSpeed
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes DeepSpeed general environment info: torch install path ............... ['/home/fenia/anaconda3/envs/benchmark/lib/python3.8/site-packages/torch'] torch version .................... 1.12.0+cu113 deepspeed install path ........... ['/home/fenia/anaconda3/envs/benchmark/lib/python3.8/site-packages/deepspeed'] deepspeed info ................... 0.8.1, unknown, unknown torch cuda version ............... 11.3 torch hip version ................ None nvcc version ..................... 11.8 deepspeed wheel compiled w. ...... torch 1.12, cuda 11.3 ### Who can help? @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hello! There seems to be some incosistency in the number of training steps when using DeepSpeed with HF trainer. It looks like DeepSpeed is doing things correctly but ends up training more steps in order to match Trainer. They both continue training even after learning rate has dropped to 0. From the official examples: ``` ds_config_zero2={ "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 10, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` ``` DISTRIBUTED_ARGS="--nproc_per_node 2 --nnodes 1 --node_rank 0 --master_addr localhost --master_port 6000" python -m torch.distributed.launch $DISTRIBUTED_ARGS \ run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 1 \ --do_train \ --output_dir /tmp/test-clm2 \ --max_train_samples=148 \ --gradient_accumulation_steps=16 \ --overwrite_output_dir \ --max_steps=200 \ --logging_steps=10 \ --deepspeed="ds_config_zero2.json" ``` I attach the training output: [output.txt](https://github.com/huggingface/transformers/files/10934213/output.txt) The same behavior is observed even if training with Trainer+DeepSpeed on a single GPU. ### Expected behavior Expected number of steps should match between Trainer and DeepSpeed logging. Thank you very much in advance!
03-10-2023 11:24:58
03-10-2023 11:24:58
Thank you for the great and easy to reproduce report, @fenchri Indeed, you found a grad accumulation bug in HF Trainer. This is not an bug in DeepSpeed or its integration. I did: ``` diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py index 344523842..a75110ee9 100755 --- a/src/transformers/trainer.py +++ b/src/transformers/trainer.py @@ -1886,6 +1886,7 @@ class Trainer: if step % args.gradient_accumulation_steps == 0: self.control = self.callback_handler.on_step_begin(args, self.state, self.control) + print(f"HF STEP {step+1}") if ( ((step + 1) % args.gradient_accumulation_steps != 0) and args.local_rank != -1 ``` and now running w/o deepspeed: ``` python -m torch.distributed.launch --nproc_per_node 1 --nnodes 1 --node_rank 0 \ --master_addr localhost --master_port 6000 \ examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path sshleifer/tiny-gpt2 --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 --per_device_train_batch_size 2 \ --per_device_eval_batch_size 1 --do_train --block_size 10 --output_dir \ output_dir --max_train_samples=148 --gradient_accumulation_steps=16 \ --overwrite_output_dir --max_steps=10 --logging_steps=1 ``` Since you set `--max_train_samples=148 --gradient_accumulation_steps=16` at step 8->9 the dataset wraps over, but the grad accum counter ignores the wrapping and waits for `((step + 1) % args.gradient_accumulation_steps != 0)` so when we run it, we get: ``` [skipped the first 7 grad acc cycles] {'loss': 10.823, 'learning_rate': 1.5e-05, 'epoch': 1.65} 70%|███████████████████████████████████████████████████████████████████████▍ | 7/10 [00:00<00:00, 10.01it/s] HF STEP 49 HF STEP 50 HF STEP 51 HF STEP 52 HF STEP 53 HF STEP 54 HF STEP 55 HF STEP 56 HF STEP 57 HF STEP 58 HF STEP 59 HF STEP 60 HF STEP 61 HF STEP 62 HF STEP 63 HF STEP 64 {'loss': 10.8266, 'learning_rate': 1e-05, 'epoch': 1.86} 80%|█████████████████████████████████████████████████████████████████████████████████▌ | 8/10 [00:01<00:00, 10.01it/s] HF STEP 65 HF STEP 66 HF STEP 67 HF STEP 68 HF STEP 69 HF STEP 70 HF STEP 71 HF STEP 72 HF STEP 73 HF STEP 74 HF STEP 1 HF STEP 2 HF STEP 3 HF STEP 4 HF STEP 5 HF STEP 6 HF STEP 7 HF STEP 8 HF STEP 9 HF STEP 10 HF STEP 11 HF STEP 12 HF STEP 13 HF STEP 14 HF STEP 15 HF STEP 16 {'loss': 17.593, 'learning_rate': 5e-06, 'epoch': 2.22} 90%|███████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:01<00:00, 11.05it/s] HF STEP 17 HF STEP 18 HF STEP 19 HF STEP 20 HF STEP 21 HF STEP 22 HF STEP 23 HF STEP 24 HF STEP 25 HF STEP 26 HF STEP 27 HF STEP 28 HF STEP 29 HF STEP 30 HF STEP 31 HF STEP 32 {'loss': 10.8249, 'learning_rate': 0.0, 'epoch': 2.43} ``` you can see that between iteration 8 and 9 there are more than 16 grad accumulation steps happening. ------------- Until this is fixed, specifically to your needs, @fenchri - as long as you're using deepspeed the grad accumulation is performed correctly since it performs it on its own. 
But you end up running more than steps than specified.<|||||>Hmm, actually looking at earlier steps, this appears to be odd as well: ``` {'loss': 10.8252, 'learning_rate': 3e-05, 'epoch': 0.86} 40%|████████████████████████████████████████▊ | 4/10 [00:00<00:01, 5.14it/s] HF STEP 65 HF STEP 66 HF STEP 67 HF STEP 68 HF STEP 69 HF STEP 70 HF STEP 71 HF STEP 72 HF STEP 73 HF STEP 74 HF STEP 75 HF STEP 76 HF STEP 77 HF STEP 78 HF STEP 79 HF STEP 80 HF STEP 81 HF STEP 82 HF STEP 83 HF STEP 84 HF STEP 85 HF STEP 86 HF STEP 87 HF STEP 88 HF STEP 89 HF STEP 90 ``` it did 9 additional dataset pulls here as well (25 instead of 16), and this is not at the grad accum boundary edit: ah, it's because bs=2, so it hits the rollover already at step 4->5, that's why. <|||||>ok, actually I came up with a fix, will push shortly for you to try Please try https://github.com/huggingface/transformers/pull/22098 <|||||>Thanks @stas00 for having a look and apologies for the late reply. Indeed, the fix resolves the issue! :tada: On a related note, the computation happening [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1654) seems to chop `num_update_steps_per_epoch` even if even if `drop_last` is False. This results in having `100` training epochs instead of `87`, which then gets printed [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1740). Nevertheless, with the current fix the training stops at the desired number of steps, so should be fine :) I am happy to open another issue related to this though if you think is needed :) Thank you!<|||||>> Thanks @stas00 for having a look and apologies for the late reply. Indeed, the fix resolves the issue! tada excellent! Thank you for testing the PR, @fenchri > I am happy to open another issue related to this though if you think is needed :) yes, please. One Issue at a time.
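A simplified, self-contained illustration of the counting problem described above (toy code, not the actual `Trainer` implementation): when the accumulation boundary is tied to the epoch-local step counter, a dataloader that wraps around mid-cycle makes one optimizer step accumulate more micro-batches than intended.

```python
# Toy reproduction of the bookkeeping issue; all numbers are illustrative.
num_batches_in_epoch = 74   # e.g. 148 samples / per-device batch size 2
accumulation_steps = 16

def optimizer_steps_buggy(max_updates):
    updates, accumulated = 0, 0
    while updates < max_updates:
        for step in range(num_batches_in_epoch):        # step resets to 0 every epoch
            accumulated += 1
            if (step + 1) % accumulation_steps == 0:    # boundary tied to the epoch-local step
                print(f"update {updates}: accumulated {accumulated} micro-batches")
                updates, accumulated = updates + 1, 0
                if updates >= max_updates:
                    return

optimizer_steps_buggy(max_updates=6)
# The update that straddles the epoch boundary accumulates 74 % 16 + 16 = 26
# micro-batches instead of 16, because the step counter restarts with the dataloader.
```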
transformers
22,134
closed
How can I create a repository automatically when defining the `Trainer`?
### Describe the bug I'm trying to fine-tune XLM-RoBERTa model on a German corpus for NER task. To handle the training loop I'm using the 🤗 Transformers `Trainer`, so first I need to define the training attributes using the `TrainingArguments` class: ````python from transformers import TrainingArguments # Set the number of epochs, batch size, and logging steps num_epochs = 3 batch_size = 24 logging_steps = len(panx_de_encoded["train"]) // batch_size # Define the model name model_name = f"{xlmr_model_name}-finetuned-panx-de" # Define the training arguments for the model training_args = TrainingArguments( output_dir=model_name, # Directory to save model checkpoints and outputs log_level="error", # Logging level num_train_epochs=num_epochs, # Number of training epochs per_device_train_batch_size=batch_size, # Batch size per device for training per_device_eval_batch_size=batch_size, # Batch size per device for evaluation evaluation_strategy="epoch", # Evaluate model's prediction on the validation set at the end of each epoch save_steps=1e6, # Save checkpoint every 1000000 steps (i.e., disable checkpointing to speed up training) weight_decay=0.01, # Weight decay for optimizer disable_tqdm=False, # Whether to show progress bar during training logging_steps=logging_steps, # Determines the number of steps between each logging message push_to_hub=True # Whether to push the model to the Hugging Face model hub ) ```` * Next, I log in to the hugging face hub with `Write` role and define the `Trainer` as follows: ````python from transformers import Trainer trainer = Trainer(model_init=model_init, # A function that instantiates the model to be used args=training_args, # Arguments to tweak for training data_collator=data_collator, compute_metrics=compute_metrics, train_dataset=panx_de_encoded["train"], eval_dataset=panx_de_encoded["validation"], tokenizer=xlmr_tokenizer) ```` Unfortunately, I have the following error: ````python Cloning https://huggingface.co/ahmad1289/xlm-roberta-base-finetuned-panx-de into local empty directory. --------------------------------------------------------------------------- CalledProcessError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in clone_from(self, repo_url, token) 691 self.local_dir, --> 692 env=env, 693 ) /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_subprocess.py in run_subprocess(command, folder, check, **kwargs) 68 cwd=folder or os.getcwd(), ---> 69 **kwargs, 70 ) /opt/conda/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs) 511 raise CalledProcessError(retcode, process.args, --> 512 output=stdout, stderr=stderr) 513 return CompletedProcess(process.args, retcode, stdout, stderr) CalledProcessError: Command '['git', 'lfs', 'clone', 'https://user:[email protected]/ahmad1289/xlm-roberta-base-finetuned-panx-de', '.']' returned non-zero exit status 2. 
During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /tmp/ipykernel_23/987298996.py in <module> 8 train_dataset=panx_de_encoded["train"], 9 eval_dataset=panx_de_encoded["validation"], ---> 10 tokenizer=xlmr_tokenizer) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers) 401 # Create clone of distant repo and output directory if needed 402 if self.args.push_to_hub: --> 403 self.init_git_repo() 404 # In case of pull, we need to make sure every process has the latest. 405 if is_torch_tpu_available(): /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in init_git_repo(self) 2551 self.args.output_dir, 2552 clone_from=repo_name, -> 2553 use_auth_token=use_auth_token, 2554 ) 2555 except EnvironmentError: /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs) 122 ) 123 --> 124 return fn(*args, **kwargs) 125 126 return _inner_fn # type: ignore /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in __init__(self, local_dir, clone_from, repo_type, token, git_user, git_email, revision, skip_lfs_files, client) 516 517 if clone_from is not None: --> 518 self.clone_from(repo_url=clone_from) 519 else: 520 if is_git_repo(self.local_dir): /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs) 122 ) 123 --> 124 return fn(*args, **kwargs) 125 126 return _inner_fn # type: ignore /opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in clone_from(self, repo_url, token) 731 732 except subprocess.CalledProcessError as exc: --> 733 raise EnvironmentError(exc.stderr) 734 735 def git_config_username_and_email( OSError: WARNING: 'git lfs clone' is deprecated and will not be updated with new flags from 'git clone' 'git clone' has been updated in upstream Git to have comparable speeds to 'git lfs clone'. Cloning into '.'... remote: Repository not found fatal: repository 'https://huggingface.co/ahmad1289/xlm-roberta-base-finetuned-panx-de/' not found Error(s) during clone: git clone failed: exit status 128 ````` It appears that the model repository with the name `xlm-roberta-base-finetuned-panx-de` does not currently exist. However, as described in the [Hugging Face course](https://huggingface.co/course/en/chapter4/3?fw=pt), the `push_to_hub()` function (which should be used later in the notebook) handles both the creation of the repository and the push of the model and tokenizer files to that repository. Is there anything else that I might be missing? 
* [Full notebook](https://github.com/ahmad-alismail/NLP-with-Transformers/blob/master/4-nlp-with-transformers-multilingual-ner.ipynb) ### System info ```shell - huggingface_hub version: 0.12.1 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Running in iPython ?: Yes - iPython shell: ZMQInteractiveShell - Running in notebook ?: Yes - Running in Google Colab ?: No - Token path ?: /root/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: 2.7.11 - Tensorflow: 2.11.0 - Torch: 1.13.0 - Jinja2: 3.1.2 - Graphviz: 0.8.4 - Pydot: 1.4.2 - Pillow: 9.3.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_HUB_OFFLINE: False - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False {'huggingface_hub version': '0.12.1', 'Platform': 'Linux-5.15.89+-x86_64-with-debian-bullseye-sid', 'Python version': '3.7.12', 'Running in iPython ?': 'Yes', 'iPython shell': 'ZMQInteractiveShell', 'Running in notebook ?': 'Yes', 'Running in Google Colab ?': 'No', 'Token path ?': PosixPath('/root/.cache/huggingface/token'), 'Has saved token ?': False, 'Configured git credential helpers': '', 'FastAI': '2.7.11', 'Tensorflow': '2.11.0', 'Torch': '1.13.0', 'Jinja2': '3.1.2', 'Graphviz': '0.8.4', 'Pydot': '1.4.2', 'Pillow': '9.3.0', 'hf_transfer': 'N/A', 'ENDPOINT': 'https://huggingface.co', 'HUGGINGFACE_HUB_CACHE': '/root/.cache/huggingface/hub', 'HUGGINGFACE_ASSETS_CACHE': '/root/.cache/huggingface/assets', 'HF_HUB_OFFLINE': False, 'HF_TOKEN_PATH': '/root/.cache/huggingface/token', 'HF_HUB_DISABLE_PROGRESS_BARS': None, 'HF_HUB_DISABLE_SYMLINKS_WARNING': False, 'HF_HUB_DISABLE_IMPLICIT_TOKEN': False, 'HF_HUB_ENABLE_HF_TRANSFER': False} ```
03-10-2023 11:13:05
03-10-2023 11:13:05
Hi @ahmad-alismail, thanks for reporting this. ~If you don't mind I'll transfer this issue to the `transformers` repo and rename it. A breaking change has been introduced in [`huggingface_hub==0.12.0`](https://github.com/huggingface/huggingface_hub/releases/tag/v0.12.0). Since then, `Repository` does not handle the repo creation if it does not exist on the Hub.~ ~It seems that the `Trainer` push_to_hub method does not handle the repo creation before calling `Repository`, which now fails. This has to be fixed [here](https://github.com/huggingface/transformers/blob/b90fbc7e0ba41dfd6b343e7e2274443f19087f36/src/transformers/trainer.py#L3555). In the meantime, you need to manually create the repo before using `Trainer.push_to_hub` or downgrade to `huggingface_hub==0.11.1`.~ ~@sgugger @ydshieh I'll open a PR today to fix this.~ **EDIT:** I cannot transfer the issue to `transformers` (most likely because I'm not a maintainer there), so if someone can do it :pray: **EDIT 2:** it seems that the repo creation [is already handled](https://github.com/huggingface/transformers/blob/b90fbc7e0ba41dfd6b343e7e2274443f19087f36/src/transformers/trainer.py#L3430) in the `Trainer` class. @sgugger @ydshieh any idea why `create_repo` was not called?<|||||>@ahmad-alismail which version of `transformers` do you have?<|||||>Yeah, it looks like the line numbers in the error traceback differ from the current code by > 1000. Better to know which `transformers` version is used here.<|||||>Hi @Wauplin @ydshieh, thanks for your reply! The version of `transformers` is 4.11.3<|||||>@ahmad-alismail Could you try to update the `transformers` package to the latest release (4.26.1) and re-run your script? Version 4.11.3 was released [in September 2021](https://pypi.org/project/transformers/#history) and is therefore outdated.<|||||>@Wauplin It's working perfectly! I truly appreciate your help – thank you so much!
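For anyone stuck on an older `transformers` release and hitting the clone error above, a hedged sketch of the manual workaround mentioned in the thread (upgrading `transformers` remains the real fix): create the Hub repository yourself before instantiating the `Trainer`.

```python
# Workaround sketch: make sure the repo exists on the Hub before the Trainer tries to clone it.
from huggingface_hub import create_repo

repo_id = "ahmad1289/xlm-roberta-base-finetuned-panx-de"  # user/repo from the error message
create_repo(repo_id, exist_ok=True)  # requires a token with write access, e.g. via `huggingface-cli login`

# ... then define TrainingArguments(push_to_hub=True, ...) and the Trainer as before.
```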
transformers
22,081
closed
Fix gradient checkpointing bug in switch transformer
This PR fixes a bug that a user can encounter while using generate with models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-10-2023 11:04:37
03-10-2023 11:04:37
_The documentation is not available anymore as the PR was closed or merged._
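The fix applied by this family of gradient-checkpointing PRs generally follows a recurring pattern in the library; a simplified, self-contained sketch, not the exact diff:

```python
# Simplified sketch of the usual guard added inside a decoder's forward pass.
import logging

import torch.nn as nn

logger = logging.getLogger(__name__)


class ToyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.gradient_checkpointing = True  # toggled by gradient_checkpointing_enable() in real models

    def forward(self, hidden_states, use_cache=True):
        # Cached key/value states and gradient checkpointing do not mix, so caching is
        # disabled (with a warning) while training with checkpointing enabled.
        if self.gradient_checkpointing and self.training and use_cache:
            logger.warning("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
            use_cache = False
        # ... run the (possibly checkpointed) decoder layers here ...
        return hidden_states, (() if use_cache else None)


decoder = ToyDecoder()
decoder.train()
decoder(hidden_states=None)  # logs the warning and disables the cache
```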
transformers
22,080
closed
Fix gradient checkpointing bug in Speecht5
This PR fixes a bug that a user can encounter while using generate with models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-10-2023 10:57:42
03-10-2023 10:57:42
Sorry fixed that<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@KMFODA it is possible that this is the wrong branch -- the diff doesn't contain changes to speecht5 :)<|||||>Yes the diff seems to have changed a bit, can you please double check 🙏 @KMFODA ?<|||||>Sorry yes I think the branches were mixed up. This should now introduce the changes to SpeechT5 and fix the formatting issues @gante spotted in [modeling_speech_to_text.py](https://github.com/huggingface/transformers/pull/22080/commits/4da5f4baaf4eddb71787ba4dbbbd3791953e481a).<|||||>Perfect, thank you for the changes @KMFODA 💛
transformers
22,079
closed
Fix gradient checkpointing bug in Speech2Text
This PR fixes a bug that a user can encounter while using generate with models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-10-2023 10:48:51
03-10-2023 10:48:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry fixed that<|||||>@KMFODA your fork probably needs to pull from `main` :p
transformers
22,078
closed
Generate - Fix broken documentation links
# What does this PR do? Fixes #22077 `main_classes` is not on the same level as `generation_strategies`, hence the broken link. EDIT: confirmed that it works in the CI docs.
03-10-2023 09:45:02
03-10-2023 09:45:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
22,077
closed
Broken link in Documentation
### System Info @sgugger Where I started: https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.contrastive_search What doesn't exist: https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/generation_strategies https://huggingface.co/docs/transformers/main/en/main_classes/generation_strategies ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Go to the links above ### Expected behavior The pages would exist.
03-10-2023 08:57:24
03-10-2023 08:57:24
Hey @datavistics -- the problem is not present on `main`, but rather in the previous release (which we can't fix unless we make a new release). See these docs: https://huggingface.co/docs/transformers/main/en/main_classes/text_generation<|||||>I still get the broken link here: https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.contrastive_search That page links to v4.26.1 in one of its links.<|||||>Should be fixed by #22078 on main.