Dataset columns:
repo: string (1 distinct value)
number: int64 (1 to 25.3k)
state: string (2 distinct values)
title: string (length 1 to 487)
body: string (length 0 to 234k)
created_at: string (length 19)
closed_at: string (length 19)
comments: string (length 0 to 293k)
transformers
21,974
closed
[DETR, YOLOS] Fix device bug
# What does this PR do? This PR fixes a device bug in the `post_process_object_detection` methods of DETR and YOLOS, which currently raise a CPU/CUDA device mismatch when the model is run on CUDA. The PR also ensures the post-processing methods are covered by the integration tests.
03-06-2023 16:47:05
03-06-2023 16:47:05
_The documentation is not available anymore as the PR was closed or merged._
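Below is a minimal sketch of the scenario the PR above addresses, assuming the `facebook/detr-resnet-50` checkpoint, a hypothetical local `example.jpg`, and a CUDA device; it is an illustration, not the PR's test code.

```python
# Running DETR on CUDA and post-processing: before the fix, the CPU target_sizes tensor
# combined with CUDA model outputs could trigger a device-mismatch error inside
# post_process_object_detection.
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

image = Image.open("example.jpg")  # hypothetical local image
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")

inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width), lives on CPU
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
print(results["scores"], results["labels"], results["boxes"])
```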
transformers
21,973
closed
GitHub Private Vulnerability reporting
### Feature request Please enable private vulnerability reporting in the GitHub repository. https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository ### Motivation I may have identified a low-impact vulnerability and would like to report it privately via the usual channel and request a CVE in that case. ### Your contribution The report I will submit once the feature is enabled.
03-06-2023 16:13:51
03-06-2023 16:13:51
cc @Michellehbn <|||||>Hi @Sim4n6, Thanks for reaching out to us! 🤗 We have a bug bounty program with HackerOne and would love for you to submit security vulnerability reports to https://hackerone.com/hugging_face. This is a private program so we will need to invite you. Do you happen to have an H1 username? Or you can send [email protected] an email and we'll send you an invite! <|||||>it is [Sim4n6](https://hackerone.com/sim4n6?type=user). No problem.<|||||>Invite sent! Thanks again! <|||||>Done.
transformers
21,972
closed
Support for `Flax` Trainer
### Feature request A `Trainer` class for Flax, similar to PyTorch/TensorFlow (deprecating in v5). ### Motivation Training HuggingFace models in `PyTorch` has become more accessible thanks to the `Trainer` class and extensive documentation and tutorials. Similarly, in `TensorFlow`, training a model requires only a single line of code (`model.fit`), and it makes sense to deprecate the current trainer to avoid redundancy. However, training `Flax` models currently requires PyTorch-style boilerplate code, which a `Trainer` class would help eliminate. Making a Flax `Trainer` available would allow for really fast training on GPUs/TPUs, and with weight-conversion support one can instantly convert the model to `TensorFlow`/`PyTorch` for inference and deployment. ### Your contribution I can submit the PR. cc @sanchit-gandhi since it's related to Flax.
03-06-2023 16:08:17
03-06-2023 16:08:17
This is also something we do not want to add or maintain for Flax, as researchers usually dislike Trainer classes very much.<|||||>Understood, thanks for such a quick reply :hugs:
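For context on the boilerplate mentioned in the request above, here is a hand-rolled, hypothetical Flax/JAX train step of the kind a Flax `Trainer` would have wrapped; the batch layout, loss, and optimizer choice are assumptions for illustration only.

```python
# A minimal jitted train step for a Flax transformers model, written by hand.
import jax
import optax
from flax.training import train_state


def create_state(model, learning_rate=3e-4):
    # model is a Flax transformers model; its weights live on model.params
    return train_state.TrainState.create(
        apply_fn=model.__call__, params=model.params, tx=optax.adamw(learning_rate)
    )


@jax.jit
def train_step(state, batch, dropout_rng):
    labels = batch.pop("labels")  # assumed batch layout: tokenizer outputs plus integer labels

    def loss_fn(params):
        logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
        return optax.softmax_cross_entropy_with_integer_labels(logits, labels).mean()

    loss, grads = jax.value_and_grad(loss_fn)(state.params)
    return state.apply_gradients(grads=grads), loss
```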
transformers
21,971
closed
No clear documentation for enabling padding in FeatureExtractionPipeline
### System Info - `transformers` version: 4.26.0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The documentation for the [FeatureExtractionPipeline](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/pipelines#transformers.FeatureExtractionPipeline) does not currently provide instructions on how to use the truncation and padding arguments. These arguments can be passed in the `tokenize_kwargs` parameter, which is parsed by the [self._sanitize_parameters](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/pipelines/feature_extraction.py#L58) method. While the `truncation` argument can also be passed as a separate keyword argument, the `padding` argument can only be recognized if it is included in `tokenize_kwargs`. To improve clarity, it would be beneficial for the documentation to explicitly state that the existence of `tokenize_kwargs` parameter for passing tokenizer arguments and add that only the `truncation` argument can be used as a keyword argument, while other tokenizer parameters should be included in `tokenize_kwargs`. I can submit a PR to add the documentation if it sounds good to you! ### Expected behavior Below is the code to show that how padding should be used in FeatureExtractionPipeline. ```python example = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank." tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False) pipeline_without_padding_as_an_argument = pipeline( "feature-extraction", model="bert-base-uncased", tokenizer=tokenizer, return_tensors=True, ) pipeline_with_padding_as_an_argument = pipeline( "feature-extraction", model="bert-base-uncased", tokenizer=tokenizer, padding="max_length", return_tensors=True, ) pipeline_with_padding_in_kwarg = pipeline( "feature-extraction", model="bert-base-uncased", tokenizer=tokenizer, tokenize_kwargs={"padding": "max_length"}, return_tensors=True, ) print( pipeline_without_padding_as_an_argument(example).shape ) # torch.Size([1, 22, 768]) print( pipeline_with_padding_as_an_argument(example).shape ) # torch.Size([1, 22, 768]) padding = max_length not working print( pipeline_with_padding_in_kwarg(example).shape ) # torch.Size([1, 512, 768]) padding = max_length working ```
03-06-2023 15:12:33
03-06-2023 15:12:33
I simplified a bit for future readers: ```python example = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank." pipeline_without_padding_as_an_argument = pipeline( "feature-extraction", model="bert-base-uncased", return_tensors=True, ) pipeline_with_padding_in_kwarg = pipeline( "feature-extraction", model="bert-base-uncased", tokenize_kwargs={"padding": "max_length"}, return_tensors=True, ) ``` Also please note that `padding: "max_length"` makes unnecessary long tensors in most cases, slowing down the overal inference of your model. I would refrain heavily from using it in a pipeline. `{"padding": True}` should be better.<|||||>That makes sense. Do we need to make it clear in the documentation about how to use `tokenize_kwargs`?<|||||>PRs to make it clearer are welcome for sure ! <|||||>@Narsil Submitted PR #22031 that adds the `tokenize_kwargs` definition in `FeatureExtrationPipeline`. Thanks!<|||||>Thanks it looks great.
transformers
21,970
closed
Unable to load a pretrained model
### System Info - `transformers` version: 4.18.0 - Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - Huggingface_hub version: 0.2.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no OSError: Can't load config for '~/Unixcoder/model/changesets_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '~/Unixcoder/model/changesets_model' is the correct path to a directory containing a config.json file I am trying to load a second model during the course of training the first model. I am able to load the model for the first time, but not for the second time(even though all the config files are present) I am trying to load the model downloaded from https://huggingface.co/microsoft/unixcoder-base/tree/main ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. load the model from downloaded version of https://huggingface.co/microsoft/unixcoder-base/tree/main 2. train the model on custom dataset 3. load the similar model that is pretrained on different dataset ### Expected behavior model is successfully loaded
03-06-2023 14:42:22
03-06-2023 14:42:22
Could you paste the code you are using? ```py from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base") model = AutoModel.from_pretrained("microsoft/unixcoder-base") ``` works perfectly fine.<|||||>```py from transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup, RobertaConfig, RobertaModel, RobertaTokenizer) tokenizer = RobertaTokenizer.from_pretrained(args.model_name_or_path) config = RobertaConfig.from_pretrained(args.model_name_or_path) model = RobertaModel.from_pretrained(args.model_name_or_path) ............................. model = RobertaModel.from_pretrained('./changesets_model') model = Model(model) model.to(args.device) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,969
closed
Add check before int casting for PIL conversion
# What does this PR do? Adds safeguards when images are converted to PIL images: * Additional condition if inferring `do_rescale` * Raise an error if values cannot be cast to `uint8` The PIL library is used for resizing images in image processors. If not explicitly set, whether or not to rescale pixel values is inferred from the input type: if the values are floats, they are multiplied by 255. If the input image has integer values between 0 and 255 but is of floating dtype, these pixels are rescaled to values in [0, 65025]. This results in overflow errors when casting to `uint8` [here](https://github.com/huggingface/transformers/blob/bc33fbf956eef62d0ba8d3cd67ee955ad5defcdb/src/transformers/image_transforms.py#L162) before converting to a `PIL.Image.Image`. Fixes #21915 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
03-06-2023 14:38:39
03-06-2023 14:38:39
_The documentation is not available anymore as the PR was closed or merged._
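A small NumPy sketch of the overflow described in the PR above (values are illustrative): a float-typed image whose values already sit in [0, 255] gets rescaled a second time, so the later `uint8` cast wraps around.

```python
import numpy as np

image = np.array([[0.0, 128.0, 255.0]])  # float dtype, but values are already in 0-255
rescaled = image * 255                   # inferred do_rescale multiplies again, up to 65025
print(rescaled.astype(np.uint8))         # overflow: values wrap instead of staying in 0-255
```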
transformers
21,968
closed
[TF] Fix creating a PR while pushing in TF framework
# What does this PR do? Fixes #21967, where models in TF can't push and open a PR. A test should probably be added
03-06-2023 14:33:19
03-06-2023 14:33:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yep, adding this<|||||>I had a question, the `create_pr` function parameter is not available in the corresponding `PyTorch` or `Flax` implementation, I was wondering if this is intended. <|||||>Not sure I follow, torch already has this parameter.
transformers
21,967
closed
[TF] Can't open a pr when pushing
Creating a PR when uploading a model should work. ```python >>> from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration >>> import jax.numpy as jnp >>> model = FlaxT5ForConditionalGeneration.from_pretrained("./art/flan-ul2", dtype = jnp.bfloat16, from_pt = True) >>> model.push_to_hub("google/flan-ul2", use_auth_token = "XXXXX",create_pr = True) >>> del model >>> model = TFT5ForConditionalGeneration.from_pretrained("./art/flan-ul2", from_pt = True) >>> model.push_to_hub("google/flan-ul2", use_auth_token = "XXXXX",create_pr = True) File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 2986, in push_to_hub self.create_model_card(**base_model_card_args) TypeError: create_model_card() got an unexpected keyword argument 'create_pr' ```
03-06-2023 14:32:52
03-06-2023 14:32:52
cc @gante
transformers
21,966
closed
Use larger atol in `torch.allclose` for some tests
# What does this PR do? When running CI against torch 2.0 (and therefore CUDA 11.7), some tests for `BridgeTowerModel` failed: - test_disk_offload - test_cpu_offload - test_model_parallelism While with torch `1.13.1` (CUDA 11.6) the difference between `base_output` and `new_output` in these 3 tests is 0.0, we get differences of `1e-7 ~ 3e-6` with torch `2.0`. **Deep debugging reveals that the first non-zero difference occurs when `nn.MultiheadAttention` is called.** This PR increases the atol to `1e-5` (the default is `1e-8`) in `ModelTesterMixin`. If we don't feel comfortable with this for all model tests, we can override these 3 tests in `BridgeTowerModelTest`. (But as more models start using `nn.MultiheadAttention`, it's best to keep this larger value in the common testing file.)
03-06-2023 14:30:42
03-06-2023 14:30:42
_The documentation is not available anymore as the PR was closed or merged._
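For reference, a minimal illustration of the tolerance change (the difference used below is in the `1e-7 ~ 3e-6` range mentioned above):

```python
import torch

base = torch.zeros(3)
new = base + 3e-6                            # difference of the magnitude observed on torch 2.0
print(torch.allclose(base, new))             # False: default atol=1e-8 (rtol cannot help near zero)
print(torch.allclose(base, new, atol=1e-5))  # True with the relaxed tolerance
```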
transformers
21,965
closed
[🛠️] Fix-whisper-breaking-changes
# What does this PR do? Should fix the backward compatibility issue with `model.config.forced_decoder_ids = ...` and should help users who want to generate with timestamps. Fixes #21937 and #21878
03-06-2023 10:20:34
03-06-2023 10:20:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Test are failing because we do not check the `model.config` updating and will look good! <|||||>The same should now be applied to both the `TF` and the `Flax` version as the overwriting of the `generate` function is also supported. Will open a follow up PR for these <|||||>For TF and flax #21334
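A hedged sketch of the config-based usage the PR above keeps working; the checkpoint, language, and task are illustrative, and the audio preprocessing step is only indicated in comments.

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# the pattern some users relied on: store the forced decoder prompt on the config
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")

# input_features = processor(raw_audio, sampling_rate=16_000, return_tensors="pt").input_features
# predicted_ids = model.generate(input_features)
```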
transformers
21,964
closed
Add BridgeTowerForContrastiveLearning
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-06-2023 07:16:56
03-06-2023 07:16:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger We have addressed all of your review in the latest commit. We have also added tests for BridgeTowerForContrastiveLearning. Would you please review the latest commit? We are looking forward to having this PR merged into main. Thanks<|||||>Thanks for adding this model @abhiwand @tileintel ! Great to see it merged into the library :) For the model tests, some of the configuration values in `BridgeTowerModelTester` result in large models being created and used in the test suite e.g. `vocab_size = 50265` set [here](https://github.com/huggingface/transformers/blob/bcc8d30affba29c594320fc80e4a4422fb850175/tests/models/bridgetower/test_modeling_bridgetower.py#L97), which results in periodic OOM errors in the CI runs. Could you add a follow up PR for `BridgeTowerModelTester` and `BridgeTowerModelTest` to have smaller default values to create lighter tests? A good reference for this would be [CLIP](https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_clip.py). Ideally we would also have a similar structure of test classes for the different modalities i.e. `BridgeTower[Text|Vision]ModelTest(er)`.
transformers
21,963
closed
Fix bert issue
# What does this PR do? This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes issue #21737 for Bert. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.(#21737) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younesbelkada, @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-06-2023 05:31:32
03-06-2023 05:31:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada or @gante, Can anyone of you please help me with the error I am getting after running fix-copies. Running fix-copies made changes to multiple files(19) and now I am getting tests_torch, tests_torch_and_tf error on it.<|||||>Hi @saswatmeher Thanks for the PR! I would probably try: 1- `pip install --upgrade -e .["quality"]` and then run `make fix-copies` Let us know if this works!<|||||>@younesbelkada It worked. Thanks!
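A hedged, self-contained sketch of the kind of guard such fixes add in the encoder's forward pass (not the exact diff): when gradient checkpointing is active during training, caching is disabled with a warning instead of erroring out later in `generate`.

```python
import logging

logger = logging.getLogger(__name__)


def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    # gradient checkpointing recomputes activations, which conflicts with caching past key/values
    if gradient_checkpointing and training and use_cache:
        logger.warning("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
        return False
    return use_cache
```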
transformers
21,962
closed
use datasets streaming mode in trainer ddp mode cause memory leak
### System Info pytorch 1.11.0 py 3.8 cuda 11.3 transformers 4.26.1 datasets 2.9.0 ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction import os import time import datetime import sys import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler torch.manual_seed(42) from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM from transformers import AdamW, get_linear_schedule_with_warmup hf_model_path ='./Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) tokenizer.add_special_tokens({'pad_token': '<|pad|>'}) from datasets import load_dataset gpus=8 max_len = 576 batch_size_node = 17 save_step = 5000 gradient_accumulation = 2 dataloader_num = 4 max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus #max_step = -1 print("total_step:%d"%(max_step)) import datasets datasets.__version__ dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True) print('load over') shuffled_dataset = dataset.shuffle(seed=42) print('shuffle over') def dataset_tokener(example,max_lenth=max_len): example['text'] = list(map(lambda x : x.strip()+'<|endoftext|>',example['text'] )) return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest") #return tokenizer(example[0], truncation=True, max_length=max_lenth, padding="max_length") new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"]) print('map over') configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False) model = AutoModelForCausalLM.from_pretrained(hf_model_path) model.resize_token_embeddings(len(tokenizer)) seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) from transformers import Trainer,TrainingArguments import os print("strat train") training_args = TrainingArguments(output_dir="./test_trainer", num_train_epochs=1.0, report_to="none", do_train=True, dataloader_num_workers=dataloader_num, local_rank=int(os.environ.get('LOCAL_RANK', -1)), overwrite_output_dir=True, logging_strategy='steps', logging_first_step=True, logging_dir="./logs", log_on_each_node=False, per_device_train_batch_size=batch_size_node, warmup_ratio=0.03, save_steps=save_step, save_total_limit=5, gradient_accumulation_steps=gradient_accumulation, max_steps=max_step, disable_tqdm=False, data_seed=42 ) trainer = Trainer( model=model, args=training_args, train_dataset=new_new_dataset, eval_dataset=None, tokenizer=tokenizer, # Data collator will default to DataCollatorWithPadding, so we change it. 
data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False), #compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None, #preprocess_logits_for_metrics=preprocess_logits_for_metrics #if training_args.do_eval and not is_torch_tpu_available() #else None, ) trainer.train(resume_from_checkpoint=True) ### Expected behavior Using the training code above: my dataset ./gpt_data_v1 has 1000 files, each about 120 MB. The start command is: python -m torch.distributed.launch --nproc_per_node=8 my_train.py Here is the result: ![image](https://user-images.githubusercontent.com/15223544/223025177-6f2ec55f-5b2f-42bf-b6cc-dd5f8a8232fe.png) Here is the memory usage monitored over 12 hours: ![image](https://user-images.githubusercontent.com/15223544/223027076-14e32e8b-9608-4282-9a80-f15d0277026d.png) Every dataloader worker allocates over 24 GB of CPU memory. According to the 12-hour memory monitor, small amounts of memory are sometimes released, but total memory usage keeps increasing. I don't think datasets streaming mode should use this much memory, so there may be a memory leak somewhere.
03-06-2023 05:22:53
03-06-2023 05:22:53
cc @lhoestq <|||||>Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ? FYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls). Moreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well. Therefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode. Feel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it.<|||||>> Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ? > > FYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls). Moreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well. > > Therefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode. > > Feel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it. it is roughly about between 500000 steps - 650000 steps <|||||>> Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ? > > FYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls). Moreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well. > > Therefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode. > > Feel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it. i try use memory_profiler to profiling memory ,but profiling memory can only report the master thread memory. do you know which tool can report the dataloader worker thread memory?<|||||>I haven't tried memory_profiler with multiprocessing, but you can already try iterating on the DataLoader without multiprocessing and check if you observe a memory leak.<|||||>> I haven't tried memory_profiler with multiprocessing, but you can already try iterating on the DataLoader without multiprocessing and check if you observe a memory leak. set dataloader_num=0 ,no memory leak<|||||>This could be an issue with the torch `DataLoader` then, or python multiprocessing. On the `datasets` side this is the whole code that `yield` example if `num_worker > 0`: https://github.com/huggingface/datasets/blob/c5ca1d86949ec3a5fdaec03b80500fb822bcfab4/src/datasets/iterable_dataset.py#L843 which is almost identical to the code without multiprocessing: https://github.com/huggingface/datasets/blob/c5ca1d86949ec3a5fdaec03b80500fb822bcfab4/src/datasets/iterable_dataset.py#L937-L945 Could you check on another environment that you also observe the memory leak ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
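As a hedged suggestion for the worker-memory question above, one option is to poll RSS for the training process and its child processes (the DataLoader workers) with `psutil`; the helper below is illustrative and not part of `datasets` or `transformers`.

```python
import os
import psutil


def report_memory():
    main = psutil.Process(os.getpid())
    print(f"main pid={main.pid}: {main.memory_info().rss / 2**20:.0f} MiB")
    for child in main.children(recursive=True):  # DataLoader workers appear as child processes
        print(f"worker pid={child.pid}: {child.memory_info().rss / 2**20:.0f} MiB")
```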
transformers
21,961
closed
Support customized vocabulary for decoding (in model.generate)
### Feature request Use case: Given a small list of tokens that is a subset of the whole vocabulary of the tokenizers for T5. For example, ["put", "move", "pick", "up", "on", "in", "apple", "bag", ....] And when we decode by using `model.generate()`, we want the model only output sentences that consist of words in the above list (i.e., limited vocabulary for beam searching or sampling). Maybe it is already supported in some way? ### Motivation For some applications, we only want to decode sentences with a limited vocabulary instead of allowing open-ended generation. ### Your contribution I'm not sure what is the best way to add this feature, if it is easy to limit the vocab for generate functions, then I can help add this PR.
03-06-2023 01:56:33
03-06-2023 01:56:33
I have read this post: https://huggingface.co/blog/constrained-beam-search But it seems that such Constraints can only support constraints of ensuring some tokens are part of the sentences but cannot prevent other tokens to be selected during decoding. <|||||>Found this post to use `bad_word_list` as the whole vocab - customized vocab as the input: https://stackoverflow.com/questions/63920887/whitelist-tokens-for-text-generation-xlnet-gpt-2-in-huggingface-transformers Will have a try but sounds like a bit awkward to use. <|||||>cc @gante <|||||>Hey @yuchenlin 👋 My first approach would be to use `bad_word_list`, passing to it all but the tokens you want to use. It's a no-code approach, but perhaps not the most efficient computationally. Alternatively, you can write your own processor class that sets to `-inf` the logits of all but the tokens you want to consider. To do it, you would have to: 1. Write your own class that implements the logic. You can see plenty of examples in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py) 2. Use your class at generation time, e.g. ```py tokens_to_keep = tokenizer(xxx) # xxx = list with your valid words my_processor = MyLogitsProcessorClass(tokens_to_keep=tokens_to_keep) model.generate(inputs, ..., logits_processor=LogitsProcessorList([my_processor])) ``` I hope this short guide helps 🤗 <|||||>Hi @gante , Thanks a lot! Yeah I have tried with the `bad_wordLlist` (see example below) and I found that the generated outputs are much worse than before although they are indeed constrained to the given vocabulary. I was using beam search and I'm not sure if it is because that the vocab is so small that the normalization or other process becomes unstable. I will try the logit processor idea as well. Thank you! :D ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") whitelist = ["move", "it", "pick", "up", "focus", "on"] whitelist_ids = [tokenizer.encode(word)[0] for word in whitelist] bad_words_ids=[[id] for id in range(tokenizer.vocab_size) if id not in whitelist_ids] encoder_input_str = "Explain this concept to me: machine learning" input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids outputs = model.generate( input_ids, num_beams=10, do_sample=False, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True, bad_words_ids = bad_words_ids, ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```<|||||>@yuchenlin haha yes, the quality of the output will likely decrease significantly, that is to be expected! Instead of whitelisting words, consider the "soft-whitelisting" alternative: increase the odds of picking a token from the whitelist. You can easily implement this by changing the repetition penalty logits processor to boost the odds of certain tokens :)<|||||>Thanks a lot for the advice! I currently used a simpler method --- adding some random tokens (say 30% of the whole vocab) to the whitelist and it seems to help. Will also try your idea soon! Thanks again! :D <|||||>Just in case you are interested in more diversity of these constraints, I wrote a whole package and paper about this idea: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
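A hedged sketch of the custom logits-processor idea discussed in the thread above; the class name and constructor are illustrative, not an existing `transformers` API.

```python
# Keep only a whitelist of token ids during generation by pushing every other logit to -inf.
import torch
from transformers import LogitsProcessor, LogitsProcessorList


class WhitelistLogitsProcessor(LogitsProcessor):
    def __init__(self, tokens_to_keep):
        self.tokens_to_keep = torch.tensor(sorted(set(tokens_to_keep)), dtype=torch.long)

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.tokens_to_keep] = 0.0
        return scores + mask


# usage sketch:
# model.generate(input_ids, logits_processor=LogitsProcessorList([WhitelistLogitsProcessor(whitelist_ids)]))
```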
transformers
21,960
closed
Add missing parameter definition in layoutlm config
Four parameters in `LayoutLM` config were missing definitions, Added their definition (copied from BertConfig). # What does this PR do? Fix docs, add parameter definition copying them from BertConfig ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-06-2023 00:42:28
03-06-2023 00:42:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @amyeroberts
transformers
21,959
closed
Fix MinNewTokensLengthLogitsProcessor when used with a list of eos tokens
# What does this PR do? MinNewTokensLengthLogitsProcessor is missing support for a list of eos token ids. This PR adds the missing support, in the same way it was added in MinLengthLogitsProcessor. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
03-05-2023 17:10:47
03-05-2023 17:10:47
_The documentation is not available anymore as the PR was closed or merged._
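A small sketch of the behaviour this PR enables (the vocab size and token ids below are arbitrary placeholders):

```python
# With a list of eos token ids, every listed id is suppressed until min_new_tokens are generated.
import torch
from transformers.generation import MinNewTokensLengthLogitsProcessor

processor = MinNewTokensLengthLogitsProcessor(prompt_length_to_skip=4, min_new_tokens=3, eos_token_id=[2, 7])
input_ids = torch.tensor([[0, 1, 2, 3, 4]])  # 4 prompt tokens plus 1 newly generated token
scores = torch.zeros(1, 10)
out = processor(input_ids, scores)
print(out[0, 2].item(), out[0, 7].item())  # both eos logits are -inf while under min_new_tokens
```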
transformers
21,958
closed
Cannot get the model weight of T5 INT8 model with Transformers 4.26.1
### System Info - `transformers` version: 4.26.1 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.0+cpu - Tensorflow version (GPU?): not installed (No) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <No> ### Who can help? @ArthurZucker @younesbelkada @sgu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The code is shown as follows: ```python import torch import transformers from datasets import load_dataset model_name = 't5-small' model_fp32 = transformers.AutoModelForSeq2SeqLM.from_pretrained( model_name, ) model_int8 = torch.ao.quantization.quantize_dynamic( model_fp32, {torch.nn.Linear}, dtype=torch.qint8) def get_example_inputs(model_name, dataset_name='sst2'): tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) dataset = load_dataset(dataset_name, split='validation') text = dataset[0]['text'] if dataset_name=='lambada' else dataset[0]['sentence'] input = tokenizer(text, padding='max_length', max_length=195, return_tensors='pt') example_inputs = input['input_ids'][0].to('cpu').unsqueeze(0) return example_inputs example_inputs = get_example_inputs(model_name, dataset_name='lambada') output = model_int8.generate(example_inputs) print(output) ``` The error message is shown as follows, [Issue Report.txt](https://github.com/huggingface/transformers/files/10905558/Issue.Report.txt) ### Expected behavior This example's expected behavior is quantizing the FP32 T5 model into INT8 format using INT8 model to generate the output. This code could make a success with the previous version Transformers 4.26.0. After the recent updates, This code cannot run normally anymore. We found the error is result from the fix of another issue: https://github.com/huggingface/transformers/issues/20287. That is probably because that you have used weight from "self.wo". But that weight becomes a function in the INT8 module. It should be used by "self.wo.weight()". Please reconsider the previous fix for that issue to make it compatible.
03-05-2023 08:14:46
03-05-2023 08:14:46
Cool that this seems to have been fixed for you! Could you tell us what the solution was? (for futur reference and other users who might stumble upon the smae problem)<|||||>@ArthurZucker Hi, Arthur, we still did not fix this issue. Could you please have a check for this issue? We just modify the previous description to make the problem more clear for you to solve. <|||||>Hey! Is there a reason why you are not using `load_in_8bit = True`? If you install the `bits_and_bytes` library, getting the 8bit quantized version of the model is as easy as the following: ```python import transformers from datasets import load_dataset model_name = 't5-small' model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name,load_in_8bit = True, device_map="auto") ```<|||||>Hello @XuhuiRen , Your script worked fine on the `main` branch of `transformers` ```python import torch import transformers from datasets import load_dataset model_name = 't5-small' model_fp32 = transformers.AutoModelForSeq2SeqLM.from_pretrained( model_name, ) model_int8 = torch.ao.quantization.quantize_dynamic( model_fp32, {torch.nn.Linear}, dtype=torch.qint8 ) output = model_int8.generate(torch.LongTensor([[0, 1, 2, 3]])) print(output) ``` Can you try it with: ``` pip install git+https://github.com/huggingface/transformers.git ``` For more context, the PR: https://github.com/huggingface/transformers/pull/21843 solved your issue<|||||>Hi, @ArthurZucker and @younesbelkada really thanks for your reply. Your solution is works for my issue.
transformers
21,957
closed
Update expected values in `XLMProphetNetModelIntegrationTest`
# What does this PR do? After #21870, we also need to update some expected values in `XLMProphetNetModelIntegrationTest` (as has been done for `ProphetNetModelIntegrationTest` in that PR)
03-05-2023 05:23:43
03-05-2023 05:23:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,956
closed
[Generate] Fix gradient_checkpointing and use_cache bug for BLOOM
Fixes #21737 for Bloom. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? cc @younesbelkada @gante
03-05-2023 02:41:51
03-05-2023 02:41:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,955
closed
LLaMA Implementation
# What does this PR do? Implementation of LLaMA models (https://arxiv.org/abs/2302.13971). Model weights can be requested [here](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform). Weight conversion script is included. Weights conversion can be run via: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights \ --model_size 7B \ --output_dir /output/path ``` Models can then be loaded via: ```python tokenizer = transformers.LLaMATokenizer.from_pretrained("/output/path/tokenizer/") model = transformers.LLaMAForCausalLM.from_pretrained("/output/path/llama-7b/") ``` Example: ```bash batch = tokenizer( "The primary use of LLaMA is research on large language models, including", return_tensors="pt", add_special_tokens=False ) batch = {k: v.cuda() for k, v in batch.items()} generated = model.generate(batch["input_ids"], max_length=100) print(tokenizer.decode(generated[0])) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/21796 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-05-2023 00:05:39
03-05-2023 00:05:39
does this work with int8?<|||||>> does this work with int8? No idea! I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models.<|||||>nice work! thanks for the upload and I hope it gets pulled<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>It looks like the tests which are currently failing are unrelated to the LLaMA code, so this should be good to review/use. If folks can try it out (particularly with the larger, sharded models) and see if there are any issues, that will be helpful!<|||||>> It looks like the tests which are currently failing are unrelated to the LLaMA code, so this should be good to review/use. > > If folks can try it out (particularly with the larger, sharded models) and see if there are any issues, that will be helpful! At lest the convert script seems to work fine. I was able to convert 7B to 30B. I do not have enough ram to convert 65B.<|||||>Great work. thanks for putting this together<|||||>After replacing transformers from Kobold with this PR I am able to load the shards as expected. Just I cant generate anything because Kobold still needs some changes. ![image](https://user-images.githubusercontent.com/7269941/222938895-2e8b9d71-6a88-417d-b7ed-14d8216d2ef4.png) <|||||>> > does this work with int8? > > No idea! I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models. Int8 seems not working but float16 is fine, in my hasty put-together test at https://github.com/zsc/llama_infer . Please throw a comment in case you find something!<|||||>@zphang I'm not able to get something like `tokenizer = AutoTokenizer.from_pretrained("/data/llama/hf/7b/tokenizer/")` to work. Is this intentional or just leaving AutoTokenizer for future work?<|||||>> @zphang I'm not able to get something like `tokenizer = AutoTokenizer.from_pretrained("/data/llama/hf/7b/tokenizer/")` to work. Is this intentional or just leaving AutoTokenizer for future work? What issue are you having / what is the error?<|||||>I have tested the code and these are my findings: 1. The conversion script works. 2. Loading the model works. 3. Loading the tokenizer with `transformers.LLaMATokenizer.from_pretrained` works. 4. Loading the tokenizer with `AutoTokenizer.from_pretrained` does not work and generates this error: ``` OSError: /tmp/converted/tokenizer/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//tmp/converted/tokenizer//None' for available files. ``` 5. The generated text seems to be incoherent. If I try these default values for the generation parameters: ``` model.generate(input_ids, eos_token_id=2, do_sample=True, temperature=1, top_p=1, typical_p=1, repetition_penalty=1, top_k=50, min_length=0, no_repeat_ngram_size=0, num_beams=1, penalty_alpha=0, length_penalty=1, early_stopping=False, max_new_tokens=200).cuda() ``` with this prompt: ``` Common sense questions and answers Question: What color is the sky? Factual answer: ``` I get ``` Common sense questions and answers Question: What color is the sky? 
Factual answer: Tags: python, django, django-models Question: Using Django with multiple databases I am attempting to use django with multiple databases, and I have the following code: \begin{code} DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:', }, 'db_one': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'db_one', }, 'db_two': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'db_two', }, } ``` It seems to me that prompts are being completely ignored. 6. Loading in 8-bit mode with `load_in_8bit=True` works.<|||||>This is OK: `tokenizer = transformers.LLaMATokenizer.from_pretrained("/data/llama/hf/7b/tokenizer/")` If using `tokenizer = AutoTokenizer.from_pretrained("/data/llama/hf/7b/tokenizer/"` then it will complain no "config.json". ``` OSError: /data/llama/hf/7b/tokenizer/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/llama/hf/7b/tokenizer//None' for available files. ``` I then hacked by softlinking `/data/llama/hf/7b/tokenizer/special_tokens_map.json` to `/data/llama/hf/7b/tokenizer/config.json` and it works. So maybe just rename? Anyway, can now happily play with LLaMA in Hugging Face world and thanks for the great work!<|||||>Thanks for the comments. Looks like the saved tokenizer doesn't work for `AutoTokenizer` but works if you directly instantiate from `LLaMATokenizer`. Maybe one of the HF folks can chime in on the best way to address that. > The generated text seems to be incoherent. If I try these default values for the generation parameters: Can you check the input_ids you're using to generate? The tokenizer currently adds both BOS and EOS tokens by default, and an EOS might cause the model to ignore your prompt. Perhaps I can set EOS to not be added by default so it operates closer to expected behavior.<|||||>For this prompt: ``` 'Common sense questions and answers\n\nQuestion: What color is the sky?\nFactual answer:' ``` these are the input_ids: ``` tensor([[ 1, 13103, 4060, 5155, 322, 6089, 13, 13, 16492, 29901, 1724, 2927, 338, 278, 14744, 29973, 13, 29943, 19304, 1234, 29901, 2]], device='cuda:0') ``` I do not know how to interpret these numbers, but if there is an EOS token in that tensor and that token is causing the text generation to derail, changing that default would be valuable.<|||||>1 is BOS and 2 is EOS. Can you try without the last input id? I also added an example in my PR message.<|||||>I confirm that doing this ``` input_ids = input_ids[:, :-1] ``` to remove the last input id before calling `model.generate(...)` causes the text generation to become coherent: ``` Common sense questions and answers Question: What color is the sky? Factual answer: The sky is blue. The sky is blue, and it is a fact that it is blue. The sky is indisputably blue. ```<|||||>Added a commit that should fix the tokenizer issues, and not add BOS and EOS by default.<|||||>Awesome, I confirm that the text generation is coherent by default now. I still cannot load the tokenizer with `AutoTokenizer.from_pretrained`. The error has now changed to this: ``` File "/tmp/transformers/src/transformers/models/auto/tokenization_auto.py", line 694, in from_pretrained tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] File "/tmp/transformers/src/transformers/models/auto/auto_factory.py", line 610, in __getitem__ raise KeyError(key) KeyError: <class 'transformers.models.llama.configuration_llama.LLaMAConfig'> ```<|||||>> > does this work with int8? > > No idea! 
I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models. After the fix with EOS, int8 (bitsandbytes) looks decent. Example in https://github.com/zsc/llama_infer/blob/main/README.md<|||||>After https://github.com/huggingface/transformers/pull/21955/commits/459e2ac9f551650ced58deb1c65f06c3d483d606, `AutoTokenizer.from_pretrained` now works as expected. <|||||>KoboldAI now works<|||||>I'd like to see a more memory-efficient conversion script, the current version loads everything into system memory which makes converting the 30B and 65B variants challenging on some systems<|||||>Yes, this is a quick and dirty version that loads everything into memory. One issue is that the way the weights are sharded (for tensor parallelism) is orthogonal to the way that HF shards the weights (by layer). So either we have to load everything in at once, or we have to load/write multiple times. The latter would be slower but useful for folks with less memory.<|||||>Has anyone tested loading 65B with `accelerate` to load on multiple GPUs?<|||||>I can't load the 7B model to cuda with one A4000 should I just change the gpu? <|||||>I'm observing some strange behavior with the tokenizer when encoding sequences beginning with a newline: ``` >>> t = AutoTokenizer.from_pretrained("llama_hf/tokenizer") >>> res = t.encode("\nYou:") >>> res [29871, 13, 3492, 29901] >>> t.decode(res) 'You:' ``` The newline seems to get lost somewhere along the way. EDIT: Looking into this, it seems it might be the expected behavior of `sentencepiece`.<|||||>> Has anyone tested loading 65B with `accelerate` to load on multiple GPUs? ||fp16|int8(bitsandbytes)| |--|--|--| |V100|OK, 5xV100|Bad results, short generated sequences| |A100|OK, 6xA100 when using "auto"|OK, 3xA100| Yes, I currently have a 65B fp16 model running on 6xV100 now (5X should be enough). My working code is at https://github.com/zsc/llama_infer/ . If there are CUDA OOM due to bad distribution of weights among cards, one thing worth trying is tweaking the device_map (`accelerate` seems to only counts weights when enforcing the memory cap in device_map, so there is an art for setting custom cap a little lower for every card, especially card 0). Strangely, int8 (LLM.int8 to be specific) for 65B model works like a charm on A100, but leads to bad results on V100 with abnormally short generated sequences.<|||||>> Strangely, int8 (LLM.int8 to be specific) for 65B model works like a charm on A100, but leads to bad results on V100 with abnormally short generated sequences. I will have a look at this later next week. The V100 takes a different code path than the A100 because the V100 does not support Int8 tensor cores. I think that is the issue here. We will soon publish FP4 inference which should be more universal and easier to use.<|||||>Jumping on @thomasw21 comment, we sadly cannot accept any code licensed GPLv3 as it would taint the whole Transformers library under that license. This means that the modeling code should be copied from GPT-NeoX whenever possible (with Copied from statements) since I believe that this model is very close to it and that you should be super familiar with it @zphang ;-) , and that no parts of the modeling code should be copy-pasted from the original Llama code. 
We also cannot attribute Copyright to Meta-AI / Meta in all those files, as attributing that copyright would admit the code in the PR is based on theirs and thus get us back to the license problem.<|||||>I tried quantizing LLaMa using [GPTQ](https://arxiv.org/abs/2210.17323). I've confirmed that 4 or 3 bit quantization works well, at least on LLaMa 7b. | Model([LLaMa-7B](https://arxiv.org/abs/2302.13971)) | Bits | group-size | Wikitext2 | PTB | C4 | | --------- | ---- | ---------- | --------- | --------- | ------- | | FP16 | 16 | - | 5.67 | 8.79 | 7.05 | | RTN | 4 | - | 6.28 | 9.68 | 7.70 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | 64 | **6.16** | **9.66** | **7.52** | | RTN | 3 | - | 25.66 | 61.25 | 28.19 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | 64 | **12.24** | **16.77** | **9.55** | I have released the [code](https://github.com/qwopqwop200/GPTQ-for-LLaMa) for this. <|||||>Running into an issue with the following code ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("models/facebook_LLaMA-7b") output = [[v] for k, v in tokenizer.get_vocab().items() if any(c in str(k) for c in "<>[]")] debug_output = tokenizer.batch_decode(output) print(debug_output) ``` This is supposed to only print tokens that include one of the <>[] characters, but I am getting a wide range back at the beginning, causing our code to affect way too many tokens. This includes tokens that are blatantly false, such as every single letter of the alphabet. Is this to be expected with the kind of tokenizer that is used, and do we have to find an alternative implementation, or is something going wrong here on a deeper level? If I try the same thing with a tokenizer from the usual models we run, such as the GPT-J tokenizer, this code runs properly.<|||||>> I tried quantizing LLaMa using [GPTQ](https://arxiv.org/abs/2210.17323). I've confirmed that 4 or 3 bit quantization works well, at least on LLaMa 7b. > > Model([LLaMa-7B](https://arxiv.org/abs/2302.13971)) Bits group-size Wikitext2 PTB C4 > FP16 16 - 5.67 8.79 7.05 > RTN 4 - 6.28 9.68 7.70 > [GPTQ](https://arxiv.org/abs/2210.17323) 4 64 **6.16** **9.66** **7.52** > RTN 3 - 25.66 61.25 28.19 > [GPTQ](https://arxiv.org/abs/2210.17323) 3 64 **12.24** **16.77** **9.55** > I have released the [code](https://github.com/qwopqwop200/GPTQ-for-LLaMa) for this. Do you plan to publish 4-bit quantized models on the hub? If not, I'd be more than happy to generate and push them up.<|||||>> > I tried quantizing LLaMa using [GPTQ](https://arxiv.org/abs/2210.17323). I've confirmed that 4 or 3 bit quantization works well, at least on LLaMa 7b. [...] > > Do you plan to publish 4-bit quantized models on the hub? If not, I'd be more than happy to generate and push them up. There are no plans to publish a 4-bit quantized model yet. And to get advantages in speed and memory, you need to change the code to use dedicated 3-bit CUDA kernels. I haven't even tested the 3-bit CUDA kernels yet.<|||||>Maybe this issue needs to be fixed? The [llama-int8](https://github.com/tloen/llama-int8) fork works,
but this ``` from transformers import LLaMaForCausalLM,LLaMaPreTrainedModel model = LLaMaForCausalLM.from_pretrained("Z:/LLaMA/llama-7b-hf") ``` errors: File [G:\pythons\huggingface\transformers\src\transformers\modeling_utils.py:2632](file:///G:/pythons/huggingface/transformers/src/transformers/modeling_utils.py:2632), in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2622 if dtype_orig is not None: 2623 torch.set_default_dtype(dtype_orig) 2625 ( 2626 model, 2627 missing_keys, 2628 unexpected_keys, 2629 mismatched_keys, 2630 offload_index, 2631 error_msgs, -> 2632 ) = cls._load_pretrained_model( 2633 model, 2634 state_dict, 2635 loaded_state_dict_keys, # XXX: rename? 2636 resolved_archive_file, 2637 pretrained_model_name_or_path, 2638 ignore_mismatched_sizes=ignore_mismatched_sizes, 2639 sharded_metadata=sharded_metadata, ... 1268 return modules[name] -> 1269 raise AttributeError("'{}' object has no attribute '{}'".format( 1270 type(self).__name__, name)) AttributeError: 'LLaMaLayerNorm' object has no attribute 'bias' <|||||># > Maybe this issure need to be fixed? > > the [llama-int8](https://github.com/tloen/llama-int8) can works. > > but this > > ``` > from transformers import LLaMaForCausalLM,LLaMaPreTrainedModel > model = LLaMaForCausalLM.from_pretrained("Z:/LLaMA/llama-7b-hf") > ``` > > errors: > > File G:\pythons\huggingface\transformers\src\transformers\modeling_utils.py:2632, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2622 if dtype_orig is not None: 2623 torch.set_default_dtype(dtype_orig) 2625 ( 2626 model, 2627 missing_keys, 2628 unexpected_keys, 2629 mismatched_keys, 2630 offload_index, 2631 error_msgs, -> 2632 ) = cls._load_pretrained_model( 2633 model, 2634 state_dict, 2635 loaded_state_dict_keys, # XXX: rename? 2636 resolved_archive_file, 2637 pretrained_model_name_or_path, 2638 ignore_mismatched_sizes=ignore_mismatched_sizes, 2639 sharded_metadata=sharded_metadata, ... 1268 return modules[name] -> 1269 raise AttributeError("'{}' object has no attribute '{}'".format( 1270 type(self).**name**, name)) > > AttributeError: 'LLaMaLayerNorm' object has no attribute 'bias' I'm not sure how that code even runs, the case doesn't match. You're importing `LLaMaForCausalLM` and the code provides `LLaMAForCausalLM` Also, where did you pull the model from? Is that the copy from decapoda-research on the hub?<|||||>> Jumping on @thomasw21 comment, we sadly cannot accept any code licensed GPLv3 as it would taint the whole Transformers library under that license. This means that the modeling code should be copied from GPT-NeoX whenever possible (with Copied from statements) since I believe that this model is very close to it and that you should be super familiar with it @zphang ;-) , and that no parts of the modeling code should be copy-pasted from the original Llama code. > > We also cannot attribute Copyright to Meta-AI /Meta in all those files, as attributing that copyright would admit the code in the PR is based on theirs and thus get us back to the license problem. Note that under Apache 2.0 the correct thing to do is credit _EleutherAI_ with the copyright. I have gone ahead and added the proper copyright headers, including adding a notice informing the reading that the code has been modified from it's original version and where it was adapted from. 
**Note that I have not changed the code itself in any way.** If it is currently the case that some of the code is based on the FairSeq implementation that still needs to be removed and rewritten using GPT-NeoX (whether the implementation in this library or the GPT-NeoX library itself) as a reference. However once all GPLv3 licensed code has been removed, the headers I added should be correct. cc: @zphang @sgugger @thomasw21 for visibility. It might be worth mentioning this in the documentation of the model, as I anticipate this will cause some people confusion. > Also, where did you pull the model from? Is that the copy from decapoda-research on the hub? @zoidbb this PR includes a script for converting the models from their original format to a HF-compliant one.<|||||>@henk717 your code doesn't work for me with any model. I consistently error on the line `debug_output = tokenizer.batch_decode(output)`<|||||>Thanks for your contribution! However, I find that the model usually ends up repeating the same sentence during inference. For example, by running the code below: ``` import transformers import os os.environ['CUDA_VISIBLE_DEVICES']='3' config = transformers.AutoConfig.from_pretrained("./llama-7b-hf/llama-7b") tokenizer = transformers.AutoTokenizer.from_pretrained("./llama-7b-hf/tokenizer") model = transformers.AutoModelForCausalLM.from_pretrained("./llama-7b-hf/llama-7b", config=config).cuda() batch = tokenizer( ["The primary use of LLaMA is research on large language models, including "], return_tensors="pt" ) batch = {k: v.cuda() for k, v in batch.items()} input_ids = batch["input_ids"] # tensor([[1,...]]) generated = model.generate(input_ids, temperature=0.8, top_p=0.95, max_length=256).cuda() print(tokenizer.decode(generated[0]) ``` **the LLaMA-7b-hf output is:** ![1](https://user-images.githubusercontent.com/45878717/223327316-99672393-3d31-4531-bd3c-bc218f1dddd6.png) **while the original LLaMA(7b) output is (temperature=0.8, top_p=0.95):** ![2](https://user-images.githubusercontent.com/45878717/223327389-5c635bd2-121a-4990-9998-3fbc6f448bd2.png) Now it seems that the two models are quite different with each other. I wonder if they can output the same sentence, given the same input and parameters?<|||||>after install your branch of llama, i can't import LLaMaTokenizer as below ImportError: cannot import name 'LLaMaTokenizer' from 'transformers' but import LLaMaForCausalLM is successful... how could this happen?<|||||>Hey all! Great to see that the community is so fired up for this PR 🤗 🔥 ! @zphang thanks a lot for this contribution, don't hesitate to ping me once this is ready for review,. In the mean time let's all give him some time, as it was never stated that this should already work or compile! Let's wait until then to report potential issues ! 😉 <|||||>@StellaAthena Fully in line with your change of copyright and also to explicitly mention in the doc page of the model that the code is based on GPT-NeoX and not the original Llama code. This can be put in the usual place we say: "This model was contributed by xxx. The original code can be found [here](yyy)."<|||||>> > Has anyone tested loading 65B with `accelerate` to load on multiple GPUs? > > fp16 int8(bitsandbytes) > V100 OK, 5xV100 Bad results, short generated sequences > A100 OK, 6xA100 when using "auto" OK, 3xA100 > Yes, I currently have a 65B fp16 model running on 6xV100 now (5X should be enough). My working code is at https://github.com/zsc/llama_infer/ . 
If there are CUDA OOM due to bad distribution of weights among cards, one thing worth trying is tweaking the device_map (`accelerate` seems to only counts weights when enforcing the memory cap in device_map, so there is an art for setting custom cap a little lower for every card, especially card 0). > > Strangely, int8 (LLM.int8 to be specific) for 65B model works like a charm on A100, but leads to bad results on V100 with abnormally short generated sequences. CC: @zsc Just a heads up on the issue with V100 and bitsandbytes, I've encountered a bit weird and may be relevant behavior with long inputs, but relaxing the quantization threshold by `BitsAndBytesConfig(llm_int8_threshold=5.0)` (which I believe is set to 6 by default according to the [doc](https://huggingface.co/docs/transformers/main/main_classes/quantization)) resolved the issue. Ref: https://github.com/huggingface/transformers/issues/21987 <|||||>@zoidbb Thank you!!! I'm sorry, the error is the branch of "thomas/llama", not this!<|||||>Heads up to folks already using this PR: I will be pushing some changes today to address the maintainers' feedback that will break compatibility. In particular, I will be making a major change to how the RoPE encodings are computed. Please pull and re-convert the weights when the branch is updated. Sorry for the inconvenience.<|||||>> Thanks for the heads up <|||||>As mentioned above, I've pushed an update (previous git hash was: 6a17e7f) that introduces some breaking changes to incorporate the Transformers maintainers' feedback. (There are still some tweaks to go, but nothing that should be breaking from here.) **Please rerun the weight conversion scripts.** Major changes: - Removal of Decoder class - `add_bos` on by default. `add_eos` is still off. - Reimplementation of RoPE based on NeoX version. This required a weight permutation (NeoX's RoPE slices the head_dims in half, the Meta implementation interleaves), which is performed in the conversion script. **Note**: the weights are still the same shape, so if you don't rerun the conversion script and only rename the keys, the weights will load but be completely incorrect. As before, please let me know if you run into any issues, it is great to have this many people trying and testing the implementation.<|||||>Running into issues with the model conversion on the latest update, loading the model fails (On the latest version of the pull request git) and pytorch_model-00033-of-00033 is missing. Using the custom KoboldAI loader it reports a key error, and using the official HF loader it mentions not all weights could be initialized correctly but does finish the loading process.<|||||>> Running into issues with the model conversion on the latest update, loading the model fails (On the latest version of the pull request git) and pytorch_model-00033-of-00033 is missing. > > Using the custom KoboldAI loader it reports a key error, and using the official HF loader it mentions not all weights could be initialized correctly but does finish the loading process. Confirmed this is an issue on 7B and 13B both, the last file is missing both from disk and from the index file.<|||||>@zphang you have a typo in the line defining the filename for the last .bin file, you use n_layers instead of n_layers+1. this leads to part 32 being overwritten, instead of part 33 being written (in the case of 7B).<|||||>Hmm now I am a little confused. There is an off-by-one error, but that's it going from 0 to n_layers instead of 1 to n_layers+1. 
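In other words, the shard names come out like this (a simplified sketch of the naming scheme, not the exact conversion script):

```python
# Simplified sketch of the shard naming; the real conversion script differs in details.
n_layers = 32                # e.g. the 7B config
num_shards = n_layers + 1    # +1 shard for the embeddings and final norm
for i in range(num_shards):  # i runs 0 .. n_layers
    # 1-indexed filenames: pytorch_model-00001-of-00033.bin .. pytorch_model-00033-of-00033.bin
    print(f"pytorch_model-{i + 1:05d}-of-{num_shards:05d}.bin")
```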
It ought to load just fine though, since it's consistent with the index JSON. (The +1 is for the embedding layers and final norm)<|||||>Pushed fix for off-by-one naming.<|||||>@zphang now im getting this error: ``` RuntimeError: Failed to import transformers.models.llama.tokenization_t5 because of the following error (look up to see its traceback): No module named 'transformers.models.llama.tokenization_t5' ```<|||||>Try again now!<|||||>Seems to load, running the new model conversion through its paces. Thanks for your work.<|||||>I am not having any luck yet, my errors remain identical. For context I use AutoModelForCausalLM in the code.<|||||>> I am not having any luck yet, my errors remain identical. For context I use AutoModelForCausalLM in the code. I'm also using that, and it's working fine. Did you make sure to grab the latest commits and install them? Also you have run the conversion again.<|||||>Late night coding moment, my update script tricked me into thinking it was applying the update but it was only cloning the git instead of applying it (Normally I run a different one that does more that does do that) so my last few attempts used the same code. The update fixed it, our software also has some regex's to clean up output that were not being applied correctly previously because of an issue I did not manage to track down. On this new iteration of the fork that bug got fixed, so while I do not know what the cause behind that was on the old version I am happy to report the latest version is an improvement for us in functionality.<|||||>The model behaves much better with these latest changes, I was always confused why I either got long elaborate nonsense or really short answers when using it as a chatbot :P Thanks for your work @zphang this is epic.<|||||>Hi thanks for your great work. Can we fine-tune it for now?<|||||>@zphang there seems to be an issue when using generate with `num_return_sequences`. Running this code: ```python import torch, os from transformers import AutoTokenizer, AutoModelForCausalLM os.environ['CUDA_LAUNCH_BLOCKING'] = "1" modeldir = "./llama-7b" tokenizer = AutoTokenizer.from_pretrained(modeldir) model = AutoModelForCausalLM.from_pretrained(modeldir, torch_dtype=torch.float16).to("cuda") input_ids = tokenizer('aaaaa', return_tensors="pt").input_ids.to("cuda") for i in range(100): print(i) model.generate(input_ids, max_new_tokens=20, do_sample=True, num_return_sequences=4) ``` I get this output: ``` 0 1 2 3 4 5 6 7 8 /opt/conda/conda-bld/pytorch_1670525541990/work/aten/src/ATen/native/cuda/Indexing.cu:1088: indexSelectSmallIndex: block: [18,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1670525541990/work/aten/src/ATen/native/cuda/Indexing.cu:1088: indexSelectSmallIndex: block: [18,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1670525541990/work/aten/src/ATen/native/cuda/Indexing.cu:1088: indexSelectSmallIndex: block: [18,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ... a bunch more of those ... /opt/conda/conda-bld/pytorch_1670525541990/work/aten/src/ATen/native/cuda/Indexing.cu:1088: indexSelectSmallIndex: block: [18,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1670525541990/work/aten/src/ATen/native/cuda/Indexing.cu:1088: indexSelectSmallIndex: block: [18,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[1], line 12 10 for i in range(100): 11 print(i) ---> 12 model.generate(input_ids, 13 max_new_tokens=20, 14 do_sample=True, 15 num_return_sequences=4) File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27), in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/generation/utils.py:1452](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/generation/utils.py:1452), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs) 1444 input_ids, model_kwargs = self._expand_inputs_for_generation( 1445 input_ids=input_ids, 1446 expand_size=generation_config.num_return_sequences, 1447 is_encoder_decoder=self.config.is_encoder_decoder, 1448 **model_kwargs, 1449 ) 1451 # 13. run sample -> 1452 return self.sample( 1453 input_ids, 1454 logits_processor=logits_processor, 1455 logits_warper=logits_warper, 1456 stopping_criteria=stopping_criteria, 1457 pad_token_id=generation_config.pad_token_id, 1458 eos_token_id=generation_config.eos_token_id, 1459 output_scores=generation_config.output_scores, 1460 return_dict_in_generate=generation_config.return_dict_in_generate, 1461 synced_gpus=synced_gpus, 1462 **model_kwargs, 1463 ) 1465 elif is_beam_gen_mode: 1466 if generation_config.num_return_sequences > generation_config.num_beams: File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/generation/utils.py:2468](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/generation/utils.py:2468), in GenerationMixin.sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs) 2465 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2467 # forward pass to get next token -> 2468 outputs = self( 2469 **model_inputs, 2470 return_dict=True, 2471 output_attentions=output_attentions, 2472 output_hidden_states=output_hidden_states, 2473 ) 2475 if synced_gpus and this_peer_finished: 2476 continue # don't waste resources running the code we don't need File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:772](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:772), in LLaMAForCausalLM.forward(self, input_ids, attention_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 769 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 771 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) --> 772 outputs = self.model( 773 input_ids=input_ids, 774 attention_mask=attention_mask, 775 past_key_values=past_key_values, 776 inputs_embeds=inputs_embeds, 777 use_cache=use_cache, 778 output_attentions=output_attentions, 779 output_hidden_states=output_hidden_states, 780 return_dict=return_dict, 781 ) 783 hidden_states = outputs[0] 784 logits = self.lm_head(hidden_states) File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:581](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:581), in LLaMAModel.forward(self, input_ids, attention_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 578 if attention_mask is None: 579 attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.bool, device=inputs_embeds.device) --> 581 attention_mask = self._prepare_decoder_attention_mask( 582 attention_mask, input_shape, inputs_embeds, past_key_values_length 583 ) 585 hidden_states = inputs_embeds 587 if self.gradient_checkpointing and self.training: File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:490](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:490), in LLaMAModel._prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) 484 combined_attention_mask = _make_causal_mask( 485 input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length 486 ).to(inputs_embeds.device) 488 if attention_mask is not None: 489 # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] --> 490 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to( 491 inputs_embeds.device 492 ) 493 combined_attention_mask = ( 494 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask 495 ) 497 return combined_attention_mask File [~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:77](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/abstract/projs/pytorch_tests/~/mambaforge/envs/reproenv/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:77), in _expand_mask(mask, dtype, tgt_len) 73 expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) 75 inverted_mask = 1.0 - expanded_mask ---> 77 return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) RuntimeError: CUDA error: device-side assert triggered ``` My env is ``` mamba create -n reproenv python=3.10 jupyter pip pytorch pytorch-cuda=11.7 -c pytorch -c nvidia conda activate reproenv pip install git+https://github.com/zphang/transformers@llama_push sentencepiece ```<|||||>When using the 30B variant I got this error (do_sample=true): ``` File "/secondary/thies/llama/app.py", line 184, in completions ids_outputs = model.generate(tok_inputs, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k, temperature=temperature, do_sample=True) File "/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File 
"/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/transformers/generation/utils.py", line 1452, in generate return self.sample( File "/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/transformers/generation/utils.py", line 2504, in sample next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` 7B variant works fine.<|||||>> When using the 30B variant I got this error (do_sample=true): > > ``` > File "/secondary/thies/llama/app.py", line 184, in completions > ids_outputs = model.generate(tok_inputs, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k, temperature=temperature, do_sample=True) > File "/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context > return func(*args, **kwargs) > File "/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/transformers/generation/utils.py", line 1452, in generate > return self.sample( > File "/secondary/thies/.virtualenvs/llama/lib/python3.10/site-packages/transformers/generation/utils.py", line 2504, in sample > next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) > RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 > ``` > > 7B variant works fine. I can also reproduce this when using int8 on my 6900xt<|||||>Also getting ``` RuntimeError: probability tensor contains either `inf`, `nan` or element < 0` ``` <|||||>I noticed that 6a17e7f3b45454429f2b0405e673dc4aaa695546 version seems to work and fixes the `RuntimeError: probability tensor contains either `inf`, `nan` or element < 0` error.<|||||>I will try to follow up on some of the other reported issues today/tomorrow. I did indeed see an issue with the combined/expanded masks as @BlackSamorez mentioned, although I will need to take a closer look at the fix since there are so many entry points for HF code. I also pushed a fix for what I think was causing some of the above issues. Part of it affects the config file, so you should go into the model config.json and set `pad_token_id=0`. With that first, the example code given by @AbstractQbit should run fine now. I also wrote some very preliminary fine-tuning code https://github.com/zphang/minimal-llama/ if folks are interested in testing it out. It also uses the code from this PR.<|||||>Loading model directly results in warning about not using some weights. Is this expected? Other models (e.g. Bloom) does not have this issue. ```from transformers.models.llama.modeling_llama import LLaMAModel LLaMAModel.from_pretrained('llama-7b') ``` throws this: ``` Some weights of the model checkpoint at llama-7b were not used when initializing LLaMAModel: ['lm_head.weight'] - This IS expected if you are initializing LLaMAModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LLaMAModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).``` <|||||>`LLaMAModel` excludes the LM Head. You want to use `LLaMAForCausalLM` instead.<|||||><img width="977" alt="image" src="https://user-images.githubusercontent.com/106687776/223898571-7866feb7-ba02-4641-be86-c424aef03883.png"> @zphang getting the following err. 
any clue?<|||||>> `LLaMAModel` excludes the LM Head. You want to use `LLaMAForCausalLM` instead. So the error is expected when calling this class? I see other models are silencing it with `_keys_to_ignore_on_load_unexpected = [r"decoder\.version", r"lm_head.weight"]` in XXXPreTrainedModel. I know why I'm calling this class directly, I was just uneasy that I got this error with LLaMA and not with other models (e.g. Bloom), but later I found this difference in `_keys_to_ignore_on_load_unexpected `<|||||>I don't think I see that in OPT or GPT-NeoX-20B.<|||||>Can the code be used for finetuning? I tried simply to finetune llama-33b using deepspeed on 8x40gA100, but it seems batch-size can only be 1, while in the same ds config and GPU environment, I can finetune opt-iml-33b with batch_size 8. I wonder if it is the code that can not be well used for finetuning using deepspeed?<|||||>> <img alt="image" width="977" src="https://user-images.githubusercontent.com/106687776/223898571-7866feb7-ba02-4641-be86-c424aef03883.png"> > > @zphang getting the following err. any clue? Try installing llama requirements packages (Before that, make sure you installed zphang's llama_push repo) `git clone https://github.com/facebookresearch/llama` `pip install -r requirements.txt`<|||||>> Can the code be used for finetuning? I tried simply to finetune llama-33b using deepspeed on 8x40gA100, but it seems batch-size can only be 1, while in the same ds config and GPU environment, I can finetune opt-iml-33b with batch_size 8. I wonder if it is the code that can not be well used for finetuning using deepspeed? I have the same problem, on 4*A100 40G, for BLOOMZ 7B can be 8 batch, llama 7B can only be 1batch.<|||||>I get different results in the original PR implementation (which had licensing issues) and the current, modified implementation. Using the prompt `This is a story about` and ``` output = model.generate(input_ids, do_sample=False) ``` I get these outputs: ## Original: > This is a story about how we got here, and the challenges ahead. The first time I saw a drone was in 2013 when my wife and I were visiting our daughter at college. We went to her dorm room one afternoon to drop off some things she needed for class that day, and as soon as we walked into the lobby, there it was: a quadcopter hovering just above us, its four propellers spinning lazily in the air. It wasn’t doing anything special—it was just flying around on its own, like an insect buzzing around your head. But it made me feel uneasy; I didn’t know what to make of this new technology. The next year, after I had moved back home from Washington, D.C., where I worked for a nonprofit, I started noticing them more often while walking through neighborhoods or hiking in the woods. Sometimes they would be parked ## Current: > This is a story about a man who was born in the 1920s and lived through the Great Depression, World War II, and the Korean War. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. 
He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He Maybe I am doing something wrong, so can someone double check this?<|||||>We are noticing that unlike other models this model / tokenizer isn't adding spaces at the beginning of the generation. Our interface is designed for continuous generation so if a user submits "Programmers are busy debugging the code" we would expect the generation " of the model generation." resulting in "Programmers are busy debugging the code of the model generation." Instead we are seeing "of the model generation." resulting in "Programmers are busy debugging the codeof the model generation". Is this to be expected on the latest version of the PR and conversion?<|||||>> We are noticing that unlike other models this model / tokenizer isn't adding spaces at the beginning of the generation. Our interface is designed for continuous generation so if a user submits "Programmers are busy debugging the code" we would expect the generation " of the model generation." resulting in "Programmers are busy debugging the code of the model generation." > > Instead we are seeing "of the model generation." resulting in "Programmers are busy debugging the codeof the model generation". > > Is this to be expected on the latest version of the PR and conversion? This is expected based on the tokenizer provided by Facebook. The eos and bos tokens are literally empty strings. Perhaps for HuggingFace purposes this should be... reconsidered?<|||||>> I get different results in the original PR implementation (which had licensing issues) and the current, modified implementation. > > Using the prompt `This is a story about` and > > ``` > output = model.generate(input_ids, do_sample=False) > ``` > > I get these outputs: > ## Original: > > > This is a story about how we got here, and the challenges ahead. > > The first time I saw a drone was in 2013 when my wife and I were visiting our daughter at college. We went to her dorm room one afternoon to drop off some things she needed for class that day, and as soon as we walked into the lobby, there it was: a quadcopter hovering just above us, its four propellers spinning lazily in the air. It wasn’t doing anything special—it was just flying around on its own, like an insect buzzing around your head. But it made me feel uneasy; I didn’t know what to make of this new technology. The next year, after I had moved back home from Washington, D.C., where I worked for a nonprofit, I started noticing them more often while walking through neighborhoods or hiking in the woods. Sometimes they would be parked > > ## Current: > > > This is a story about a man who was born in the 1920s and lived through the Great Depression, World War II, and the Korean War. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. 
He > > Maybe I am doing something wrong, so can someone double check this? @oobabooga What settings are you using to generate the text? I noticed I had to increase my repetition penalty, otherwise I'd see similar results as you. I'm currently getting pretty good results with these settings: ``` temp 0.8 rep_pen 1.18 top_k 67 top_p 0 do_sample false ```<|||||>Ok, I have looked more carefully and the biggest difference is that the original implementation didn't add an `bos_token` to the prompt and the current one does. Trying again but removing the `bos_token` from the current version I get these outputs for the prompt `My name is` ## Original > My name is Kyle and I am a 20 year old college student. I am a very outgoing person and I love to meet new people. I am very easy to get along with and I am very open minded. I am a very active person and I love to be outdoors. I love to go hiking, camping, and fishing. I am a very easy going person and I love to have fun. I am very respectful and I am very easy to ## Current > My name is Kyle and I am a 20 year old college student. I am a very outgoing person and I love to meet new people. I am very easy to get along with and I am very open minded. I am a very active person and I love to be outdoors. I love to go hiking, camping, and fishing. I am a very athletic person and I love to play sports. I am a very competitive person and I love to There is still a difference, but this time it's subtle. This is how I am testing the original implementation: ``` python import gc import importlib from pathlib import Path import torch import os import subprocess def install(name): subprocess.call(['pip', 'uninstall', '-y', 'transformers']) subprocess.call(['pip', 'install', name]) install('git+https://github.com/oobabooga/transformers@llama_push') import transformers model = transformers.AutoModelForCausalLM.from_pretrained(Path(f"models/llama-7b-original"), torch_dtype=torch.float16).cuda() tokenizer = transformers.AutoTokenizer.from_pretrained(Path(f"models/llama-7b-original")) input_ids = tokenizer.encode("My name is", return_tensors='pt').cuda() print(input_ids) output = model.generate(input_ids, do_sample=False, max_new_tokens=100)[0] print(tokenizer.decode(output, skip_special_tokens=True)) ``` And the current: ``` python import gc import importlib from pathlib import Path import torch import os import subprocess def install(name): subprocess.call(['pip', 'uninstall', '-y', 'transformers']) subprocess.call(['pip', 'install', name]) install('git+https://github.com/zphang/transformers@llama_push') import transformers model = transformers.AutoModelForCausalLM.from_pretrained(Path(f"models/llama-7b-current"), torch_dtype=torch.float16).cuda() tokenizer = transformers.AutoTokenizer.from_pretrained(Path(f"models/llama-7b-current")) input_ids = tokenizer.encode("My name is", return_tensors='pt').cuda() input_ids = input_ids[:,1:] print(input_ids) output = model.generate(input_ids, do_sample=False, max_new_tokens=100)[0] print(tokenizer.decode(output, skip_special_tokens=True)) ``` @rohvani the point is to use deterministic sampling to make sure that the two implementations yield the exact same results.<|||||>> Ok, I have looked more carefully and the biggest difference is that the original implementation didn't add an `bos_token` to the prompt and the current one does. > > Trying again but removing the `bos_token` from the current version I get these outputs for the prompt `My name is` > > ## Original > > My name is Kyle and I am a 20 year old college student. 
I am a very outgoing person and I love to meet new people. I am very easy to get along with and I am very open minded. I am a very active person and I love to be outdoors. I love to go hiking, camping, and fishing. I am a very easy going person and I love to have fun. I am very respectful and I am very easy to > > ## Current > > My name is Kyle and I am a 20 year old college student. I am a very outgoing person and I love to meet new people. I am very easy to get along with and I am very open minded. I am a very active person and I love to be outdoors. I love to go hiking, camping, and fishing. I am a very athletic person and I love to play sports. I am a very competitive person and I love to > > There is still a difference, but this time it's subtle. This is how I am testing the original implementation: > > ```python > import gc > import importlib > from pathlib import Path > import torch > import os > import subprocess > > def install(name): > subprocess.call(['pip', 'uninstall', '-y', 'transformers']) > subprocess.call(['pip', 'install', name]) > > install('git+https://github.com/oobabooga/transformers@llama_push') > import transformers > > model = transformers.AutoModelForCausalLM.from_pretrained(Path(f"models/llama-7b-original"), torch_dtype=torch.float16).cuda() > tokenizer = transformers.AutoTokenizer.from_pretrained(Path(f"models/llama-7b-original")) > > input_ids = tokenizer.encode("My name is", return_tensors='pt').cuda() > print(input_ids) > output = model.generate(input_ids, do_sample=False, max_new_tokens=100)[0] > print(tokenizer.decode(output, skip_special_tokens=True)) > ``` > > And the current: > > ```python > import gc > import importlib > from pathlib import Path > import torch > import os > import subprocess > > def install(name): > subprocess.call(['pip', 'uninstall', '-y', 'transformers']) > subprocess.call(['pip', 'install', name]) > > install('git+https://github.com/zphang/transformers@llama_push') > import transformers > > model = transformers.AutoModelForCausalLM.from_pretrained(Path(f"models/llama-7b-current"), torch_dtype=torch.float16).cuda() > tokenizer = transformers.AutoTokenizer.from_pretrained(Path(f"models/llama-7b-current")) > > input_ids = tokenizer.encode("My name is", return_tensors='pt').cuda() > input_ids = input_ids[:,1:] > print(input_ids) > output = model.generate(input_ids, do_sample=False, max_new_tokens=100)[0] > print(tokenizer.decode(output, skip_special_tokens=True)) > ``` > > @rohvani the point is to use deterministic sampling to make sure that the two implementations yield the exact same results. We dont want the same results, as the previous implementation was flawed.<|||||>LGTM then.<|||||>Regarding the above comments: yes, it is important to separate the tokenizer behavior and model behavior. On the tokenizer: I think you should not rely on the default appending/prepending of EOS/BOS, especially when testing across different models. Different model families have different special token behavior in different cases (e.g. I believe for most cases in generation, you do not want EOS appended, even though this is the default in some models). It would be better to provide add_eos_token / add_bos_token explicitly in your usage. On the model: Yes, I believe there may still be some subtle differences between the actual model implementation between the old and refactored version. 
Off the top of my head, the RoPE embeddings and RMSNorms have totally different implementations: they should be mathematically identical, but may be numerically different. I will continue to look into this, but providing more reproducible examples of differences would be extremely helpful. Thanks all for the feedback! It has been very helpful in helping me track down issues with my implementation.<|||||>To come back on the BOS token point, I actually am not certain that is the issue. In models that behave correctly it is the choice of the model to add the space. But it is not guaranteed behavior. For example on a correct model something like "I like to play foot" might generate "ball with my friends" resulting in "I like to play football with my friends" but these models still have the ability to detect when a space should be used. That means that on the existing models this behavior is not hardcoded. In the earlier mentioned examples the behavior persists on this LLaMA implementation when it does not make sense and there clearly should have been a space. It also means workarounds for the issue are unreliable because if you wish to continue a partial generation it would be undesirable for us to forcefully inject spaces while the AI may have been trying to use one of those combined words like football when it got cut off. <|||||>> We are noticing that unlike other models this model / tokenizer isn't adding spaces at the beginning of the generation. Our interface is designed for continuous generation so if a user submits "Programmers are busy debugging the code" we would expect the generation " of the model generation." resulting in "Programmers are busy debugging the code of the model generation." > > Instead we are seeing "of the model generation." resulting in "Programmers are busy debugging the codeof the model generation". > > Is this to be expected on the latest version of the PR and conversion? This is a SentencePiece behavior rather than something model specific - it always trims space/linefeed(s) at the beginning of output. Unless I am mistaken, SPM itself must be modified to fix this behavior.<|||||>@zphang very interested in this new feature, do you guys have any timeline of merging it into master?<|||||>> It also means workarounds for the issue are unreliable because if you wish to continue a partial generation it would be undesirable for us to forcefully inject spaces while the AI may have been trying to use one of those combined words like football when it got cut off. Hi, just tested this, and I understand what you mean. This is an issue with the tokenizer decoding rather than encoding. As @syskn mentioned, this is an issue with the sentencepiece, not the implementation (e.g. the T5 tokenizer, which also uses SPM, has the same issue). Given that this behavior also exists in T5, a widely used model, I don't think the HF folks are likely to want to have this depart from the expected behavior then. I think this may be something you need to address downstream on your end. 
One simple (if manual) fix may be the following: ```python vocab = tokenizer.convert_ids_to_tokens(range(tokenizer.vocab_size)) NO_SPACE_SET = { i for i, tok in enumerate(vocab) if not tok.startswith("▁") } def llama_decode(model_output): decoded = tokenizer.decode(model_output) if model_output and model_output[0] not in NO_SPACE_SET: decoded = " " + decoded return decoded # Prepends space print(f"'{llama_decode([7199, 1612, 825])}'") # doesn't prepend space print(f"'{llama_decode([53, 19, 1633])}'") ``` (Basically tokenizers are weird and difficult.)<|||||>@zpang that is unfortunate because that means that the way sentencepiece is handled isn't suitable for the real world use cases of models that complete text. I will then indeed have to do the always add a space workaround which can hinder the experience. T5 is an entirely different model not suitable for our use case. A model like LLaMA has a real world use case of completing text in a text editor in a seamless experience rather than an input and an output field. While this behavior isn't bad or even desirable in a seperate output field it can really hinder continuous generation if everything is unified in a single editor. For now ill resort to implementing a workaround like you suggested, but it is good for the wider AI community to knows this kind of behavior on a model intended to complete text hurts real world performance. It would be much better if the model were able to pick a space as a token rather than me having to pick between no spaces or always spaces.<|||||>>T5 is an entirely different model not suitable for our use case. A model like LLaMA has a real world use case of completing text in a text editor in a seamless experience rather than an input and an output field. I understand where you're coming from, but I'm just trying to provide the context of broader language models. Many language models (in the modern usage of the term) aren't centered on generation. This includes broad classes of models like BERT and T5 that aren't used in GPT-like ways. And on your second note: I'm not sure if this is a fundamental limitation of sentencepiece, or just the chosen implementation (on the SPM side) for decoding tokens. As described above, the information to decode correctly with the space is there, it's just that the decode method doesn't support it. Thinking about it more, I think we could add an option to the tokenizer implementation to enable this behavior -- this would not be that out of place given the existence of other previous hacks like "add prefix space". (Thinking about it even more, maybe this *should* be the default behavior.) Also, to answer more questions about merging, the HF folks have asked me to ping them when I think this is good to merge. I plan to spend some time looking through the issues raised above this weekend, and pinging them on Monday, and I think I can raise this as an issue as well.<|||||>@zphang I'm not sure what has changed in the your branch. 
But I'm getting this error ```ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.``` While trying this code ``` import gc import importlib from pathlib import Path import torch import os import subprocess def install(name): subprocess.call(['pip', 'uninstall', '-y', 'transformers']) subprocess.call(['pip', 'install', name]) install('git+https://github.com/zphang/transformers@llama_push') import transformers model = transformers.AutoModelForCausalLM.from_pretrained(Path(f"models/llama-7b-current"), torch_dtype=torch.float16).cuda() tokenizer = transformers.AutoTokenizer.from_pretrained(Path(f"models/llama-7b-current")) input_ids = tokenizer.encode("My name is", return_tensors='pt').cuda() input_ids = input_ids[:,1:] print(input_ids) output = model.generate(input_ids, do_sample=False, max_new_tokens=100)[0] print(tokenizer.decode(output, skip_special_tokens=True)) ``` <|||||>On the other hand, If I try with the original implementation (with which this PR was started), i'm getting completely gibberish, This is the output for the input prompt `My name is` ``` My name isanto Dresclockulaireɒ повеisterug Clarkjuierslexvir polcibleteiyenibtzewINF CommissionFEzat Metropolasi summ Package Stadiumiarogleorf EnciqueowieщенędzISO mand Fichieravantskystag _ILvueutilsверropSERTгро PitaudiatzATCH midortenscheid Tableisoreicheanguage riencloudflare!("meisterINTlauychcykhausenrameramePlotнат Kontrola LauritoundialذativvorñaccalachJB shootfaretr Epetuenig százonCreate chamber dob▲ascibilcinaglia ``` I was using this model hosted on HF "decapoda-research/llama-7b-hf"<|||||>@zphang Hi, I got some error with follow code: ```python import transformers import torch device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = transformers.LLaMATokenizer.from_pretrained("LLaMA-hf/llama-7b") model = transformers.LLaMAForCausalLM.from_pretrained("LLaMA-hf/llama-7b").to(device) batch = tokenizer( "The primary use of LLaMA is research on large language models, including", return_tensors="pt", add_special_tokens=False ) batch = {k: v.cuda() for k, v in batch.items()} generated = model.generate(batch["input_ids"], max_length=100) print(tokenizer.decode(generated[0])) ``` Error: ```bash File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:228, in LLaMAAttention.forward(self, hidden_states, past_key_value, attention_mask, output_attentions) 226 kv_seq_len += offset 227 cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) --> 228 query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, offset=offset) 229 # [bsz, nh, t, hd] 231 if past_key_value is not None: 232 # reuse k, v, self_attention File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:142, in apply_rotary_pos_emb(q, k, cos, sin, offset) 140 cos = cos[..., offset : q.shape[-2] + offset, :] 141 sin = sin[..., offset : q.shape[-2] + offset, :] --> 142 q_embed = (q * cos) + (rotate_half(q) * sin) 143 k_embed = (k * cos) + (rotate_half(k) * sin) 144 return q_embed, k_embed File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:136, in rotate_half(x) 134 x1 = x[..., : x.shape[-1] // 2] 135 x2 = x[..., x.shape[-1] // 2] --> 136 return torch.cat((-x2, x1), dim=-1) RuntimeError: Tensors must have same number of dimensions: got 3 and 4 ``` transformers version: [a3dfcc02d249cbd14ce9089f57d4040146f3f090](https://github.com/zphang/transformers/commit/a3dfcc02d249cbd14ce9089f57d4040146f3f090) <|||||>@mymusise 
where did you get these models from?<|||||>> @mymusise where did you get these models from? @amrrs Did you mean the transformers implementation of llama? Here, the latest code of this PR: https://github.com/zphang/transformers/tree/llama_push <|||||>> @zphang Hi, I got some error with follow code: > > ```python > import transformers > import torch > > device = "cuda:0" if torch.cuda.is_available() else "cpu" > > tokenizer = transformers.LLaMATokenizer.from_pretrained("LLaMA-hf/llama-7b") > model = transformers.LLaMAForCausalLM.from_pretrained("LLaMA-hf/llama-7b").to(device) > > batch = tokenizer( > "The primary use of LLaMA is research on large language models, including", > return_tensors="pt", > add_special_tokens=False > ) > batch = {k: v.cuda() for k, v in batch.items()} > generated = model.generate(batch["input_ids"], max_length=100) > print(tokenizer.decode(generated[0])) > ``` > > Error: > > ```shell > File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:228, in LLaMAAttention.forward(self, hidden_states, past_key_value, attention_mask, output_attentions) > 226 kv_seq_len += offset > 227 cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) > --> 228 query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, offset=offset) > 229 # [bsz, nh, t, hd] > 231 if past_key_value is not None: > 232 # reuse k, v, self_attention > > File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:142, in apply_rotary_pos_emb(q, k, cos, sin, offset) > 140 cos = cos[..., offset : q.shape[-2] + offset, :] > 141 sin = sin[..., offset : q.shape[-2] + offset, :] > --> 142 q_embed = (q * cos) + (rotate_half(q) * sin) > 143 k_embed = (k * cos) + (rotate_half(k) * sin) > 144 return q_embed, k_embed > > File ./venv/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py:136, in rotate_half(x) > 134 x1 = x[..., : x.shape[-1] // 2] > 135 x2 = x[..., x.shape[-1] // 2] > --> 136 return torch.cat((-x2, x1), dim=-1) > > RuntimeError: Tensors must have same number of dimensions: got 3 and 4 > ``` > > transformers version: [a3dfcc02d249cbd14ce9089f57d4040146f3f090](https://github.com/zphang/transformers/commit/a3dfcc02d249cbd14ce9089f57d4040146f3f090) @zphang Hi, here's a PR for fix this: https://github.com/zphang/transformers/pull/3/files Looks like this was little accidentally changed in this commit: https://github.com/zphang/transformers/commit/e2faccb96ae76460408f559bdde51dc6dda28847#diff-06392bad3b9e97be9ade60d4ac46f73b6809388f4d507c2ba1384ab872711c51L135 <|||||>There is some difference between the implementation of rotary position embedding in [PR ](https://github.com/zphang/transformers/blob/a3dfcc02d249cbd14ce9089f57d4040146f3f090/src/transformers/models/llama/modeling_llama.py#L100) and [llama inference](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/llama/model.py#L47) maybe the problem is in [this line](https://github.com/zphang/transformers/blob/a3dfcc02d249cbd14ce9089f57d4040146f3f090/src/transformers/models/llama/modeling_llama.py#L135) change the following into `x2 = x[..., :x.shape[-1] // 2]`<|||||>> You can try to install sentencepiece ``` pip install sentencepiece ```<|||||>Thanks all, corrected two mistakes from my commit last night. Hopefully this version is close to final.<|||||>Inference with this code requires significantly more memory than with Facebook's impl. Not sure if I'm doing something wrong or if it's a real problem. 
- Facebook impl: 30B model (re-sharded for world size 2) fits in ~68GB VRAM and can do 1024 sequence length no problem. ~16 tokens per second. - This impl: 30B model exceeds 96GB VRAM and puts an additional 20ish GB on CPU. Fails for 1024 sequence length; can do 128 but very slow. Quantizing brings it down to 34GB but still slower than Facebook.<|||||>@haydenshively What code are you using on the Transformers side? Are you sure you are sending `torch_dtype=torch.float16` or `torch_dtype=torch.bfloat16`? It looks like you have a model in FP32 with all the memory used.<|||||>Giving a heads up: the HF folks have requested I rename some of the class names to follow full camel-case (e.g. `LlamaModel` rather than `LLaMAModel`), so expect that to kick in in an upcoming commit.<|||||>@zphang I saw the tokenizer config changed so that the BOS token is a space; is the proper behavior, where the model decides the usage, still planned? For example, the test sentence "My favorite sport is foot" produces "My favorite sport is foot bal". While I agree that always spaces is better than never spaces (it would have been my workaround too), I'd rather see the earlier mentioned tokenizer change where the tokenizer no longer trims the first space if the model decides to add it. Or did it turn out the model can't do that after all?<|||||>I added a `decode_with_prefix_space` option (though currently still defaults to False). I've also asked the HF folks to chime in on this issue (they should be reviewing it tomorrow), so the current functionality isn't quite finalized yet. (`decode_with_prefix_space` only adds a space if the first token requires a prefix space.)<|||||>> I added a `decode_with_prefix_space` option (though currently still defaults to False). I've also asked the HF folks to chime in on this issue (they should be reviewing it tomorrow), so the current functionality isn't quite finalized yet. > > (`decode_with_prefix_space` only adds a space if the first token requires a prefix space.) Am I correct in seeing that the BOS token is now a space? Would this cause issues if I enable that option, to where we then get double spaces, or does it change the BOS behavior? Because on the latest conversions I am noticing the spaces are being added, but also in cases where they shouldn't be. Our software uses AutoModelForCausalLM by default; I assume that to apply decode_with_prefix_space I have to load the tokenizer more directly? Update: Seems like I can answer my own question. I'd need to manually call the LLaMATokenizer with the add_bos_token=False, decode_with_prefix_space=True options. This should then eliminate the always-adding-a-space behavior and restore the behavior I seek. Personally I would vote for those to be the defaults, since to me adding a space as the BOS at all times is undesirable behavior compared to the decode_with_prefix_space solution. And I can imagine other projects who do not want decode_with_prefix_space would not want the space as the bos token either.<|||||>> I'd need to manually call the LLaMATokenizer with the add_bos_token=False, decode_with_prefix_space=True options. Yes. Alternatively, set those attributes after instantiating (I presume from an AutoTokenizer). I don't really have a strong preference for what the default should be, but I suspect the HF folks might, because these models feed into other HF infrastructure (e.g. pipelines) which may influence what they want for the default.
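For reference, a minimal sketch of the non-default setup being discussed (the checkpoint path is hypothetical, and the option names `add_bos_token` / `decode_with_prefix_space` are the ones from this PR):

```python
from transformers import AutoTokenizer

# Hypothetical local path to a converted checkpoint; adjust to your setup.
tokenizer = AutoTokenizer.from_pretrained("models/llama-7b")

# Either pass the options at load time ...
# tokenizer = AutoTokenizer.from_pretrained(
#     "models/llama-7b", add_bos_token=False, decode_with_prefix_space=True
# )
# ... or set the attributes after instantiating:
tokenizer.add_bos_token = False
tokenizer.decode_with_prefix_space = True
```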
I think that as long as the implementation supports the different use-cases, and the user can choose what behavior they want, that would be best for all.<|||||>Understood, for downstream projects the suggested behavior most closely resembles what the AutoModelForCausalLM models combined with AutoTokenizer do for the other implementations. So I hope that is the behavior that HF will conclude, otherwise it will be the beginning of us needing to do model specific tokenizer calls rather than relying on AutoTokenizer's defaults.<|||||>I also started to run into `ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.` just as in https://github.com/huggingface/transformers/pull/21955#issuecomment-1465291081<|||||>What tokenizer settings should I use now to *not* having an extra space on the beginning? Adding extra space on the beginning suddenly started to break my code.<|||||>> @haydenshively What code are you using on the Transformers side? Are you sure you are sending `torch_dtype=torch.float16` or `torch_dtype=torch.bfloat16` as it looks like you have a model in FP32 with all the memory used. It seems `torch_dtype` is set in class `PretrainedConfig` initialization, but bfloat16 is not set.<|||||>> I also started to run into `ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.` just as in [#21955 (comment)](https://github.com/huggingface/transformers/pull/21955#issuecomment-1465291081) Ah, yes if sentencepiece is not available, the tokenizer import will magically fail at runtime: <img width="464" alt="image" src="https://user-images.githubusercontent.com/1486609/224893072-1b6940b5-f2a8-42d8-a51c-637cd1fe6507.png"> <|||||>I've been doing some testing with this model. In particular, I made the following dummy function that utilizes the model's forward method: ``` def gen_next_tokens(inp, model, max_gen_len, mask_id): tokenized = tokenizer.encode(inp, return_tensors='pt').to(DEV) total_len = min(model.seqlen, max_gen_len + tokenized.shape[1]) tokens = torch.full((1, total_len), mask_id).to(DEV) tokens[0, :tokenized.shape[1]] = tokenized[0] attention_mask = tokens != mask_id past_key_values = None for cur_id in range(tokenized.shape[1], total_len): print(cur_id - tokenized.shape[1]) if past_key_values: output = model(tokens, attention_mask, past_key_values=past_key_values, use_cache=True) else: output = model(tokens, attention_mask=attention_mask, use_cache=True) past_key_values = output.past_key_values logits = output.logits next_token = torch.argmax(logits, axis=-1)[0][cur_id-1] tokens[0, cur_id] = next_token attention_mask[0, cur_id] = True ``` When taking the past_key_values tuples I get and plugging them back into the model's forward function, I get the following error: ``` [/usr/local/lib/python3.9/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) 490 ) 491 combined_attention_mask = ( --> 492 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask 493 ) 494 RuntimeError: The size of tensor a (391) must match the size of tensor b (782) at non-singleton dimension 3 ``` Is this expected behavior? Is there some form of preprocessing that needs to be done on past_key_values?<|||||>> I've been doing some testing with this model. 
In particular, I made the following dummy function that utilizes the model's forward method: > > ``` > def gen_next_tokens(inp, model, max_gen_len, mask_id): > tokenized = tokenizer.encode(inp, return_tensors='pt').to(DEV) > total_len = min(model.seqlen, max_gen_len + tokenized.shape[1]) > > tokens = torch.full((1, total_len), mask_id).to(DEV) > tokens[0, :tokenized.shape[1]] = tokenized[0] > attention_mask = tokens != mask_id > past_key_values = None > > for cur_id in range(tokenized.shape[1], total_len): > print(cur_id - tokenized.shape[1]) > if past_key_values: > output = model(tokens, attention_mask, past_key_values=past_key_values, use_cache=True) > else: > output = model(tokens, attention_mask=attention_mask, use_cache=True) > > past_key_values = output.past_key_values > logits = output.logits > > next_token = torch.argmax(logits, axis=-1)[0][cur_id-1] > tokens[0, cur_id] = next_token > attention_mask[0, cur_id] = True > ``` > > When taking the past_key_values tuples I get and plugging them back into the model's forward function, I get the following error: > > ``` > [/usr/local/lib/python3.9/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) > 490 ) > 491 combined_attention_mask = ( > --> 492 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask > 493 ) > 494 > > RuntimeError: The size of tensor a (391) must match the size of tensor b (782) at non-singleton dimension 3 > ``` > > Is this expected behavior? Is there some form of preprocessing that needs to be done on past_key_values? Firstly, you are not doing this right. When you pass `past_key_values` you are not supposed to be passing tokens for which you provide `past_key_values` (in your example you need to pass `next_token` as `input_ids` when using cache). Secondly, [a fix](https://github.com/huggingface/transformers/pull/21955/commits/ef61b1ba1a8ee9fd354b640b059c3474b676c0c5) has just been pushed to finally fix LLaMA's `past_key_values`. Give it a try!<|||||>> > I've been doing some testing with this model. 
In particular, I made the following dummy function that utilizes the model's forward method: > > ``` > > def gen_next_tokens(inp, model, max_gen_len, mask_id): > > tokenized = tokenizer.encode(inp, return_tensors='pt').to(DEV) > > total_len = min(model.seqlen, max_gen_len + tokenized.shape[1]) > > > > tokens = torch.full((1, total_len), mask_id).to(DEV) > > tokens[0, :tokenized.shape[1]] = tokenized[0] > > attention_mask = tokens != mask_id > > past_key_values = None > > > > for cur_id in range(tokenized.shape[1], total_len): > > print(cur_id - tokenized.shape[1]) > > if past_key_values: > > output = model(tokens, attention_mask, past_key_values=past_key_values, use_cache=True) > > else: > > output = model(tokens, attention_mask=attention_mask, use_cache=True) > > > > past_key_values = output.past_key_values > > logits = output.logits > > > > next_token = torch.argmax(logits, axis=-1)[0][cur_id-1] > > tokens[0, cur_id] = next_token > > attention_mask[0, cur_id] = True > > ``` > > > > > > > > > > > > > > > > > > > > > > > > When taking the past_key_values tuples I get and plugging them back into the model's forward function, I get the following error: > > ``` > > [/usr/local/lib/python3.9/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) > > 490 ) > > 491 combined_attention_mask = ( > > --> 492 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask > > 493 ) > > 494 > > > > RuntimeError: The size of tensor a (391) must match the size of tensor b (782) at non-singleton dimension 3 > > ``` > > > > > > > > > > > > > > > > > > > > > > > > Is this expected behavior? Is there some form of preprocessing that needs to be done on past_key_values? > > Firstly, you are not doing this right. When you pass `past_key_values` you are not supposed to be passing tokens for which you provide `past_key_values` (in your example you need to pass `next_token` as `input_ids` when using cache). Secondly, [a fix](https://github.com/huggingface/transformers/pull/21955/commits/ef61b1ba1a8ee9fd354b640b059c3474b676c0c5) has just been pushed to finally fix LLaMA's `past_key_values`. Give it a try! Excellent, I'll give that a shot. Thanks so much! EDIT: Worked like a charm. 
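For anyone else who hits the same error, here is roughly what the corrected loop looks like once `past_key_values` is used (just a sketch; variable names are mine, and `tokenizer`, `model`, `prompt`, `device`, `max_new_tokens` are assumed to be defined):

```python
import torch

# incremental greedy decoding with the KV cache (batch size 1)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
past_key_values = None
next_token = None

for _ in range(max_new_tokens):
    if past_key_values is None:
        # first step: feed the full prompt and ask the model to return its cache
        out = model(input_ids, use_cache=True)
    else:
        # once the cache is passed, feed only the newly generated token
        out = model(next_token, past_key_values=past_key_values, use_cache=True)
    past_key_values = out.past_key_values
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```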
Looks good to me.<|||||>I got the following error: Some weights of the model checkpoint at /media/user/80EADB63EADB53CE/LLaMA/LLaMA_converted/llama-7b were not used when initializing LLaMAForCausalLM: ['model.decoder.layers.0.self_attn.v_proj.weight', 'model.decoder.layers.4.self_attn.k_proj.weight', 'model.decoder.layers.19.self_attn.k_proj.weight', 'model.decoder.layers.23.attention_norm.weight', 'model.decoder.layers.5.feed_forward.w3.weight', 'model.decoder.layers.14.self_attn.k_proj.weight', 'model.decoder.layers.26.feed_forward.w1.weight', 'model.decoder.layers.22.feed_forward.w3.weight', 'model.decoder.layers.12.attention_norm.weight', 'model.decoder.layers.3.self_attn.k_proj.weight', 'model.decoder.layers.1.feed_forward.w1.weight', 'model.decoder.layers.20.self_attn.q_proj.weight', 'model.decoder.layers.14.self_attn.o_proj.weight', 'model.decoder.layers.10.attention_norm.weight', 'model.decoder.layers.1.self_attn.q_proj.weight', 'model.decoder.layers.28.self_attn.v_proj.weight', 'model.decoder.layers.8.self_attn.q_proj.weight', 'model.decoder.layers.26.attention_norm.weight', 'model.decoder.layers.6.feed_forward.w2.weight', 'model.decoder.layers.8.feed_forward.w3.weight', 'model.decoder.layers.26.feed_forward.w3.weight', 'model.decoder.layers.10.feed_forward.w2.weight', 'model.decoder.layers.5.self_attn.q_proj.weight', 'model.decoder.layers.10.ffn_norm.weight', 'model.decoder.layers.15.attention_norm.weight', 'model.decoder.layers.0.attention_norm.weight', 'model.decoder.layers.26.self_attn.o_proj.weight', 'model.decoder.layers.1.self_attn.k_proj.weight', 'model.decoder.layers.24.feed_forward.w1.weight', 'model.decoder.layers.27.self_attn.k_proj.weight', 'model.decoder.layers.12.feed_forward.w1.weight', 'model.decoder.layers.6.self_attn.v_proj.weight', 'model.decoder.layers.2.self_attn.k_proj.weight', 'model.decoder.layers.10.self_attn.v_proj.weight', 'model.decoder.layers.9.self_attn.k_proj.weight', 'model.decoder.layers.31.ffn_norm.weight', 'model.decoder.layers.4.self_attn.o_proj.weight', 'model.decoder.layers.10.self_attn.k_proj.weight', 'model.decoder.layers.6.attention_norm.weight', 'model.decoder.layers.0.feed_forward.w1.weight', 'model.decoder.layers.11.self_attn.k_proj.weight', 'model.decoder.layers.14.attention_norm.weight', 'model.decoder.layers.18.feed_forward.w2.weight', 'model.decoder.layers.22.self_attn.k_proj.weight', 'model.decoder.layers.24.self_attn.o_proj.weight', 'model.decoder.layers.4.feed_forward.w2.weight', 'model.decoder.layers.23.self_attn.q_proj.weight', 'model.decoder.layers.25.feed_forward.w2.weight', 'model.decoder.layers.3.self_attn.v_proj.weight', 'model.decoder.layers.18.feed_forward.w1.weight', 'model.decoder.layers.30.self_attn.o_proj.weight', 'model.decoder.layers.7.ffn_norm.weight', 'model.decoder.layers.18.self_attn.q_proj.weight', 'model.decoder.layers.26.self_attn.k_proj.weight', 'model.decoder.layers.14.feed_forward.w1.weight', 'model.decoder.layers.27.self_attn.v_proj.weight', 'model.decoder.layers.14.feed_forward.w3.weight', 'model.decoder.layers.27.ffn_norm.weight', 'model.decoder.layers.14.self_attn.q_proj.weight', 'model.decoder.layers.25.ffn_norm.weight', 'model.decoder.layers.10.self_attn.q_proj.weight', 'model.decoder.layers.28.attention_norm.weight', 'model.decoder.layers.24.self_attn.q_proj.weight', 'model.decoder.layers.2.feed_forward.w1.weight', 'model.decoder.layers.2.self_attn.q_proj.weight', 'model.decoder.layers.0.self_attn.o_proj.weight', 'model.decoder.layers.18.ffn_norm.weight', 
'model.decoder.layers.4.self_attn.q_proj.weight', 'model.decoder.layers.11.feed_forward.w3.weight', 'model.decoder.layers.25.self_attn.q_proj.weight', 'model.decoder.layers.25.self_attn.v_proj.weight', 'model.decoder.layers.21.self_attn.o_proj.weight', 'model.decoder.layers.3.ffn_norm.weight', 'model.decoder.layers.4.feed_forward.w3.weight', 'model.decoder.layers.16.feed_forward.w2.weight', 'model.decoder.layers.30.self_attn.v_proj.weight', 'model.decoder.layers.6.ffn_norm.weight', 'model.decoder.layers.12.self_attn.o_proj.weight', 'model.decoder.layers.12.feed_forward.w2.weight', 'model.decoder.layers.27.feed_forward.w1.weight', 'model.decoder.layers.29.feed_forward.w2.weight', 'model.decoder.layers.20.feed_forward.w3.weight', 'model.decoder.layers.24.attention_norm.weight', 'model.decoder.layers.29.self_attn.v_proj.weight', 'model.decoder.layers.23.self_attn.v_proj.weight', 'model.decoder.layers.19.attention_norm.weight', 'model.decoder.layers.26.self_attn.v_proj.weight', 'model.decoder.layers.14.ffn_norm.weight', 'model.decoder.layers.2.attention_norm.weight', 'model.decoder.layers.28.ffn_norm.weight', 'model.decoder.layers.12.ffn_norm.weight', 'model.decoder.layers.19.ffn_norm.weight', 'model.decoder.layers.30.feed_forward.w1.weight', 'model.decoder.layers.0.feed_forward.w2.weight', 'model.decoder.layers.21.ffn_norm.weight', 'model.decoder.layers.3.self_attn.q_proj.weight', 'model.decoder.layers.21.feed_forward.w1.weight', 'model.decoder.layers.27.feed_forward.w2.weight', 'model.decoder.layers.29.self_attn.k_proj.weight', 'model.decoder.layers.22.self_attn.o_proj.weight', 'model.decoder.layers.27.attention_norm.weight', 'model.decoder.layers.13.feed_forward.w3.weight', 'model.decoder.layers.8.self_attn.o_proj.weight', 'model.decoder.layers.14.feed_forward.w2.weight', 'model.decoder.layers.11.feed_forward.w1.weight', 'model.decoder.layers.15.self_attn.q_proj.weight', 'model.decoder.layers.5.self_attn.o_proj.weight', 'model.decoder.layers.12.self_attn.q_proj.weight', 'model.decoder.layers.12.self_attn.k_proj.weight', 'model.decoder.layers.18.self_attn.v_proj.weight', 'model.decoder.layers.22.feed_forward.w2.weight', 'model.decoder.layers.1.feed_forward.w3.weight', 'model.decoder.layers.20.attention_norm.weight', 'model.decoder.layers.3.attention_norm.weight', 'model.decoder.layers.24.self_attn.k_proj.weight', 'model.decoder.layers.7.feed_forward.w3.weight', 'model.decoder.layers.24.feed_forward.w2.weight', 'model.decoder.layers.25.feed_forward.w1.weight', 'model.decoder.norm.weight', 'model.decoder.layers.20.self_attn.o_proj.weight', 'model.decoder.layers.16.self_attn.k_proj.weight', 'model.decoder.layers.10.self_attn.o_proj.weight', 'model.decoder.layers.28.feed_forward.w2.weight', 'model.decoder.layers.29.attention_norm.weight', 'model.decoder.layers.17.self_attn.q_proj.weight', 'model.decoder.layers.19.feed_forward.w1.weight', 'model.decoder.layers.31.feed_forward.w2.weight', 'model.decoder.layers.7.self_attn.v_proj.weight', 'model.decoder.layers.21.attention_norm.weight', 'model.decoder.layers.19.feed_forward.w2.weight', 'model.decoder.layers.4.self_attn.v_proj.weight', 'model.decoder.layers.23.feed_forward.w3.weight', 'model.decoder.layers.9.feed_forward.w3.weight', 'model.decoder.layers.17.self_attn.o_proj.weight', 'model.decoder.layers.11.self_attn.v_proj.weight', 'model.decoder.layers.13.feed_forward.w2.weight', 'model.decoder.layers.17.feed_forward.w3.weight', 'model.decoder.layers.31.feed_forward.w3.weight', 'model.decoder.layers.13.self_attn.k_proj.weight', 
'model.decoder.layers.9.feed_forward.w2.weight', 'model.decoder.layers.8.attention_norm.weight', 'model.decoder.layers.4.attention_norm.weight', 'model.decoder.layers.18.self_attn.k_proj.weight', 'model.decoder.layers.4.feed_forward.w1.weight', 'model.decoder.layers.2.feed_forward.w3.weight', 'model.decoder.layers.13.ffn_norm.weight', 'model.decoder.layers.19.feed_forward.w3.weight', 'model.decoder.layers.20.ffn_norm.weight', 'model.decoder.layers.18.self_attn.o_proj.weight', 'model.decoder.layers.28.self_attn.o_proj.weight', 'model.decoder.layers.7.self_attn.q_proj.weight', 'model.decoder.layers.7.self_attn.k_proj.weight', 'model.decoder.layers.0.self_attn.k_proj.weight', 'model.decoder.layers.9.feed_forward.w1.weight', 'model.decoder.layers.13.self_attn.q_proj.weight', 'model.decoder.layers.17.self_attn.v_proj.weight', 'model.decoder.layers.13.self_attn.o_proj.weight', 'model.decoder.layers.8.feed_forward.w1.weight', 'model.decoder.layers.0.feed_forward.w3.weight', 'model.decoder.layers.1.feed_forward.w2.weight', 'model.decoder.layers.3.self_attn.o_proj.weight', 'model.decoder.layers.16.self_attn.v_proj.weight', 'model.decoder.layers.7.self_attn.o_proj.weight', 'model.decoder.layers.11.self_attn.o_proj.weight', 'model.decoder.layers.15.feed_forward.w2.weight', 'model.decoder.layers.15.feed_forward.w1.weight', 'model.decoder.layers.28.self_attn.q_proj.weight', 'model.decoder.layers.8.ffn_norm.weight', 'model.decoder.layers.22.attention_norm.weight', 'model.decoder.layers.11.attention_norm.weight', 'model.decoder.layers.17.self_attn.k_proj.weight', 'model.decoder.layers.7.feed_forward.w2.weight', 'model.decoder.layers.27.feed_forward.w3.weight', 'model.decoder.layers.16.attention_norm.weight', 'model.decoder.layers.21.self_attn.v_proj.weight', 'model.decoder.layers.15.feed_forward.w3.weight', 'model.decoder.layers.20.self_attn.k_proj.weight', 'model.decoder.layers.15.self_attn.o_proj.weight', 'model.decoder.layers.31.self_attn.q_proj.weight', 'model.decoder.layers.2.self_attn.o_proj.weight', 'model.decoder.layers.29.feed_forward.w3.weight', 'model.decoder.layers.21.feed_forward.w2.weight', 'model.decoder.layers.23.self_attn.k_proj.weight', 'model.decoder.layers.5.self_attn.v_proj.weight', 'model.decoder.layers.5.ffn_norm.weight', 'model.decoder.layers.15.self_attn.v_proj.weight', 'model.decoder.layers.17.ffn_norm.weight', 'model.decoder.layers.8.feed_forward.w2.weight', 'model.decoder.layers.9.self_attn.v_proj.weight', 'model.decoder.layers.0.ffn_norm.weight', 'model.decoder.layers.20.self_attn.v_proj.weight', 'model.decoder.layers.23.self_attn.o_proj.weight', 'model.decoder.layers.29.ffn_norm.weight', 'model.decoder.layers.30.feed_forward.w2.weight', 'model.decoder.layers.11.ffn_norm.weight', 'model.decoder.layers.2.self_attn.v_proj.weight', 'model.decoder.layers.2.feed_forward.w2.weight', 'model.decoder.layers.3.feed_forward.w2.weight', 'model.decoder.layers.18.attention_norm.weight', 'model.decoder.layers.28.self_attn.k_proj.weight', 'model.decoder.embed_tokens.weight', 'model.decoder.layers.29.feed_forward.w1.weight', 'model.decoder.layers.30.ffn_norm.weight', 'model.decoder.layers.20.feed_forward.w2.weight', 'model.decoder.layers.12.self_attn.v_proj.weight', 'model.decoder.layers.20.feed_forward.w1.weight', 'model.decoder.layers.25.attention_norm.weight', 'model.decoder.layers.16.ffn_norm.weight', 'model.decoder.layers.25.feed_forward.w3.weight', 'model.decoder.layers.7.feed_forward.w1.weight', 'model.decoder.layers.8.self_attn.v_proj.weight', 
'model.decoder.layers.6.feed_forward.w1.weight', 'model.decoder.layers.16.self_attn.q_proj.weight', 'model.decoder.layers.6.feed_forward.w3.weight', 'model.decoder.layers.21.feed_forward.w3.weight', 'model.decoder.layers.22.self_attn.q_proj.weight', 'model.decoder.layers.17.attention_norm.weight', 'model.decoder.layers.27.self_attn.q_proj.weight', 'model.decoder.layers.17.feed_forward.w1.weight', 'model.decoder.layers.11.feed_forward.w2.weight', 'model.decoder.layers.12.feed_forward.w3.weight', 'model.decoder.layers.13.self_attn.v_proj.weight', 'model.decoder.layers.28.feed_forward.w3.weight', 'model.decoder.layers.1.attention_norm.weight', 'model.decoder.layers.15.self_attn.k_proj.weight', 'model.decoder.layers.31.self_attn.k_proj.weight', 'model.decoder.layers.14.self_attn.v_proj.weight', 'model.decoder.layers.5.feed_forward.w2.weight', 'model.decoder.layers.19.self_attn.o_proj.weight', 'model.decoder.layers.31.self_attn.v_proj.weight', 'model.decoder.layers.22.ffn_norm.weight', 'model.decoder.layers.29.self_attn.q_proj.weight', 'model.decoder.layers.30.feed_forward.w3.weight', 'model.decoder.layers.5.self_attn.k_proj.weight', 'model.decoder.layers.7.attention_norm.weight', 'model.decoder.layers.29.self_attn.o_proj.weight', 'model.decoder.layers.31.self_attn.o_proj.weight', 'model.decoder.layers.16.feed_forward.w1.weight', 'model.decoder.layers.21.self_attn.q_proj.weight', 'model.decoder.layers.16.feed_forward.w3.weight', 'model.decoder.layers.1.ffn_norm.weight', 'model.decoder.layers.1.self_attn.v_proj.weight', 'model.decoder.layers.23.ffn_norm.weight', 'model.decoder.layers.30.attention_norm.weight', 'model.decoder.layers.22.self_attn.v_proj.weight', 'model.decoder.layers.5.feed_forward.w1.weight', 'model.decoder.layers.13.feed_forward.w1.weight', 'model.decoder.layers.3.feed_forward.w1.weight', 'model.decoder.layers.22.feed_forward.w1.weight', 'model.decoder.layers.10.feed_forward.w3.weight', 'model.decoder.layers.5.attention_norm.weight', 'model.decoder.layers.30.self_attn.k_proj.weight', 'model.decoder.layers.13.attention_norm.weight', 'model.decoder.layers.23.feed_forward.w1.weight', 'model.decoder.layers.17.feed_forward.w2.weight', 'model.decoder.layers.23.feed_forward.w2.weight', 'model.decoder.layers.9.self_attn.o_proj.weight', 'model.decoder.layers.24.self_attn.v_proj.weight', 'model.decoder.layers.26.feed_forward.w2.weight', 'model.decoder.layers.26.ffn_norm.weight', 'model.decoder.layers.9.attention_norm.weight', 'model.decoder.layers.28.feed_forward.w1.weight', 'model.decoder.layers.2.ffn_norm.weight', 'model.decoder.layers.19.self_attn.q_proj.weight', 'model.decoder.layers.24.feed_forward.w3.weight', 'model.decoder.layers.15.ffn_norm.weight', 'model.decoder.layers.26.self_attn.q_proj.weight', 'model.decoder.layers.11.self_attn.q_proj.weight', 'model.decoder.layers.9.ffn_norm.weight', 'model.decoder.layers.24.ffn_norm.weight', 'model.decoder.layers.31.attention_norm.weight', 'model.decoder.layers.10.feed_forward.w1.weight', 'model.decoder.layers.25.self_attn.o_proj.weight', 'model.decoder.layers.27.self_attn.o_proj.weight', 'model.decoder.layers.3.feed_forward.w3.weight', 'model.decoder.layers.6.self_attn.k_proj.weight', 'model.decoder.layers.4.ffn_norm.weight', 'model.decoder.layers.18.feed_forward.w3.weight', 'model.decoder.layers.19.self_attn.v_proj.weight', 'model.decoder.layers.30.self_attn.q_proj.weight', 'model.decoder.layers.31.feed_forward.w1.weight', 'model.decoder.layers.0.self_attn.q_proj.weight', 'model.decoder.layers.6.self_attn.o_proj.weight', 
'model.decoder.layers.25.self_attn.k_proj.weight', 'model.decoder.layers.8.self_attn.k_proj.weight', 'model.decoder.layers.9.self_attn.q_proj.weight', 'model.decoder.layers.6.self_attn.q_proj.weight', 'model.decoder.layers.16.self_attn.o_proj.weight', 'model.decoder.layers.1.self_attn.o_proj.weight', 'model.decoder.layers.21.self_attn.k_proj.weight'] - This IS expected if you are initializing LLaMAForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LLaMAForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of LLaMAForCausalLM were not initialized from the model checkpoint at /media/user/80EADB63EADB53CE/LLaMA/LLaMA_converted/llama-7b and are newly initialized: ['model.layers.13.self_attn.k_proj.weight', 'model.layers.22.mlp.gate_proj.weight', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.2.post_attention_layernorm.weight', 'model.layers.10.self_attn.o_proj.weight', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.28.self_attn.k_proj.weight', 'model.layers.17.self_attn.v_proj.weight', 'model.layers.26.self_attn.rotary_emb.inv_freq', 'model.layers.6.self_attn.q_proj.weight', 'model.layers.21.self_attn.k_proj.weight', 'model.layers.21.self_attn.v_proj.weight', 'model.layers.0.mlp.down_proj.weight', 'model.layers.29.mlp.down_proj.weight', 'model.layers.2.self_attn.o_proj.weight', 'model.layers.31.self_attn.k_proj.weight', 'model.layers.11.mlp.gate_proj.weight', 'model.layers.10.mlp.up_proj.weight', 'model.layers.29.post_attention_layernorm.weight', 'model.layers.4.input_layernorm.weight', 'model.layers.22.self_attn.o_proj.weight', 'model.layers.17.input_layernorm.weight', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.26.self_attn.o_proj.weight', 'model.layers.6.post_attention_layernorm.weight', 'model.layers.8.self_attn.o_proj.weight', 'model.layers.9.self_attn.o_proj.weight', 'model.layers.28.mlp.gate_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.11.mlp.up_proj.weight', 'model.layers.19.self_attn.k_proj.weight', 'model.layers.0.input_layernorm.weight', 'model.layers.14.self_attn.o_proj.weight', 'model.layers.15.self_attn.k_proj.weight', 'model.layers.25.self_attn.q_proj.weight', 'model.layers.28.mlp.down_proj.weight', 'model.layers.17.mlp.down_proj.weight', 'model.layers.25.input_layernorm.weight', 'model.layers.9.self_attn.v_proj.weight', 'model.layers.1.input_layernorm.weight', 'model.layers.19.self_attn.o_proj.weight', 'model.layers.9.mlp.gate_proj.weight', 'model.layers.26.mlp.down_proj.weight', 'model.layers.18.mlp.gate_proj.weight', 'model.layers.10.mlp.gate_proj.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.10.mlp.down_proj.weight', 'model.layers.16.input_layernorm.weight', 'model.layers.4.self_attn.q_proj.weight', 'model.layers.16.self_attn.rotary_emb.inv_freq', 'model.layers.27.mlp.down_proj.weight', 'model.layers.8.self_attn.rotary_emb.inv_freq', 'model.layers.1.mlp.gate_proj.weight', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.27.post_attention_layernorm.weight', 'model.layers.6.mlp.gate_proj.weight', 'model.layers.6.self_attn.v_proj.weight', 'model.layers.25.self_attn.v_proj.weight', 'model.layers.30.mlp.up_proj.weight', 'model.layers.7.input_layernorm.weight', 
'model.layers.28.self_attn.q_proj.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.27.self_attn.q_proj.weight', 'model.layers.28.post_attention_layernorm.weight', 'model.layers.21.self_attn.q_proj.weight', 'model.layers.25.post_attention_layernorm.weight', 'model.layers.5.post_attention_layernorm.weight', 'model.layers.13.self_attn.o_proj.weight', 'model.layers.14.post_attention_layernorm.weight', 'model.layers.18.post_attention_layernorm.weight', 'model.layers.6.input_layernorm.weight', 'model.layers.24.input_layernorm.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.2.mlp.up_proj.weight', 'model.layers.7.self_attn.rotary_emb.inv_freq', 'model.layers.31.self_attn.q_proj.weight', 'model.layers.15.mlp.up_proj.weight', 'model.layers.25.mlp.gate_proj.weight', 'model.layers.18.self_attn.q_proj.weight', 'model.layers.2.self_attn.q_proj.weight', 'model.layers.23.self_attn.rotary_emb.inv_freq', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.19.post_attention_layernorm.weight', 'model.layers.19.mlp.gate_proj.weight', 'model.layers.1.self_attn.rotary_emb.inv_freq', 'model.layers.3.mlp.up_proj.weight', 'model.layers.17.mlp.up_proj.weight', 'model.layers.17.post_attention_layernorm.weight', 'model.layers.28.mlp.up_proj.weight', 'model.layers.24.self_attn.v_proj.weight', 'model.layers.7.post_attention_layernorm.weight', 'model.layers.1.self_attn.o_proj.weight', 'model.layers.18.mlp.up_proj.weight', 'model.layers.8.mlp.down_proj.weight', 'model.layers.26.mlp.gate_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.21.mlp.gate_proj.weight', 'model.layers.24.mlp.down_proj.weight', 'model.layers.16.post_attention_layernorm.weight', 'model.layers.1.mlp.down_proj.weight', 'model.layers.7.self_attn.o_proj.weight', 'model.layers.25.mlp.down_proj.weight', 'model.layers.10.self_attn.q_proj.weight', 'model.layers.30.self_attn.rotary_emb.inv_freq', 'model.layers.15.mlp.down_proj.weight', 'model.layers.1.self_attn.q_proj.weight', 'model.layers.19.input_layernorm.weight', 'model.layers.15.self_attn.v_proj.weight', 'model.layers.22.self_attn.v_proj.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.28.self_attn.rotary_emb.inv_freq', 'model.layers.28.input_layernorm.weight', 'model.layers.12.mlp.gate_proj.weight', 'model.layers.31.self_attn.rotary_emb.inv_freq', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.8.mlp.gate_proj.weight', 'model.layers.24.self_attn.q_proj.weight', 'model.layers.27.input_layernorm.weight', 'model.layers.12.self_attn.k_proj.weight', 'model.layers.20.self_attn.o_proj.weight', 'model.layers.9.post_attention_layernorm.weight', 'model.layers.29.mlp.gate_proj.weight', 'model.layers.12.post_attention_layernorm.weight', 'model.layers.5.input_layernorm.weight', 'model.layers.20.mlp.down_proj.weight', 'model.layers.3.self_attn.v_proj.weight', 'model.layers.5.mlp.up_proj.weight', 'model.layers.8.self_attn.q_proj.weight', 'model.layers.12.self_attn.v_proj.weight', 'model.layers.18.input_layernorm.weight', 'model.layers.6.self_attn.o_proj.weight', 'model.layers.15.mlp.gate_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.2.self_attn.rotary_emb.inv_freq', 'model.layers.27.self_attn.o_proj.weight', 'model.layers.10.self_attn.rotary_emb.inv_freq', 'model.layers.0.self_attn.o_proj.weight', 'model.layers.21.self_attn.o_proj.weight', 'model.layers.12.self_attn.o_proj.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.layers.7.mlp.down_proj.weight', 'model.layers.30.mlp.down_proj.weight', 
'model.layers.3.mlp.down_proj.weight', 'model.layers.4.post_attention_layernorm.weight', 'model.layers.29.self_attn.rotary_emb.inv_freq', 'model.layers.31.mlp.down_proj.weight', 'model.layers.29.self_attn.v_proj.weight', 'model.layers.17.mlp.gate_proj.weight', 'model.layers.26.input_layernorm.weight', 'model.layers.1.post_attention_layernorm.weight', 'model.layers.12.self_attn.rotary_emb.inv_freq', 'model.layers.4.mlp.gate_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.24.self_attn.o_proj.weight', 'model.layers.29.self_attn.k_proj.weight', 'model.layers.5.self_attn.q_proj.weight', 'model.layers.9.mlp.up_proj.weight', 'model.layers.27.mlp.up_proj.weight', 'model.layers.15.post_attention_layernorm.weight', 'model.layers.6.mlp.up_proj.weight', 'model.layers.24.self_attn.rotary_emb.inv_freq', 'model.layers.14.mlp.up_proj.weight', 'model.layers.11.self_attn.q_proj.weight', 'model.layers.4.mlp.up_proj.weight', 'model.layers.20.self_attn.q_proj.weight', 'model.layers.25.mlp.up_proj.weight', 'model.layers.31.post_attention_layernorm.weight', 'model.layers.20.self_attn.rotary_emb.inv_freq', 'model.layers.17.self_attn.o_proj.weight', 'model.layers.15.self_attn.q_proj.weight', 'model.layers.22.post_attention_layernorm.weight', 'model.layers.9.self_attn.rotary_emb.inv_freq', 'model.embed_tokens.weight', 'model.layers.2.mlp.down_proj.weight', 'model.layers.15.self_attn.o_proj.weight', 'model.layers.27.self_attn.rotary_emb.inv_freq', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.22.self_attn.q_proj.weight', 'model.layers.22.self_attn.rotary_emb.inv_freq', 'model.layers.14.self_attn.q_proj.weight', 'model.layers.22.input_layernorm.weight', 'model.layers.13.mlp.gate_proj.weight', 'model.layers.14.self_attn.v_proj.weight', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.11.post_attention_layernorm.weight', 'model.layers.3.input_layernorm.weight', 'model.layers.13.self_attn.q_proj.weight', 'model.layers.15.self_attn.rotary_emb.inv_freq', 'model.layers.8.input_layernorm.weight', 'model.layers.11.self_attn.rotary_emb.inv_freq', 'model.layers.19.mlp.down_proj.weight', 'model.layers.18.mlp.down_proj.weight', 'model.layers.2.mlp.gate_proj.weight', 'model.layers.11.self_attn.o_proj.weight', 'model.layers.9.mlp.down_proj.weight', 'model.layers.20.mlp.gate_proj.weight', 'model.layers.23.mlp.gate_proj.weight', 'model.layers.16.self_attn.k_proj.weight', 'model.layers.26.self_attn.k_proj.weight', 'model.layers.1.mlp.up_proj.weight', 'model.layers.8.post_attention_layernorm.weight', 'model.layers.26.self_attn.q_proj.weight', 'model.layers.12.mlp.down_proj.weight', 'model.layers.23.self_attn.v_proj.weight', 'model.layers.3.self_attn.o_proj.weight', 'model.layers.9.self_attn.q_proj.weight', 'model.layers.14.mlp.gate_proj.weight', 'model.layers.16.self_attn.v_proj.weight', 'model.layers.17.self_attn.k_proj.weight', 'model.layers.23.self_attn.o_proj.weight', 'model.layers.13.mlp.up_proj.weight', 'model.layers.6.self_attn.rotary_emb.inv_freq', 'model.layers.13.self_attn.v_proj.weight', 'model.layers.25.self_attn.o_proj.weight', 'model.layers.0.mlp.gate_proj.weight', 'model.layers.30.input_layernorm.weight', 'model.layers.16.mlp.gate_proj.weight', 'model.layers.3.self_attn.rotary_emb.inv_freq', 'model.layers.2.input_layernorm.weight', 'model.layers.13.input_layernorm.weight', 'model.layers.14.self_attn.k_proj.weight', 'model.layers.19.self_attn.q_proj.weight', 'model.layers.30.self_attn.k_proj.weight', 'model.layers.3.self_attn.q_proj.weight', 
'model.layers.18.self_attn.v_proj.weight', 'model.layers.31.self_attn.v_proj.weight', 'model.layers.11.self_attn.k_proj.weight', 'model.layers.30.mlp.gate_proj.weight', 'model.layers.20.self_attn.k_proj.weight', 'model.layers.20.mlp.up_proj.weight', 'model.layers.12.mlp.up_proj.weight', 'model.layers.15.input_layernorm.weight', 'model.layers.28.self_attn.v_proj.weight', 'model.layers.22.mlp.up_proj.weight', 'model.layers.18.self_attn.k_proj.weight', 'model.layers.7.mlp.gate_proj.weight', 'model.layers.30.self_attn.v_proj.weight', 'model.layers.20.input_layernorm.weight', 'model.layers.7.mlp.up_proj.weight', 'model.layers.10.post_attention_layernorm.weight', 'model.layers.13.post_attention_layernorm.weight', 'model.layers.16.self_attn.o_proj.weight', 'model.layers.23.self_attn.k_proj.weight', 'model.layers.0.self_attn.rotary_emb.inv_freq', 'model.layers.24.self_attn.k_proj.weight', 'model.layers.18.self_attn.o_proj.weight', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.3.post_attention_layernorm.weight', 'model.layers.24.mlp.gate_proj.weight', 'model.layers.5.self_attn.o_proj.weight', 'model.layers.0.mlp.up_proj.weight', 'model.layers.11.input_layernorm.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.26.post_attention_layernorm.weight', 'model.layers.20.post_attention_layernorm.weight', 'model.layers.25.self_attn.k_proj.weight', 'model.layers.0.post_attention_layernorm.weight', 'model.layers.21.mlp.up_proj.weight', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.24.post_attention_layernorm.weight', 'model.layers.19.self_attn.rotary_emb.inv_freq', 'model.layers.19.self_attn.v_proj.weight', 'model.layers.30.self_attn.o_proj.weight', 'model.layers.30.post_attention_layernorm.weight', 'model.layers.16.self_attn.q_proj.weight', 'model.layers.23.mlp.up_proj.weight', 'model.layers.26.self_attn.v_proj.weight', 'model.layers.19.mlp.up_proj.weight', 'model.layers.3.mlp.gate_proj.weight', 'model.layers.26.mlp.up_proj.weight', 'model.layers.29.self_attn.q_proj.weight', 'model.layers.9.input_layernorm.weight', 'model.layers.21.self_attn.rotary_emb.inv_freq', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.11.mlp.down_proj.weight', 'model.layers.14.mlp.down_proj.weight', 'model.layers.31.input_layernorm.weight', 'model.layers.20.self_attn.v_proj.weight', 'model.layers.29.input_layernorm.weight', 'model.layers.13.mlp.down_proj.weight', 'model.layers.7.self_attn.q_proj.weight', 'model.layers.12.input_layernorm.weight', 'model.layers.21.mlp.down_proj.weight', 'model.layers.22.self_attn.k_proj.weight', 'model.layers.29.self_attn.o_proj.weight', 'model.layers.17.self_attn.rotary_emb.inv_freq', 'model.layers.21.input_layernorm.weight', 'model.layers.14.input_layernorm.weight', 'model.layers.31.mlp.gate_proj.weight', 'model.layers.14.self_attn.rotary_emb.inv_freq', 'model.layers.29.mlp.up_proj.weight', 'model.layers.30.self_attn.q_proj.weight', 'model.layers.23.mlp.down_proj.weight', 'model.layers.10.input_layernorm.weight', 'model.layers.22.mlp.down_proj.weight', 'model.layers.12.self_attn.q_proj.weight', 'model.layers.23.self_attn.q_proj.weight', 'model.layers.4.self_attn.o_proj.weight', 'model.layers.16.mlp.down_proj.weight', 'model.layers.23.input_layernorm.weight', 'model.layers.27.self_attn.k_proj.weight', 'model.layers.28.self_attn.o_proj.weight', 'model.layers.5.self_attn.rotary_emb.inv_freq', 'model.layers.5.mlp.gate_proj.weight', 'model.layers.21.post_attention_layernorm.weight', 'model.layers.8.mlp.up_proj.weight', 
'model.layers.4.self_attn.rotary_emb.inv_freq', 'model.layers.17.self_attn.q_proj.weight', 'model.layers.24.mlp.up_proj.weight', 'model.layers.27.self_attn.v_proj.weight', 'model.layers.16.mlp.up_proj.weight', 'model.layers.18.self_attn.rotary_emb.inv_freq', 'model.layers.13.self_attn.rotary_emb.inv_freq', 'model.layers.5.mlp.down_proj.weight', 'model.layers.25.self_attn.rotary_emb.inv_freq', 'model.layers.27.mlp.gate_proj.weight', 'model.layers.31.self_attn.o_proj.weight', 'model.layers.31.mlp.up_proj.weight', 'model.layers.23.post_attention_layernorm.weight', 'model.norm.weight', 'model.layers.4.mlp.down_proj.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.<|||||>@ Everyone: I've pushed another breaking change to the code (renaming all LLaMA tokenizer, configuration and model classes to match the camelcase convention in HF Transformers (`LLaMA*` -> `Llama*`). This will require updating code, or at minimum, updating the configuration files for the tokenizers and models. @sgugger I've incorporated all your feedback, except on updating the checkpointing script. I'm not sure if your comment meant that this can be updated in a PR; if not, I should be able to work on this tomorrow. Do let me know if anything else still needs addressing.<|||||>Hello, I have a question about training llama. Does the current version of training not support the zero strategy of deepspeed? I added the deepspeed configuration file, but it doesn't seem to take effect. Looking for your reply, thank you~<|||||>> I get different results in the original PR implementation (which had licensing issues) and the current, modified implementation. > > Using the prompt `This is a story about` and > > ``` > output = model.generate(input_ids, do_sample=False) > ``` > > I get these outputs: > > ## Original: > > This is a story about how we got here, and the challenges ahead. > > The first time I saw a drone was in 2013 when my wife and I were visiting our daughter at college. We went to her dorm room one afternoon to drop off some things she needed for class that day, and as soon as we walked into the lobby, there it was: a quadcopter hovering just above us, its four propellers spinning lazily in the air. It wasn’t doing anything special—it was just flying around on its own, like an insect buzzing around your head. But it made me feel uneasy; I didn’t know what to make of this new technology. The next year, after I had moved back home from Washington, D.C., where I worked for a nonprofit, I started noticing them more often while walking through neighborhoods or hiking in the woods. Sometimes they would be parked > > ## Current: > > This is a story about a man who was born in the 1920s and lived through the Great Depression, World War II, and the Korean War. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. 
He was a man who was a part of the greatest generation. He > > Maybe I am doing something wrong, so can someone double check this? me too, there must be something wrong... <|||||>> That may be caused by the different ways of rotary position embedding implementation. I tried to check the output of rotary embeddings between this pr and raw llama infer code as follows, ``` ### rotary implementation in raw code import torch class RotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): super().__init__() inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) self.register_buffer("inv_freq", inv_freq) # Build here to make `torch.jit.trace` work. self.max_seq_len_cached = max_position_embeddings t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) self.cos_cached = emb.cos()[None, None, :, :] self.sin_cached = emb.sin()[None, None, :, :] def forward(self, x, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. if seq_len > self.max_seq_len_cached: self.max_seq_len_cached = seq_len t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1).to(x.device) self.cos_cached = emb.cos()[None, None, :, :].to(dtype=x.dtype) self.sin_cached = emb.sin()[None, None, :, :].to(dtype=x.dtype) return ( self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), ) def rotate_half(x): """Rotates half the hidden dims of the input.""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., : x.shape[-1] // 2] print(x1.shape, x2.shape) return torch.cat((-x2, x1), dim=-1) def apply_rotary_pos_emb(q, k, cos, sin, offset: int = 0): cos = cos[..., offset : q.shape[-2] + offset, :] sin = sin[..., offset : q.shape[-2] + offset, :] q_embed = (q * cos) + (rotate_half(q) * sin) k_embed = (k * cos) + (rotate_half(k) * sin) return q_embed, k_embed q = torch.randn((2,32,2048,4096//32)) k = torch.randn((2,32,2048,4096//32)) v = torch.randn((2,32,2048,4096//32)) rotary_emb = RotaryEmbedding(4096//32) cos, sin = rotary_emb(v, seq_len=2048) q1, k1 = apply_rotary_pos_emb(q, k, cos, sin, offset=0) ### implementation in this repo from typing import Tuple def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) t = torch.arange(end, device=freqs.device) # type: ignore freqs = torch.outer(t, freqs).float() # type: ignore freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64 return freqs_cis def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): ndim = x.ndim assert 0 <= 1 < ndim assert freqs_cis.shape == (x.shape[1], x.shape[-1]) shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] return freqs_cis.view(*shape) def apply_rotary_emb( xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor, ) -> Tuple[torch.Tensor, torch.Tensor]: xq_ = 
torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) print("xq_",xq_.shape) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) print("freqs_cis",freqs_cis.shape) print("freqs_cis_torch.view_as_real",torch.view_as_real(freqs_cis).shape) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) print("xq_out",xq_out.shape) xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) return xq_out.type_as(xq), xk_out.type_as(xk) freqs_cis = precompute_freqs_cis( 4096 // 32, 2048 * 2 ) # q = torch.randn((2,32,2048,4096//32)) # k = torch.randn((2,32,2048,4096//32)) # v = torch.randn((2,32,2048,4096//32)) q = q.transpose(2,1) k = k.transpose(2,1) v = v.transpose(2,1) freqs_cis_ = freqs_cis[0:2048] print(f"freqs_cis.shape is {freqs_cis.shape}") q2, k2 = apply_rotary_emb(q, k, freqs_cis=freqs_cis_) for i in range(2048): print(i,q1.transpose(2,1)[0,i]==q2[0,i]) ``` The q1 and q2 exactly match only in position 0. That may have some difference. > > I get different results in the original PR implementation (which had licensing issues) and the current, modified implementation. > > Using the prompt `This is a story about` and > > ``` > > output = model.generate(input_ids, do_sample=False) > > ``` > > > > > > > > > > > > > > > > > > > > > > > > I get these outputs: > > ## Original: > > > This is a story about how we got here, and the challenges ahead. > > > The first time I saw a drone was in 2013 when my wife and I were visiting our daughter at college. We went to her dorm room one afternoon to drop off some things she needed for class that day, and as soon as we walked into the lobby, there it was: a quadcopter hovering just above us, its four propellers spinning lazily in the air. It wasn’t doing anything special—it was just flying around on its own, like an insect buzzing around your head. But it made me feel uneasy; I didn’t know what to make of this new technology. The next year, after I had moved back home from Washington, D.C., where I worked for a nonprofit, I started noticing them more often while walking through neighborhoods or hiking in the woods. Sometimes they would be parked > > > > > > ## Current: > > > This is a story about a man who was born in the 1920s and lived through the Great Depression, World War II, and the Korean War. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He > > > > > > Maybe I am doing something wrong, so can someone double check this? > > me too, there must be something wrong... <|||||>I had some trouble using the models for batch-mode inference. It seems that the padding token is missing. 
Below is a possible solution: ```python tokenizer.pad_token = "[PAD]" tokenizer.padding_side="left" batch = ["Yuchen Lin is a ", "Yuchen is a ", "I do not think Yuchen is a"] inputs = tokenizer(batch, return_tensors="pt", add_special_tokens=False, padding=True) generated = model.generate(inputs["input_ids"], max_new_tokens=5) print(tokenizer.decode(generated[1], skip_special_tokens=True)) ```<|||||>@marscrazy it cause `rotate_half` behavior is different than origial(facebookresearch/llama), it is alreay patch by @zphang. ``` def rotate_half(x): """Rotates half the hidden dims of the input.""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., x.shape[-1] // 2 :] return torch.cat((-x2, x1), dim=-1) ``` > > > > That may be caused by the different ways of rotary position embedding implementation. > > I tried to check the output of rotary embeddings between this pr and raw llama infer code as follows, > > ``` > ### rotary implementation in raw code > import torch > class RotaryEmbedding(torch.nn.Module): > def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): > super().__init__() > inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) > self.register_buffer("inv_freq", inv_freq) > > # Build here to make `torch.jit.trace` work. > self.max_seq_len_cached = max_position_embeddings > t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) > freqs = torch.einsum("i,j->ij", t, self.inv_freq) > # Different from paper, but it uses a different permutation in order to obtain the same calculation > emb = torch.cat((freqs, freqs), dim=-1) > self.cos_cached = emb.cos()[None, None, :, :] > self.sin_cached = emb.sin()[None, None, :, :] > > def forward(self, x, seq_len=None): > # x: [bs, num_attention_heads, seq_len, head_size] > # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. 
> if seq_len > self.max_seq_len_cached: > self.max_seq_len_cached = seq_len > t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype) > freqs = torch.einsum("i,j->ij", t, self.inv_freq) > # Different from paper, but it uses a different permutation in order to obtain the same calculation > emb = torch.cat((freqs, freqs), dim=-1).to(x.device) > self.cos_cached = emb.cos()[None, None, :, :].to(dtype=x.dtype) > self.sin_cached = emb.sin()[None, None, :, :].to(dtype=x.dtype) > return ( > self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), > self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), > ) > > > def rotate_half(x): > """Rotates half the hidden dims of the input.""" > x1 = x[..., : x.shape[-1] // 2] > x2 = x[..., : x.shape[-1] // 2] > print(x1.shape, x2.shape) > return torch.cat((-x2, x1), dim=-1) > > > def apply_rotary_pos_emb(q, k, cos, sin, offset: int = 0): > cos = cos[..., offset : q.shape[-2] + offset, :] > sin = sin[..., offset : q.shape[-2] + offset, :] > q_embed = (q * cos) + (rotate_half(q) * sin) > k_embed = (k * cos) + (rotate_half(k) * sin) > return q_embed, k_embed > q = torch.randn((2,32,2048,4096//32)) > k = torch.randn((2,32,2048,4096//32)) > v = torch.randn((2,32,2048,4096//32)) > rotary_emb = RotaryEmbedding(4096//32) > cos, sin = rotary_emb(v, seq_len=2048) > q1, k1 = apply_rotary_pos_emb(q, k, cos, sin, offset=0) > > ### implementation in this repo > from typing import Tuple > def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): > freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) > t = torch.arange(end, device=freqs.device) # type: ignore > freqs = torch.outer(t, freqs).float() # type: ignore > freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64 > return freqs_cis > > > def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): > ndim = x.ndim > assert 0 <= 1 < ndim > assert freqs_cis.shape == (x.shape[1], x.shape[-1]) > shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] > return freqs_cis.view(*shape) > > > def apply_rotary_emb( > xq: torch.Tensor, > xk: torch.Tensor, > freqs_cis: torch.Tensor, > ) -> Tuple[torch.Tensor, torch.Tensor]: > xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) > xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) > print("xq_",xq_.shape) > freqs_cis = reshape_for_broadcast(freqs_cis, xq_) > print("freqs_cis",freqs_cis.shape) > print("freqs_cis_torch.view_as_real",torch.view_as_real(freqs_cis).shape) > xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) > print("xq_out",xq_out.shape) > xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) > return xq_out.type_as(xq), xk_out.type_as(xk) > > freqs_cis = precompute_freqs_cis( > 4096 // 32, 2048 * 2 > ) > # q = torch.randn((2,32,2048,4096//32)) > # k = torch.randn((2,32,2048,4096//32)) > # v = torch.randn((2,32,2048,4096//32)) > q = q.transpose(2,1) > k = k.transpose(2,1) > v = v.transpose(2,1) > freqs_cis_ = freqs_cis[0:2048] > print(f"freqs_cis.shape is {freqs_cis.shape}") > q2, k2 = apply_rotary_emb(q, k, freqs_cis=freqs_cis_) > > for i in range(2048): > print(i,q1.transpose(2,1)[0,i]==q2[0,i]) > ``` > > The q1 and q2 exactly match only in position 0. That may have some difference. > > > > I get different results in the original PR implementation (which had licensing issues) and the current, modified implementation. 
> > > Using the prompt `This is a story about` and > > > ``` > > > output = model.generate(input_ids, do_sample=False) > > > ``` > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I get these outputs: > > > ## Original: > > > > This is a story about how we got here, and the challenges ahead. > > > > The first time I saw a drone was in 2013 when my wife and I were visiting our daughter at college. We went to her dorm room one afternoon to drop off some things she needed for class that day, and as soon as we walked into the lobby, there it was: a quadcopter hovering just above us, its four propellers spinning lazily in the air. It wasn’t doing anything special—it was just flying around on its own, like an insect buzzing around your head. But it made me feel uneasy; I didn’t know what to make of this new technology. The next year, after I had moved back home from Washington, D.C., where I worked for a nonprofit, I started noticing them more often while walking through neighborhoods or hiking in the woods. Sometimes they would be parked > > > > > > > > > ## Current: > > > > This is a story about a man who was born in the 1920s and lived through the Great Depression, World War II, and the Korean War. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He was a man who was a part of the greatest generation. He > > > > > > > > > Maybe I am doing something wrong, so can someone double check this? > > > > > > me too, there must be something wrong... <|||||>Hi, just want to report that I can't seem to clear the model from the VRAM once it's loaded, tried del, torch.cuda.empty_cache(), gc.collect, nothing seems to work. Verified in a simple script to load and delete it, so it's not some external factors as far as I can tell.<|||||>Hi, it's a great job! I have read the code provided by meta for inference and also the code for huggingface. I think the module about positional encoding needs to consider the following question. The meta uses the fairscale framework for training. The variables `wk` and `wq` in the `Attention` module use the class `ColumnParallelLinear` in [fairscale](https://github.com/facebookresearch/fairscale/blob/main/fairscale/nn/model_parallel/layers.py#L257). The shape of the `weight` of this class is `(out_dim, in_dim)`, which is the same as `torch.nn.Linear`. When generating `query_states` and `key_states` from `hidden_state`, the actual operation is `x * (weight^T)`, I am not sure whether the `permute()` function used by the author in `convert_llama_weights_to_hf.py` considers to this point. I will post my doubts here first, and I will try it with code later.<|||||>> Hi, it's a great job! I have read the code provided by meta for inference and also the code for huggingface. I think the module about positional encoding needs to consider the following question. The meta uses the fairscale framework for training. 
The variables `wk` and `wq` in the `Attention` module use the class `ColumnParallelLinear` in [fairscale](https://github.com/facebookresearch/fairscale/blob/main/fairscale/nn/model_parallel/layers.py#L257). The shape of the `weight` of this class is `(out_dim, in_dim)`, which is the same as `torch.nn.Linear`. When generating `query_states` and `key_states` from `hidden_state`, the actual operation is `x * (weight^T)`, I am not sure whether the `permute()` function used by the author in `convert_llama_weights_to_hf.py` considers to this point. I will post my doubts here first, and I will try it with code later. The following is my code, and the results after running are different except for the shape. Of course, it is also possible that there is a problem with my code. ```python import torch import torch.nn.functional as F in_dim = 8 out_dim = 8 n_head = 1 batch_size = 1 head_dim = in_dim // n_head seq_len = 2 def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): # x: [bs, seq_len, n_local_heads, head_dim/2] ndim = x.ndim assert 0 <= 1 < ndim # [seq_len, head_dim] == (seq_len, head_dim/2) assert freqs_cis.shape == (x.shape[1], x.shape[-1]) shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] return freqs_cis.view(*shape) # -> (1, seq_len, 1, head_dim/2) def meta(w_q, hidden_state, theta=10000.0): xq = F.linear(hidden_state, w_q) freqs = 1.0 / (theta ** (torch.arange(0, head_dim, 2)[: (head_dim // 2)].float() / head_dim)) t = torch.arange(seq_len) freqs = torch.outer(t, freqs).float() freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # [seq_len, head_dim/2] xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) print("meta:") print(xq_out) print(xq_out.shape) # ---------------------------------------------------------------- """store cos(mθ) and sin(mθ)""" class LlamaRotaryEmbedding(torch.nn.Module): def __init__(self, dim, max_position_embeddings=seq_len, base=10000, device=None): super().__init__() """dim == head_dim""" # inv_freq.shape == (dim/2) inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) self.register_buffer("inv_freq", inv_freq) # Build here to make `torch.jit.trace` work. self.max_seq_len_cached = max_position_embeddings # t.shape == (seq_len) t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) # t*inv_freq^T # freqs.shape == (seq_len, dim/2) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation # emb.shape == (seq_len, dim) emb = torch.cat((freqs, freqs), dim=-1) # sim_cached == cos_cached.shape == (1, 1, seq_len, dim) self.cos_cached = emb.cos()[None, None, :, :] self.sin_cached = emb.sin()[None, None, :, :] def forward(self, x, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. 
return ( self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype, device=x.device), ) """used by apply_rotary_pos_emb()""" def rotate_half(x): """Rotates half the hidden dims of the input.""" """x: (bs, num_head, seq_len, head_dim)""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., x.shape[-1] // 2 :] return torch.cat((-x2, x1), dim=-1) def apply_rotary_pos_emb(q, cos, sin, offset: int = 0): """ :param q: Query (bs, num_head, seq_len, head_dim) :param k: Key (bs, num_head, seq_len, head_dim) :param cos: (1, 1, seq_len, head_dim) :param sin: (1, 1, seq_len, head_dim) :param offset: 0 :return: """ cos = cos[..., offset : q.shape[-2] + offset, :] sin = sin[..., offset : q.shape[-2] + offset, :] q_embed = (q * cos) + (rotate_half(q) * sin) return q_embed def hf(w_q, hidden_state): def permute(w): return w.view(n_head, in_dim // n_head // 2, 2, in_dim).transpose(1, 2).reshape(in_dim, in_dim) w_q = permute(w_q) # convert_llama_weights_to_hf.py linear = torch.nn.Linear(in_dim, n_head*head_dim) linear.weight.data = w_q xq = linear(hidden_state) rotary_emb = LlamaRotaryEmbedding(head_dim) cos, sin = rotary_emb(xq, seq_len=seq_len) query_states = apply_rotary_pos_emb(xq, cos, sin, offset=0) print("huggingface:") print(query_states) print(query_states.shape) if __name__ == '__main__': w_q = torch.rand([out_dim, head_dim]) hidden_state = torch.rand([batch_size, seq_len, n_head, head_dim]) meta(w_q, hidden_state) hf(w_q, hidden_state) ```<|||||>Inference is OK, however the result seems not understandable. ![image](https://user-images.githubusercontent.com/26135691/225562507-7072f4c8-6e46-4e0b-8eea-a934d6f0c4e9.png) <|||||>Thanks again for all your work @zphang ! Merging this now and we can continue iterating if there are standing bugs in followups PRs (as we discussed, will ping you on a PR for the conversion script and I have a couple more nits but don't want to hold the merge for them). For anyone still having an issue with the model, I encourage you to create a new issue with a clear reproducer (after checking your bug still exists on the main branch of course). Thanks in advance!<|||||>For some reason, GitHub is not marking this PR as closed even if it has been merged (it sees it merged twice cause I clicked the button twice...). Closing manually, but the (first) [merge commit](https://github.com/huggingface/transformers/commit/0041be5b3d1b9a5e1443e1825d7d80f6dfadcdaa) is here.<|||||>@ longzhang418 > Hello, I have a question about training llama. Does the current version of training not support the zero strategy of deepspeed? I added the deepspeed configuration file, but it doesn't seem to take effect. Looking for your reply, thank you~ Hi, we success use accelerate with ds_3_zero3_offload policy to train llama-13B in 4 A100s with memory 80GB(batch size=16 length=256 works for us ), firstly transfer 13B weights into hf weights format, then our accelerate config is in below: > compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 1.0 offload_optimizer_device: 'cpu' offload_param_device: none zero3_init_flag: true zero3_save_16bit_model: true zero_stage: 3 distributed_type: DEEPSPEED downcast_bf16: 'no' dynamo_backend: 'NO' fsdp_config: {} machine_rank: 0 main_training_function: main megatron_lm_config: {} mixed_precision: 'fp16' num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true use_cpu: false The fsdp should works too, as the impl in standford repo. 
significantly, we test 13B with ds0/ds1/ds2 on one A100 all failed because of CUDA memory. Finally we failed on test some model pipeline methods, eg. fairscale pp, it's the next thing we need to do. <|||||>What's the motivation for the three special tokens `tokenizer.pad_token, tokenizer.bos_token, tokenizer.eos_token = '' ` [when converting the llama tokenizer?](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L243)<|||||>> What's the motivation for the three special tokens `tokenizer.pad_token, tokenizer.bos_token, tokenizer.eos_token = '' ` [when converting the llama tokenizer?](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L243) Yeah, I am also quite confused. And I noticed even with `add_special_tokens=True`, LlamaTokenizer will not append an EOS token explicitly. <|||||>Just started working on a fix for the tokenizer conversion, will link the PR asap: #22402<|||||>what's the version of huggingface-transformers for LLaMa ? <|||||>When I am converting the llama to hf format (using convert_llama_weights_to_hf.py), each part of layer is the same size as the whole file is anyone who met the same issue before? ```bash python convert_llama_weights_to_hf.py --input_dir models/ --model_size 7B --output_dir llama/fp32/ ``` the subfiles (it is processing): ```bash -rw-rw-r--. 1 wyd anaconda 26G Apr 12 15:55 pytorch_model-1-of-33.bin -rw-rw-r--. 1 wyd anaconda 26G Apr 12 15:59 pytorch_model-2-of-33.bin -rw-rw-r--. 1 wyd anaconda 129M Apr 12 15:59 pytorch_model-3-of-33.bin ``` the raw file: ```bash -rw-rw-r--. 1 wyd anaconda 26G Apr 12 15:50 consolidated.00.fp32.pth ``` it's going right when i converted the llama to fp16.<|||||>I think there might be something off with this model and tokenizer https://discuss.huggingface.co/t/llamatokenizerfast-returns-token-type-ids-but-the-forward-pass-of-the-llamamodel-does-not-receive-token-type-ids/42431 could someone take a look?<|||||>Hey! Yes this is fixed by #24042
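For anyone arriving here after running the conversion, a minimal loading sketch (the checkpoint path is an assumption, use whatever `--output_dir` you passed to `convert_llama_weights_to_hf.py`; `device_map="auto"` additionally requires `accelerate`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# "llama/7B-hf" is a placeholder for the conversion script's --output_dir
tokenizer = AutoTokenizer.from_pretrained("llama/7B-hf")
model = AutoModelForCausalLM.from_pretrained(
    "llama/7B-hf", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("This is a story about", return_tensors="pt").to(model.device)
# Greedy decoding, matching the comparison discussed earlier in this thread
output = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```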
transformers
21,954
closed
`from_pretrained` broken 4.26.1
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.0a0+d0d6b1f (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @amyeroberts @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction code block ``` from transformers import CLIPTokenizer, CLIPTextModel import torch tokenizer=CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5"), text_encoder=CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda") ``` output ``` Traceback (most recent call last): File "diffusers-oneflow/examples/poc.py", line 17, in <module> tokenizer=CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5"), File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'runwayml/stable-diffusion-v1-5'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'runwayml/stable-diffusion-v1-5' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer. ``` ### Expected behavior Expect to be able to consume and not throw
03-04-2023 22:24:03
03-04-2023 22:24:03
Hey! If you go to `hf.co/runwayml/stable-diffusion-v1-5` you will see that there are no `tokenizer.json` or any files related to tokenization. However, there is a `tokenizer` folder. The new version of transformers supports `subfolder`. The model on the hub was modified, so the command that you are looking for is probably:
```python
>>> from transformers import AutoTokenizer
>>> AutoTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
```
which worked for me 😉 <|||||>oh yes, this is great, thank you!
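The same `subfolder` argument should also work for the other components of that repo, e.g. the text encoder lives under `text_encoder/` (a sketch adapting the snippet from the issue; adjust the subfolder names to what the repo actually contains):

```python
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)
text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
).to("cuda")  # drop .to("cuda") if no GPU is available
```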
transformers
21,953
closed
Disable DDP for neuron
# What does this PR do? This PR is to disable DDP when using neuron. Currently, we are overwriting the _wrap_model function in trainer.py to disable DDP. We want to avoid overwriting by disabling DDP when using neuron. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-04-2023 18:59:56
03-04-2023 18:59:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,952
closed
docs: improve clarity for language modeling
# What does this PR do? - Improve clarity of tasks/language_modeling and tasks/masked_language_modeling docs. - In example preprocessing, remove `truncation=True` parameter from `tokenizer` so texts aren't truncated before being concatenated and chunked. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, @stevhliu
03-04-2023 17:27:02
03-04-2023 17:27:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,951
closed
TimeSeriesTransformerModel - 'features' Is 'NoneType'
### System Info Sat Mar 4 10:44:51 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 527.41 Driver Version: 527.41 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 On | N/A | | 30% 37C P8 18W / 350W | 550MiB / 24576MiB | 27% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1580 C+G ...y\ShellExperienceHost.exe N/A | | 0 N/A N/A 5332 C+G ...e\PhoneExperienceHost.exe N/A | | 0 N/A N/A 7560 C+G ...logioptionsplus_agent.exe N/A | | 0 N/A N/A 7672 C+G ...5n1h2txyewy\SearchApp.exe N/A | | 0 N/A N/A 8268 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 11324 C+G ...cw5n1h2txyewy\LockApp.exe N/A | | 0 N/A N/A 12316 C+G ...(x86)\AnyDesk\AnyDesk.exe N/A | | 0 N/A N/A 12908 C+G ...ge\Application\msedge.exe N/A | | 0 N/A N/A 13064 C+G ...2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 13764 C+G ...3d8bbwe\CalculatorApp.exe N/A | | 0 N/A N/A 14880 C+G ...lPanel\SystemSettings.exe N/A | +-----------------------------------------------------------------------------+ ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, We're trying to train TimeSeriesTransformerModel, but it seems that there's a variable called 'features' that's not getting assigned. However, we can't figure out why is this happening. Our code: ` # Initializing a default Time Series Transformer configuration configuration = TimeSeriesTransformerConfig(prediction_length = 327, lags_sequence = [0, 0, 0]) # Randomly initializing a model (with random weights) from the configuration model = TimeSeriesTransformerForPrediction(configuration) # Accessing the model configuration configuration = model.config #we dont know if passing the data as a dataframe instead if a tesndor would work #currently model.train() is throwing an error, maybe we need to use a gpu? 
TODO # Setting the model to training mode model.train() # Defining the loss function and optimizer loss_fn = torch.nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) # Training loop for epoch in range(100): for batch in dataloader: # Forward pass outputs = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=None, static_categorical_features=None, static_real_features=None, future_values=batch["future_values"], future_time_features=batch["future_time_features"], ) loss = loss_fn(outputs, batch) # Backward pass and optimization optimizer.zero_grad() loss.backward() optimizer.step() # Printing the training loss if (epoch + 1) % 10 == 0: print(f"Epoch [{epoch + 1}/100], Loss: {loss.item()}") ` We're getting this error: `Traceback (most recent call last): File "D:\Final Project\fMRI_Ariel_Lital\train.py", line 58, in <module> outputs = model( File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1813, in forward outputs = self.model( File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1626, in forward transformer_inputs, scale, static_feat = self.create_network_inputs( File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1532, in create_network_inputs embedded_cat = self.embedder(static_categorical_features) File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 250, in forward [ File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 251, in <listcomp> embed(cat_feature_slice.squeeze(-1)) AttributeError: 'NoneType' object has no attribute 'squeeze'` Here is the source code where the error occurs - we noticed 'features' is NoneType but don't know why: `class FeatureEmbedder(nn.Module): def __init__(self, cardinalities: List[int], embedding_dims: List[int]) -> None: super().__init__() self.num_features = len(cardinalities) self.embedders = nn.ModuleList([nn.Embedding(c, d) for c, d in zip(cardinalities, embedding_dims)]) def forward(self, features: torch.Tensor) -> torch.Tensor: if self.num_features > 1: # we slice the last dimension, giving an array of length # self.num_features with shape (N,T) or (N) cat_feature_slices = torch.chunk(features, self.num_features, dim=-1) else: cat_feature_slices = [**features**] return torch.cat( [ embed(cat_feature_slice.squeeze(-1)) for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices) ], dim=-1, ) ` Would appreciate your help. 
@ArthurZucker and @younesbelkada
[train.txt](https://github.com/huggingface/transformers/files/10887890/train.txt)
### Expected behavior We would like to train a TimeSeriesTransformerModel for forecasting on tabular data.
03-04-2023 08:53:31
03-04-2023 08:53:31
@LtlSh can you kindly try with the main branch of transformers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,950
closed
auto_find_batch_size should say what batch size it is using
### Feature request When using `auto_find_batch_size=True` in the trainer, I believe it identifies the right batch size, but it doesn't log it to the console anywhere. It would be good if it logged the batch size it ends up using. ### Motivation I'd like to know what batch size it is using, because then I will know roughly how big a batch can fit in memory, and this info would be useful elsewhere. ### Your contribution N/A
03-04-2023 08:53:25
03-04-2023 08:53:25
cc @muellerzr <|||||>Also, it would be good if auto find batch size also worked for the eval batch size? Otherwise for the eval batch size I still have to guess a few times and hope I don't run out of memory?<|||||>Thanks! Solved with https://github.com/huggingface/transformers/pull/23800<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
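Until you are on a version that includes #23800, one workaround is to read the value back after training. A sketch that relies on `Trainer._train_batch_size`, which is an internal attribute and may change between releases; `model` and `train_dataset` stand in for whatever your training script already builds:

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    auto_find_batch_size=True,
    per_device_train_batch_size=64,  # starting point; halved on each OOM
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# Internal attribute: holds the batch size the auto-finder settled on
print(f"Batch size actually used: {trainer._train_batch_size}")
```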
transformers
21,949
closed
just testing
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-04-2023 05:20:09
03-04-2023 05:20:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21949). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,948
closed
test pull request for tokengt
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-04-2023 05:06:37
03-04-2023 05:06:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21948). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,947
closed
Pre-training language model with Translation Language Modelling (TLM) objective
### Feature request For large-scale pre-training, the MLM objective is heavily used, e.g. BERT (Devlin et al., 2018), XLM-R (Conneau et al., 2019). The resources available from Hugging Face cover training a language model (LM) with the MLM or CLM objectives (i.e. https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). To recreate XLM (Lample and Conneau, 2019) I wish to pre-train my own language model using both the MLM and TLM objectives. Please advise on how to do this with Hugging Face transformers. Thank you. ### Motivation MLM + TLM is a common pre-training objective for language model training. Therefore, to further improve these models, we need to first train using the MLM + TLM objectives. In addition to this, if I need to customize the masking to cover noun terms only, or verb terms only, please let me know whether this is possible. ### Your contribution I would appreciate it if pre-training on the TLM objective were also provided.
03-04-2023 02:14:27
03-04-2023 02:14:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>If this is supported in the codebase please provide the steps or refer to an available resource.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
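There is no ready-made TLM data collator in the library, but the objective can be approximated by packing a translation pair into one sequence and applying the standard MLM collator. A rough sketch only: the original XLM recipe also resets position ids per sentence and adds language embeddings, which this does not reproduce, and custom masking (e.g. nouns only) would require building the labels yourself instead of using the collator's random masking:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

def make_tlm_example(src_text: str, tgt_text: str) -> dict:
    # Concatenate the parallel sentences so masked tokens in one language
    # can be predicted from context in the other language.
    return tokenizer(src_text, tgt_text, truncation=True, max_length=256)

pair = make_tlm_example("The weather is nice today.", "Il fait beau aujourd'hui.")
batch = collator([pair])
print(batch["input_ids"].shape, batch["labels"].shape)
```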
transformers
21,946
closed
Fix gradient checkpointing bug in Roformer
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 20:17:40
03-03-2023 20:17:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,945
closed
Fix gradient checkpointing bug in Rembert
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 20:14:57
03-03-2023 20:14:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,944
closed
Fix gradient checkpointing bug in Pegasus
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 20:12:40
03-03-2023 20:12:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,943
closed
Fix gradient checkpointing bug in OPT
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 20:10:11
03-03-2023 20:10:11
_The documentation is not available anymore as the PR was closed or merged._
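For context, the fix applied here (and in the sibling PRs for Pegasus, RemBERT and RoFormer) follows the usual pattern of warning and forcing `use_cache=False` when gradient checkpointing is active during training. A rough sketch of the user-facing scenario these PRs address, based on my reading of issue #21737 rather than an exact reproduction:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Training-style setup that used to conflict with generate()'s use_cache=True
model.gradient_checkpointing_enable()
model.train()

inputs = tokenizer("Gradient checkpointing test:", return_tensors="pt")
# With the fix, the model warns and falls back to use_cache=False so that
# generation with checkpointing enabled behaves consistently
output = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```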
transformers
21,942
closed
[examples/speech-recognition] Add SpecAugment to run_speech_recognition_seq2seq.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hello 👋, In this PR, I tried to add SpecAugment to run_speech_recognition_seq2seq.py training example since it's asked in https://github.com/huggingface/transformers/pull/21298. I tried to not impact the training of other seq2seq models using this script. But I might still have missed something. PS: Also removed useless argument `text_column` :) Thanks in advance! ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? cc @ArthurZucker @sanchit-gandhi @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-03-2023 20:09:30
03-03-2023 20:09:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, Thanks for the review ! I've removed the SpecAugment arguments from `ModelArguments` and left some default values as asked. Also secured the attribute access in the line 447 to define `return_attention_mask` :)<|||||>I think it's okay to have the whisper-specific changes as they are in this PR. We have similar things in `run_translation` for some specific models as well.<|||||>Thanks for the clarification @sgugger and @bofenghuang for the contribution! Merging in that case 🤗
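For anyone wanting to switch SpecAugment on outside this script, it presumably comes down to the Whisper config flags introduced in #21298 (a sketch; the probability values are arbitrary, and the feature extractor must be called with `return_attention_mask=True` so time masks are only sampled over real frames, as done in the script):

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    apply_spec_augment=True,  # turn on SpecAugment for training-time feature masking
    mask_time_prob=0.05,      # fraction of time steps chosen as mask-span starts
    mask_feature_prob=0.0,    # keep feature-axis masking off in this sketch
)
print(model.config.apply_spec_augment, model.config.mask_time_prob)
```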
transformers
21,941
closed
Adding Type Hints to TF_Pegasus model
# What does this PR do? Added type hints for remaining `call()` functions for the Pegasus model (changes made only in the models/pegasus/modeling_tf_pegasus.py file). Fixes # [(16059)](https://github.com/huggingface/transformers/issues/16059) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [approval](https://github.com/huggingface/transformers/issues/16059#issuecomment-1441895896) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1
03-03-2023 19:20:54
03-03-2023 19:20:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @mollerup23, mostly looks good! One thing to watch out for is that in some cases the default value of the argument has been changed. It's easy to see if you look in the GitHub "Files changed" tab (see the image - `return_dict` had its default argument changed) ![image](https://user-images.githubusercontent.com/12866554/223161978-7abf2912-865d-4975-80bb-f38899440350.png) If you fix the instances where that happened and double-check that it's all okay in the Files Changed tab, we should be good to go!<|||||>Hi @Rocketknight1, I updated and committed again. Hopefully these fixes help, let me know if there is anything else I should do!
transformers
21,940
closed
[CI] Fix ci
# What does this PR do? A typo made during the full cleanup of the code was making 2 tests fail. 2 other slow tests are failing, but they were also failing prior to #20211.
03-03-2023 18:47:51
03-03-2023 18:47:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>> 2 other slow tests are failing, but they were also failing prior to #20211 . Could you share the names of these 2 tests. I can check on CI runners. I did it once on Friday for one test you mentioned, but let's make sure. <|||||>There is : - `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_with_box_refine_two_stage` - `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_equivalence_cpu_gpu` - `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head` Ran them on the CI runner an they are all green so LGTM.
transformers
21,939
closed
Add TF contrastive image text finetuning example
This PR adds a TF port of the PyTorch example for finetuning the `TFVisionTextDualEncoderModel` class. Functionality is largely the same, but I used a `tf.data` pipeline to efficiently stream images instead of `torchvision`. I also added the ability to specify separate image/text models with arguments, whereas in the PyTorch example you have to create the dual encoder with a separate script. I also caught a small bug in the original model code while writing this - loss is a scalar rather than having shape `(1,)`. That's fixed in here too!
03-03-2023 18:27:48
03-03-2023 18:27:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Update: Woah! The argument names to `PushToHubCallback` across our examples became outdated and tests weren't picking that up. I'm fixing it in this PR too.
transformers
21,938
closed
[Whisper] Fix feature normalization in `WhisperFeatureExtractor`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hello @ArthurZucker @sanchit-gandhi , In this PR, I tried to fix the feature normalization in `WhisperFeatureExtractor` which is actually not used. The line 354 takes the `input_features` from the line 340. In addition, the `zero_mean_unit_var_norm` expects a `padded_inputs["attention_mask"]` also in the sample level (48000) as `padded_inputs["input_features"]`. IMO it should not be rescaled in the line 344 (it's also repeated in the line 363). Please let me know what you think :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-03-2023 17:53:13
03-03-2023 17:53:13
_The documentation is not available anymore as the PR was closed or merged._
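For reference, the normalization in question is a per-example zero-mean / unit-variance scaling computed over the non-padded samples. A standalone sketch of what `zero_mean_unit_var_norm` computes, not the exact library implementation (which works batch-wise):

```python
import numpy as np

def zero_mean_unit_var_norm(values: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Normalize a padded 1-D waveform using only the real (non-padded) samples."""
    real = values[attention_mask == 1]
    mean, var = real.mean(), real.var()
    normalized = (values - mean) / np.sqrt(var + 1e-7)
    # Keep the padded region at zero so it does not leak into the model
    normalized[attention_mask == 0] = 0.0
    return normalized

audio = np.concatenate([np.random.randn(16000).astype(np.float32), np.zeros(8000, dtype=np.float32)])
mask = np.concatenate([np.ones(16000, dtype=np.int64), np.zeros(8000, dtype=np.int64)])
print(zero_mean_unit_var_norm(audio, mask).shape)
```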
transformers
21,937
closed
Whisper does not respect `config.forced_decoder_ids`
### System Info - `transformers` version: 4.27.0.dev0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu) - Jax version: 0.4.1 - JaxLib version: 0.4.1 ### Who can help? @sanchit-gandhi @ArthurZucker ### Reproduction Previously, the recommended method for setting the language and transcription for Whisper inference was by setting the `config.forced_decoder_ids`. However, this method no longer works on the main branch, for example if we force the model to generate in French: ```python from transformers import WhisperProcessor, WhisperForConditionalGeneration from datasets import load_dataset # load model and processor processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"] input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features # set the forced ids model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") # generate token ids predicted_ids = model.generate(input_features) # decode token ids to text transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) print(transcription) ``` **Print Output**: ``` ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] ``` ### Expected behavior Language token should correspond to the forced decoder token id (e.g. `"<|fr|>"` in this example). This was the case in previous transformer versions, so there has been a breaking change that we need to remedy.
03-03-2023 16:57:35
03-03-2023 16:57:35
This is based on the behavior that we enforce through the generate function. If there is a `generation_config`, it will be used and takes priority over the `config`. There should be a warning deprecating the control of generation through the `config`. See [here](https://github.com/ArthurZucker/transformers/blob/main/src/transformers/generation/utils.py#LL538C17-L538C17). This is a minor breaking change but should also be addressed with a deprecation cycle.
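In practice that means the language/task override should now target the generation config, or be passed to `generate` directly. A sketch of both options, reusing `model`, `processor` and `input_features` from the reproduction above:

```python
# Option 1: set it on the generation config, which takes priority over model.config
model.generation_config.forced_decoder_ids = processor.get_decoder_prompt_ids(
    language="french", task="transcribe"
)
predicted_ids = model.generate(input_features)

# Option 2: pass it per call, overriding whatever is stored on the model
predicted_ids = model.generate(
    input_features,
    forced_decoder_ids=processor.get_decoder_prompt_ids(language="french", task="transcribe"),
)
```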
transformers
21,936
closed
Whisper breaks on poor quality speech audio
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I don't know if it's a bug, but it's definitely not an expected behaviour for me. Also I saw a thread with such behaviour where @Narsil said that "the model is repeating itself", but I can't find it right now, I'll update the issue when I do. To recognize audio file I'm using script that I found in one of the threads here on github [link](https://colab.research.google.com/drive/1Qz9hUL3Z3SxHLUt7f4vuzZG7KEgV0ofk?usp=share_link) ``` processor = WhisperProcessor.from_pretrained("openai/whisper-small") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(task="transcribe") input_speech, sr = audio2numpy.audio_from_file(file) input_features = processor(input_speech, return_tensors="pt", sampling_rate=16000).input_features predicted_ids = model.generate(input_features, max_length=model.config.max_length, repetition_penalty=1) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) ``` And it works okay if the speech is clear and the utterance is detected good, but when speaker talks fast or not legibly enough, or if there's little silence in audio, then the transcription becomes ugly. It looks like: "Привіт, хороша погода але але але але але але але але але але але але але але але але" Currently I'm using only ukrainian files so I'm not aware if it happens in other languages. ### Expected behavior The text is recognized along the whole audio file without breaking
03-03-2023 16:37:16
03-03-2023 16:37:16
cc @ArthurZucker and @sanchit-gandhi <|||||>Hey! Did you get similar results using the original `Whisper` model? Most probably this is just the model not being very good. <|||||>> Hey! Did you get similar results using the original `Whisper` model? Most probably this is just the model not being very good. No, I don't face such problem with original `Whisper`<|||||>OK! There can be a few things to try, but if you can share an audio file with me it would help a lot! <|||||>@frankiedrake Have you tried using `return_timestamps` ? When experimenting I found that the default for openai is using timestamps (even if not shown) and that the model seems to perform better than without.<|||||>> Sorry for a noob question, where should I put this parameter? I tried to add `no_timestamps=False` to `processor.get_decoder_prompt_ids` call and also to the three other methods call, but result didn't change. Btw it works well with `medium` model, but it's too heavy for my setup<|||||>> OK! There can be a few things to try, but if you can share an audio file with me it would help a lot! Unfortunately, I cannot share it because of NDA, but I'll try to record one myself<|||||>`generate(..., return_timestamps=True)` to get the proper generation mode. That's correct right @ArthurZucker <|||||>> return_timestamps I tried this, but got an error, that model's `forward` method isn't aware of this parameter ```ValueError: The following `model_kwargs` are not used by the model: ['return_timestamps'] (note: typos in the generate arguments will also show up in this list)```<|||||>`pipe = pipeline(..., return_timestamps=True)` should work though.<|||||>> @frankiedrake Have you tried using `return_timestamps` ? When experimenting I found that the default for openai is using timestamps (even if not shown) and that the model seems to perform better than without. I tested all variants, result is the same<|||||>Without being able to reproduce it's really hard. Could you dive to the level of logits and figure out any potential differences ? I'm pretty sure it should come down to a configuration difference in the end, but if *we* can't reproduce, it'd be hard to understand. <|||||>> Without being able to reproduce it's really hard. Could you dive to the level of logits and figure out any potential differences ? > > I'm pretty sure it should come down to a configuration difference in the end, but if _we_ can't reproduce, it'd be hard to understand. Okay, thank you for suggestions, I'll try to look at it, and also will try to find an audio I could share with you<|||||>You are probably not using `main`! The timestamps were not part of the initial release 😉 <|||||>@ArthurZucker Correct! I installed the library from main branch and the transcription is now better with one file, but remains the same with another, probably I can tune the parameters further to make it work even better<|||||>Glad that I could help 😉 <|||||>> OK! There can be a few things to try, but if you can share an audio file with me it would help a lot! Guys, I was able to record an [audio](https://github.com/frankiedrake/demo/blob/master/whisper_test.wav) that reproduces a problem, but it wasn't so easy, so the text I recorded doesn't actually exists 😁, anyway I'd expect the model gives some text instead of repeating a single word multiple times. Also, I noticed that if I don't specify language manually the model seems not to break, in my case it detected Chinese language and emitted some readable text. 
Output of Ukrainian: `Вереси, єдине м'якості, єдине ісцемо, і два, і бере, і бере, і бере, і бере, і бере, і бере, і бере, і бере, і бере, і бере.` The original whisper produces: `Веріості, є рівні, є мільйорівні, є сьогодні, є два, є бере, є бере, чіжено, чіжено, нервно, нервно.` <|||||>> f I don't specify language manually the model seems not to break Can you share how you specify language in both `openai` and `transformers` ? The difference is likely coming from there.<|||||> > > f I don't specify language manually the model seems not to break > > Can you share how you specify language in both `openai` and `transformers` ? The difference is likely coming from there. I really doubt that it's related because it reproduces even if I don't specify the language in other audios<|||||>Seems like it's a known [issue](https://github.com/microsoft/DialoGPT/issues/45#issuecomment-680087019) with transformers models, and tuning `temperature` and `repetition_penalty` params of `generate` method helps, I was able to stop the model from repeating a single word, text now looks much much better, but seems I noticed a small cut at the end of the audio, will investigate a bit deeper. I read about `repetition_penalty` before but I didn't succeed, maybe the conjunction with the `temperature` is significant. Maybe you can shed some light on why these parameters are so important?<|||||>> Maybe you can shed some light on why these parameters are so important? LMs are know to hallucinate by repeating tokens, or several tokens. I don't think there's a good consensus on why it's the case, but it's a very well know issue on LMs. Adding penalties does help, but it's a clutch in my very personal opinion. Maybe the openai defaults are still different the ones we have <|||||>Do we agree the openai test you're doing is simply ``` whisper whisper_test.wav --model small ``` And get this ``` Detecting language using up to the first 30 seconds. Use `--language` to specify the language Detected language: Chinese [00:00.420 --> 00:02.000] 我問他們 [00:02.000 --> 00:03.320] 他們在那邊 [00:03.320 --> 00:05.280] 順便他們等著過去 [00:05.280 --> 00:06.980] 還要去 [00:06.980 --> 00:08.260] 剛剛說他們背後 ``` right ? <|||||>I have the following result ``` Detecting language using up to the first 30 seconds. Use `--language` to specify the language Detected language: Chinese [00:00.000 --> 00:01.700] 要生一年年老了 [00:01.700 --> 00:03.280] 一生不一年老 [00:03.480 --> 00:05.960] 練一陣子 [00:05.960 --> 00:06.700] 熱情不二 [00:06.700 --> 00:07.620] 謝謝大家 ``` <|||||>Can you share anything reproducible ? Right now it's a back and forth and we can't reproduce anything on our end. Please share a clear (small) script, that I can copy past that should reproduce the issue on `transformers@main` and `whisper@main` otherwise, it's going to be too tedious for us to investigate.<|||||>> Can you share anything reproducible ? Right now it's a back and forth and we can't reproduce anything on our end. > > Please share a clear (small) script, that I can copy past that should reproduce the issue on `transformers@main` and `whisper@main` otherwise, it's going to be too tedious for us to investigate. But I shared a file with you, what exactly you can't reproduce?<|||||>I don't have the same output for neither `openai/whisper` nor `transformers` on your file. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
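To gather the workarounds discussed above in one place, a sketch using the pipeline on `transformers@main` (the `repetition_penalty` value is an arbitrary starting point, not a recommended default):

```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,
    return_timestamps=True,  # timestamp decoding tends to reduce the repeated-token loops
)

result = pipe(
    "whisper_test.wav",
    generate_kwargs={"repetition_penalty": 1.2},
)
print(result["text"])
for chunk in result.get("chunks", []):
    print(chunk["timestamp"], chunk["text"])
```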
transformers
21,935
closed
Update feature selection in to_tf_dataset
# What does this PR do? Updates feature selection to ensure returned dataset structure is consistent after merging of datasets PR: https://github.com/huggingface/datasets/pull/5602. The PR makes it possible to return a TF dataset with a dict structure even if only a single feature is selected. Compatibility with this version of datasets was run with [this commit](https://github.com/huggingface/transformers/pull/21935/commits/b64204bc1093edd7e3666ad76354fa09405cf4ec) and had a [successful run](https://github.com/huggingface/transformers/actions/runs/4555221658/jobs/8034018899). Note: In all the cases here, the examples were tested with and without these updates. The models would successfully train with both the new and old dataset structures. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
03-03-2023 16:29:07
03-03-2023 16:29:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @MKhalusova Reference on updated API for selecting features from TF datasets<|||||>Thanks for updating the docs too! Looks neat :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
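For reference, a sketch of the call pattern after the `datasets` change (`tokenized_ds` and `data_collator` stand in for objects built earlier in the example scripts):

```python
tf_train_dataset = tokenized_ds.to_tf_dataset(
    columns=["input_ids"],  # with the datasets change, even a single column comes back as a dict
    label_cols=["labels"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
print(tf_train_dataset.element_spec)
```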
transformers
21,934
closed
Faster `Skipping the first batches` in Trainer
### Feature request Improve the speed of skipping the first batches in `trainer.py`. ### Motivation Skipping batches should be fast. ``` Skipping the first batches: 76%|███████▌ | 93017/122500 [6:35:47<2:26:12, 3.36it/s] ``` In this example, the `Trainer` already spent ~6h30 simply skipping batches and estimates another ~2h30 to complete the task. This should not take that long, since those batches are not used and are simply discarded. Early investigation shows that the `dataloader` is invoked, which implies that it samples, fetches, and collates the data, where collating can be expensive and unnecessary. Wouldn't simply advancing the `dataloader.sampler` X times be sufficient to resume the training state (see the sketch below)? This would be a lightweight step that could be done before the training loop. Perhaps, as an alternative solution, we could temporarily attach a no-op collator to `train_dataloader` while skipping the first batches. ### Your contribution I would be glad to provide a PR. I can investigate the issue further, but I would need advice on the matter, as I don't have a setup to test all possible combinations of `dataloader`, distributed, `deepspeed`, and so forth.
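A minimal sketch of the proposal (not the actual `Trainer` implementation; it assumes a map-style dataset with a `batch_sampler` and that consuming the sampler reproduces the same RNG stream as a full iteration):

```python
import itertools

def fast_forward_sampler(dataloader, num_batches_to_skip):
    # Draw and discard the indices of the batches to skip, without fetching or collating any data.
    batch_indices = iter(dataloader.batch_sampler)
    for _ in itertools.islice(batch_indices, num_batches_to_skip):
        pass
    # The remaining batch indices can then be fetched and collated as usual.
    return batch_indices
```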
03-03-2023 16:14:28
03-03-2023 16:14:28
This is already done in the main branch. You just need to have `Accelerate` installed as an extra dependency.<|||||>Thanks for the info but I do have `accelerate==0.16.0` installed as reported in my logs but I still get extremely slow `Skipping the first batches` ```yaml CONDA: transformers-4.20.1 name: transformers-4.20.1 channels: - pytorch - huggingface - anaconda - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=2_kmp_llvm - abseil-cpp=20211102.0=hd4dd3e8_0 - aiohttp=3.8.1=py39h7f8727e_1 - aiosignal=1.2.0=pyhd3eb1b0_0 - arrow-cpp=8.0.0=py39h60b952e_0 - async-timeout=4.0.1=pyhd3eb1b0_0 - attrs=21.4.0=pyhd3eb1b0_0 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - blas=1.0=mkl - boost-cpp=1.73.0=h7f8727e_12 - bottleneck=1.3.4=py39hce1f21e_0 - brotli=1.0.9=he6710b0_2 - brotlipy=0.7.0=py39h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.18.1=h7f8727e_0 - ca-certificates=2023.01.10=h06a4308_0 - certifi=2022.12.7=py39h06a4308_0 - cffi=1.15.0=py39hd667e15_1 - charset-normalizer=2.0.4=pyhd3eb1b0_0 - click=8.0.4=py39h06a4308_0 - cryptography=37.0.1=py39h9ce1e76_0 - cudatoolkit=11.6.0=hecad31d_10 - cycler=0.11.0=pyhd3eb1b0_0 - dataclasses=0.8=pyh6d0b6a4_7 - datasets=2.3.2=py_0 - dbus=1.13.18=hb2f20db_0 - dill=0.3.4=pyhd3eb1b0_0 - expat=2.4.4=h295c915_0 - filelock=3.6.0=pyhd3eb1b0_0 - fontconfig=2.13.1=h6c09931_0 - fonttools=4.25.0=pyhd3eb1b0_0 - freetype=2.11.0=h70c0345_0 - frozenlist=1.2.0=py39h7f8727e_0 - fsspec=2022.3.0=py39h06a4308_0 - gflags=2.2.2=he6710b0_0 - giflib=5.2.1=h7b6447c_0 - glib=2.69.1=h4ff587b_1 - glog=0.5.0=h2531618_0 - grpc-cpp=1.46.1=h33aed49_0 - gst-plugins-base=1.14.0=h8213a91_2 - gstreamer=1.14.0=h28cd5cc_2 - h5py=3.7.0=py39h737f45e_0 - hdf5=1.10.6=hb1b8bf9_0 - huggingface_hub=0.8.1=py_0 - icu=58.2=he6710b0_3 - idna=3.3=pyhd3eb1b0_0 - importlib-metadata=4.11.3=py39h06a4308_0 - importlib_metadata=4.11.3=hd3eb1b0_0 - intel-openmp=2021.4.0=h06a4308_3561 - joblib=1.1.0=pyhd3eb1b0_0 - jpeg=9e=h7f8727e_0 - kiwisolver=1.4.2=py39h295c915_0 - krb5=1.19.2=hac12032_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.38=h1181459_1 - lerc=3.0=h295c915_0 - libboost=1.73.0=h28710b8_12 - libbrotlicommon=1.0.9=h166bdaf_7 - libbrotlidec=1.0.9=h166bdaf_7 - libbrotlienc=1.0.9=h166bdaf_7 - libclang=10.0.1=default_hb85057a_2 - libcurl=7.82.0=h0b77cf5_0 - libdeflate=1.8=h7f8727e_5 - libedit=3.1.20210910=h7f8727e_0 - libev=4.33=h7f8727e_1 - libevent=2.1.12=h8f2d780_0 - libffi=3.3=he6710b0_2 - libgcc=7.2.0=h69d50b8_2 - libgcc-ng=12.1.0=h8d9b700_16 - libgfortran-ng=7.5.0=ha8ba4b0_17 - libgfortran4=7.5.0=ha8ba4b0_17 - libllvm10=10.0.1=hbcb73fb_5 - libnghttp2=1.46.0=hce63b2e_0 - libpng=1.6.37=hbc83047_0 - libpq=12.9=h16c4e8d_3 - libprotobuf=3.20.1=h4ff587b_0 - libssh2=1.10.0=h8f2d780_0 - libstdcxx-ng=11.2.0=h1234567_1 - libthrift=0.15.0=hcc01f38_0 - libtiff=4.4.0=hecacb30_0 - libutf8proc=2.6.1=h27cfd23_0 - libuuid=1.0.3=h7f8727e_2 - libwebp=1.2.2=h55f646e_0 - libwebp-base=1.2.2=h7f8727e_0 - libxcb=1.15=h7f8727e_0 - libxkbcommon=1.0.1=hfa300c1_0 - libxml2=2.9.14=h74e7548_0 - libxslt=1.1.35=h4e12654_0 - libzlib=1.2.12=h166bdaf_1 - llvm-openmp=14.0.4=he0ac6c6_0 - lz4-c=1.9.3=h295c915_1 - matplotlib=3.5.1=py39h06a4308_1 - matplotlib-base=3.5.1=py39ha18d171_1 - mkl=2021.4.0=h06a4308_640 - mkl-service=2.4.0=py39h7f8727e_0 - mkl_fft=1.3.1=py39hd3c417c_0 - mkl_random=1.2.2=py39h51133e4_0 - multidict=5.2.0=py39h7f8727e_2 - multiprocess=0.70.12.2=py39h7f8727e_0 - munkres=1.1.4=py_0 - 
ncurses=6.3=h7f8727e_2 - nodejs=6.11.2=h3db8ef7_0 - nspr=4.33=h295c915_0 - nss=3.74=h0370c37_0 - numexpr=2.8.1=py39h807cd23_2 - numpy=1.22.3=py39he7a7128_0 - numpy-base=1.22.3=py39hf524024_0 - openssl=1.1.1t=h7f8727e_0 - orc=1.7.4=h07ed6aa_0 - packaging=21.3=pyhd3eb1b0_0 - pandas=1.4.2=py39h295c915_0 - parquet-cpp=1.5.1=h34088ae_4 - pcre=8.45=h295c915_0 - pillow=9.2.0=py39hace64e9_1 - pip=21.2.4=py39h06a4308_0 - ply=3.11=py39h06a4308_0 - protobuf=3.20.1=py39h295c915_0 - pyarrow=8.0.0=py39h992f0b0_0 - pycparser=2.21=pyhd3eb1b0_0 - pyopenssl=22.0.0=pyhd3eb1b0_0 - pyparsing=3.0.4=pyhd3eb1b0_0 - pyqt=5.15.7=py39h6a678d5_1 - pyqt5-sip=12.11.0=py39h6a678d5_1 - pysocks=1.7.1=py39h06a4308_0 - python=3.9.12=h12debd9_1 - python-dateutil=2.8.2=pyhd3eb1b0_0 - python-xxhash=2.0.2=py39h7f8727e_0 - python_abi=3.9=1_cp39 - pytorch=1.12.0=py3.9_cuda11.6_cudnn8.3.2_0 - pytorch-mutex=1.0=cuda - pytz=2022.1=py39h06a4308_0 - pyyaml=6.0=py39h7f8727e_1 - qt-main=5.15.2=h327a75a_6 - qt-webengine=5.15.9=hd2b0992_4 - qtwebkit=5.212=h4eab89a_4 - re2=2022.04.01=h295c915_0 - readline=8.1.2=h7f8727e_1 - regex=2022.3.15=py39h7f8727e_0 - requests=2.27.1=pyhd3eb1b0_0 - s2n=1.3.0=h9b69904_0 - sacremoses=master=py_0 - scikit-learn=1.0.2=py39h51133e4_1 - scipy=1.7.3=py39hc147768_0 - setuptools=61.2.0=py39h06a4308_0 - sip=6.6.2=py39h6a678d5_0 - six=1.16.0=pyhd3eb1b0_1 - snappy=1.1.9=h295c915_0 - sqlite=3.38.5=hc218d9a_0 - tensorboardx=2.2=pyhd3eb1b0_0 - threadpoolctl=2.2.0=pyh0d69192_0 - tk=8.6.12=h1ccaba5_0 - tokenizers=0.12.1=py39_0 - toml=0.10.2=pyhd3eb1b0_0 - tornado=6.1=py39h27cfd23_0 - tqdm=4.64.0=py39h06a4308_0 - transformers=4.20.1=pyhd8ed1ab_0 - typing-extensions=4.1.1=hd3eb1b0_0 - typing_extensions=4.1.1=pyh06a4308_0 - tzdata=2022a=hda174b7_0 - urllib3=1.26.9=py39h06a4308_0 - utf8proc=2.6.1=h27cfd23_0 - wheel=0.37.1=pyhd3eb1b0_0 - xxhash=0.8.0=h7f8727e_3 - xz=5.2.5=h7f8727e_1 - yaml=0.2.5=h7b6447c_0 - yarl=1.6.3=py39h27cfd23_0 - zipp=3.8.0=py39h06a4308_0 - zlib=1.2.12=h166bdaf_1 - zstd=1.5.2=ha4553b6_0 - pip: - absl-py==1.3.0 - accelerate==0.16.0 - asttokens==2.2.1 - backcall==0.2.0 - cachetools==5.2.1 - codetiming==1.4.0 - decorator==5.1.1 - executing==1.2.0 - google-auth==2.15.0 - google-auth-oauthlib==0.4.6 - grpcio==1.51.1 - ipython==8.7.0 - jedi==0.18.2 - levenshtein==0.20.9 - markdown==3.4.1 - markupsafe==2.1.1 - matplotlib-inline==0.1.6 - more-itertools==9.0.0 - oauthlib==3.2.2 - parso==0.8.3 - pexpect==4.8.0 - pickleshare==0.7.5 - prompt-toolkit==3.0.36 - psutil==5.9.4 - ptyprocess==0.7.0 - pudb==2022.1.3 - pure-eval==0.2.2 - py-spy==0.3.3+computecanada - pyasn1==0.4.8 - pyasn1-modules==0.2.8 - pygments==2.13.0 - rapidfuzz==2.13.7 - requests-oauthlib==1.3.1 - rsa==4.9 - stack-data==0.6.2 - tensorboard==2.11.0 - tensorboard-data-server==0.6.1 - tensorboard-plugin-wit==1.8.1 - traitlets==5.7.1 - urwid==2.1.2 - urwid-readline==0.13 - wcwidth==0.2.5 - werkzeug==2.2.2 prefix: /gpfs/projects/DT/mtp/WMT20/opt/miniconda3/envs/transformers-4.20.1 ```<|||||>I should point out that I'm running `examples/pytorch/language-modeling/run_mlm.py`.<|||||>I said on the the main branch. You have Transformers 4.20 installed, you need a source install.<|||||>Thank you for the much needed speed improvement. I'm now using `accelerate==0.16.0` with `transformers==4.27.0.dev0` and it no longer takes 11h to skip the first batches.
transformers
21,933
closed
Update README logo
# What does this PR do? Update the README logo, mainly to have it visible in dark-mode (instead of black on black).
03-03-2023 15:39:42
03-03-2023 15:39:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>nice!
transformers
21,932
closed
Some text in the international README files is in the wrong language
### System Info Some of the text in the international README files, such as the model descriptions, is in the wrong language. ### Who can help? _No response_ ### Reproduction N/A ### Expected behavior The text in each README file should be in the correct language. A good starting point is looking at the model list for each language.
03-03-2023 15:23:47
03-03-2023 15:23:47
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,931
closed
[CLAP] Support batched inputs for CLAP. Fixes pipeline issues
# What does this PR do? Support batching the `is_longer` part of the input for `zero_shot_audio_classification`. The model was not able to run on batched inputs. A test was added to cover this.
03-03-2023 14:47:48
03-03-2023 14:47:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just need to update the expected values for the doctest and will merge<|||||>Pipeline, CI and doctests are all green 😉
transformers
21,930
closed
Avoid failure in `check_repo.py` due to missing backends
# What does this PR do? My 2 PRs #21903 and #21909 forgot to check for missing backends, as `check_all_models_are_auto_configured` does, and could fail on user environments where some backends are missing. This PR fixes this by checking for missing backends (a sketch of the idea is shown below). Sorry about that. **Remark** ``` from transformers.models.auto.modeling_flax_auto import FLAX_MODEL_MAPPING_NAMES ``` This import worked even without the jax/flax backend installed. That's why I missed it.
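A rough sketch of the kind of guard added (the real helper names in `check_repo.py` may differ; the `is_*_available` utilities do exist in `transformers.utils`):

```python
from transformers.utils import is_flax_available, is_tf_available, is_torch_available

missing_backends = []
if not is_torch_available():
    missing_backends.append("PyTorch")
if not is_tf_available():
    missing_backends.append("TensorFlow")
if not is_flax_available():
    missing_backends.append("Flax")

if missing_backends:
    # Soften or skip the consistency check instead of failing on environments
    # where some frameworks are simply not installed.
    print(f"Skipping part of the check, missing backends: {missing_backends}")
```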
03-03-2023 14:00:04
03-03-2023 14:00:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,929
closed
[Flan-UL2] Add-flan-ul2
# What does this PR do? Adds the documentation for the Flan-UL2 model. cc @younesbelkada Fixes #21917
03-03-2023 13:10:05
03-03-2023 13:10:05
Also, why are the international README entries in the wrong language?<|||||>This should not be the case! Will fix this. <|||||>@sgugger it seems that the README translations are not correct for several languages (korean, jp, etc) due to the fact that they do not follow the correct structure ("released with the repository", "released with the blogpost", ..), As this requires a bit of work I think that we can address a solution in a follow-up PR! Wdyt?<|||||>Btw, I opened a new issue for the README translations to keep things nice and organized (#21932).<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot!
transformers
21,928
closed
Use large VM for `repo_utils_job`
# What does this PR do? Use a `large` VM for `repo_utils_job`, as #21856 adds `torch` to that job, which requires more memory.
03-03-2023 12:56:33
03-03-2023 12:56:33
Can you just try to do a fake code modification in one of the repo utils so that the tests are run and we can check it works?<|||||>Should be fine with 2 runs https://app.circleci.com/pipelines/github/huggingface/transformers/58978/workflows/d2090147-1c3f-4b21-8840-36409ab3424b https://app.circleci.com/pipelines/github/huggingface/transformers/58978/workflows/09bc67f7-a079-4865-9ec2-9094820afa0e/jobs/719891 and it shows docker/large<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
21,927
closed
Fix `ZeroShotAudioClassificationPipeline` doctest
# What does this PR do? Fix `ZeroShotAudioClassificationPipeline` doctest
03-03-2023 12:53:37
03-03-2023 12:53:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,926
closed
Sync processes before loading the processor in run_speech_recognition_ctc.py
# What does this PR do? Make sure all processes wait until data is saved before loading the processor from the `output_dir` in the `pytorch/speech-recognition/run_speech_recognition_ctc.py` example. Issue: * Non-main processes might try to load the processor from the `output_dir` before it is saved. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-03-2023 10:57:22
03-03-2023 10:57:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi <|||||>Updated seq2seq ASR fine-tuning script. I'm not very good with github, I guess there is no need to do a new PR.<|||||>Wait... there is something I don't get correctly. As far as I understand from the [(documentation )](https://huggingface.co/transformers/v4.11.3/main_classes/trainer.html#transformers.TrainingArguments.main_process_first) , any code inside a block `with training_args.main_process_first():` should be executed only by the main process: ``` A context manager for torch distributed environment where on needs to do something on the main process, while blocking replicas, and when it’s finished releasing the replicas. One such use is for datasets’s map feature which to be efficient should be run once on the main process, which upon completion saves a cached version of results and which then automatically gets loaded by the replicas. ``` But in my experience, the code is executed by all the processes, not just the main one. Take this minimal `example.py': ```python from transformers import TrainingArguments,HfArgumentParser from transformers.trainer_utils import is_main_process def main(): parser = HfArgumentParser((TrainingArguments,)) training_args, = parser.parse_args_into_dataclasses() rank = training_args.local_rank main_process = is_main_process(rank) print(f'\nBEFORE WITH - local_rank={rank} is_main_process={main_process}') with training_args.main_process_first(): print(f'\nINSIDE WITH - local_rank={rank}') if __name__ == "__main__": main() ``` If I execute it in a 4 GPU node: `OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node 4 example.py --output_dir none` The synching is working, but all processes execute the "INSIDE" `print` Executing with newer `torchrun` does the same: `OMP_NUM_THREADS=1 torchrun --standalone --nnodes=1 --nproc_per_node=4 example.py --output_dir none` What I am geting wrong?<|||||>No, as the name indicates, it executes the code in the context manager on the main process, and then on all the others. The code is indeed executed in all processes, just in a certain order. Since with Datasets, everything is cached, executing the preprocessing inside that contextmanager means that process 0 will do the preprocessing, and then all the other will load the result from the cache without needing to do the preprocessing.<|||||>Ok, then the PR is not correct, since all the processes will try to write the json files. I removed the original: `if is_main_process(training_args.local_rank):` that should be there inside the `with` block...<|||||>Indeed, your changes are perfect. Is this ready to be merged now?<|||||>It is working in my end without any problem<|||||>Is this good for merge @mpenagar? Changes LGTM!<|||||>Yes, it is ready. Anyway, I don't know how github works. Should I close the PR (there is a "Close" button there)?<|||||>Awesome, thanks for confirming @mpenagar and for your contribution 🤗
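To summarise the pattern discussed in this thread as a small sketch (the dataset, mapping function, and tokenizer names are placeholders from the example script, not exact code): the `.map()` runs on the main process first and fills the cache, the replicas then load from that cache, and file writes stay restricted to the main process:

```python
from transformers.trainer_utils import is_main_process

# `raw_datasets`, `prepare_dataset`, `tokenizer`, and `training_args` are assumed to already exist.
with training_args.main_process_first(desc="dataset map pre-processing"):
    # Rank 0 performs the preprocessing and caches it; the other ranks reuse the cached result.
    vectorized_datasets = raw_datasets.map(
        prepare_dataset,
        remove_columns=raw_datasets.column_names["train"],
    )

if is_main_process(training_args.local_rank):
    # Writing to disk is still done by the main process only.
    tokenizer.save_pretrained(training_args.output_dir)
```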
transformers
21,925
closed
[Whisper] Index error with model weights like base, large-v2
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. I have been using Whisper from transformers library with my own custom script. The full code is attached below for reference: ``` import argparse import base64 import os import tempfile import torch from transformers import ( AutoFeatureExtractor, AutoTokenizer, WhisperForConditionalGeneration, WhisperProcessor, pipeline, ) from utils.b64 import convert_to_base64 model_dir = "openai/whisper-base" class Whisper: def __init__(self): self.tokenizer = AutoTokenizer.from_pretrained(model_dir) self.feature_extractor = AutoFeatureExtractor.from_pretrained(model_dir) self.processor = WhisperProcessor.from_pretrained(model_dir) self.device = "cuda" if torch.cuda.is_available() else "cpu" # print([x for x in Path(model_dir).iterdir()]) self.model = WhisperForConditionalGeneration.from_pretrained(model_dir).to(self.device) self.is_timestamp = False def predict_raw(self, payload): if payload is None: return {"inputerror": "JSON expected"} if "wav_base64" not in payload: return {"inputerror": "Missing key wav_base64 in payload."} afs = payload["wav_base64"] if not isinstance(afs, str): return {"inputerror": "Audio file to passed as input in base64 format"} if "timestamps" not in payload: timestamp = self.is_timestamp elif "timestamps" in payload and type(payload["timestamps"]) != str: return {"inputerror": "timestamps should be of string datatype"} elif "timestamps" in payload and payload ["timestamps"] != "true": return {"inputerror": "timestamps payload should be of Value True"} else: timestamp = True lang = payload.get("language") print(lang) afs = base64.b64decode(afs) dno = torch.cuda.current_device() if self.device == "cuda" else -1 with tempfile.NamedTemporaryFile() as a_file: a_file.write(afs) pipe = pipeline( task="automatic-speech-recognition", model=self.model, tokenizer=self.tokenizer, feature_extractor=self.feature_extractor, framework="pt", chunk_length_s=30, generate_kwargs={"max_new_tokens": 1024}, device=dno, return_timestamps=timestamp ) if lang: self.model.config.forced_decoder_ids = self.processor.get_decoder_prompt_ids( task="transcribe", language=lang ) if timestamp: results = pipe(a_file.name) timestamp_info = [{"text": x["text"], "start": x["timestamp"][0],"end": x["timestamp"][1]} for x in results["chunks"]] return {"text": results["text"], "timestamps": timestamp_info} else: return pipe(a_file.name) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "audio_file", type=str, help="Input the path to audio file you want to transcribe", ) parser.add_argument( "--language", type=str, help="Input the language", ) args = parser.parse_args() kwargs = vars(args) b64 = convert_to_base64(kwargs["audio_file"]) if kwargs["language"]: payload = {"wav_base64": b64, "language": kwargs["language"]} payload = {"wav_base64": b64, 
"timestamps": "true"} model = Whisper() print(model.predict_raw(payload)) payload = {"wav_base64": b64} print(model.predict_raw(payload)) ``` 2. When I am using model weights of `base` and `large-v2`. The code is getting an error as below: ``` Traceback (most recent call last): File "/root/whisper_project/microservice-ai-whisper/deploy/debug.py", line 102, in <module> print(model.predict_raw(payload)) File "/root/whisper_project/microservice-ai-whisper/deploy/debug.py", line 75, in predict_raw results = pipe(a_file.name) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__ return super().__call__(inputs, **kwargs) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1101, in __call__ return next( File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1015, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 445, in _forward tokens = self.model.generate( File "/root/mambaforge/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1543, in generate return super().generate( File "/root/mambaforge/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/root/mambaforge/lib/python3.10/site-packages/transformers/generation/utils.py", line 1406, in generate return self.greedy_search( File "/root/mambaforge/lib/python3.10/site-packages/transformers/generation/utils.py", line 2211, in greedy_search next_token_logits = outputs.logits[:, -1, :] IndexError: index -1 is out of bounds for dimension 1 with size 0 ``` 3. I noticed this issue only with some model weights. The exact same code works with medium and large-v1 model weights ### Expected behavior Get the transcribed output along with corresponding timestamps
03-03-2023 10:45:25
03-03-2023 10:45:25
Do you have the sample audio that triggers the bug ? I couldn't reproduce locally. Aslo, is there any particular reason for having such a complex script ? I can see several things that could slow down this code more than it should. `pipe = pipeline(task="automatic-speech-recognition", model=model_dir, device=device)` should work out of the box. Here for instance the device is set during prediction, which is fcalled repetively could slow down things a bit (once the device is set it shouldn't move afterwards. Even the pipeline creation, is not big, but still could be done ahead of time. Cheers ! <|||||>@Narsil my apologies. I got this error with specific weights in transformers library v4.26.1. You can reproduce this error with the file in the below link. https://drive.google.com/file/d/1ubw9PLlo5NwB7xwTwfVnuRWKH8L5XPIx/view?usp=sharing I am using whisper models for offline inference. So simply passing the pipeline directly won't work: ``` pipe = pipeline(task="automatic-speech-recognition", model=model_dir, device=device) ``` That's why I have to manually pass the pipeline: ``` pipe = pipeline( task="automatic-speech-recognition", model=self.model, tokenizer=self.tokenizer, feature_extractor=self.feature_extractor, framework="pt", chunk_length_s=30, generate_kwargs={"max_new_tokens": 1024}, device=dno, return_timestamps=timestamp ) ``` Thanks for pointing the issue in device_no, I will set it in such a way it won't always be called each time when method is called. <|||||>While in the latest version in master I am getting the error: v4.27.0-dev for all model weights, when I use the above code: ``` Traceback (most recent call last): File "/root/whisper_project/microservice-ai-whisper/deploy/Whisper.py", line 99, in <module> print(model.predict_raw(payload)) ile "/root/whisper_project/microservice-ai-whisper/deploy/Whisper.py", line 72, in predict_raw results = pipe(a_file.name) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__ return super().__call__(inputs, **kwargs) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1101, in __call__ return next( File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1015, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 445, in _forward tokens = self.model.generate( File "/root/mambaforge/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1534, in generate logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)] File "/root/mambaforge/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 935, in __init__ self.no_timestamps_token_id = generate_config.no_timestamps_token_id AttributeError: 'GenerationConfig' object has no attribute 'no_timestamps_token_id' ``` ![image](https://user-images.githubusercontent.com/101088788/223036652-fa889119-fbe1-41ed-87b1-91db3cfd29ee.png) <|||||>Ok this works: ```python from transformers import pipeline pipe = pipeline( task="automatic-speech-recognition", 
model="openai/whisper-large-v2", chunk_length_s=30, device=0, return_timestamps=True, ) out = pipe("sample.wav") print(out) ``` Then you can definitely use the same simplicity with local values. Just save the pipeline `pipe.save_pretrained("whisper-local")` And then ```python from transformers import pipeline pipe = pipeline( task="automatic-speech-recognition", model="./whisper_local", chunk_length_s=30, device=0, return_timestamps=True, ) ``` Should work. No for the latest error, I'm guessing this has to do with recent changes in the configuration of whisper. Is that possible @ArthurZucker ?<|||||>Thank you @Narsil for techniques to reduce complexity. I didn't we can save pipeline also locally.<|||||>Yes, if you are using main, and want to use timestamps, you should make sure that the model has the `no_timestamp_id`. This is because this is a new feature, and for proper generation, we recommend setting a `generation_config`. Also, if you check both models, they have a different `configuration.forced_decoder_ids`. This is probably what was causing the difference in behaviour between the `large` and `base` (for example). <|||||>@ArthurZucker how can I make sure that model has the `no_timestamp_id`?<|||||>You should save/push the update `generation_config`. The simplest you can do is the following: ```python from transformers import GenerationConfig, WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained("your_pretrained_checkpoint") generation_config = GenerationConfig.from_pretrained("openai/whisper-base") # if you are using a multilingual model model.generation_config = generation_config model.push_to_hub("your_pretrained_checkpoint", use_auth_token = "your_token_if_not_logged_in", create_pr = True) ```<|||||>Thanks for your answer. Hope you will resolve this issue before taking it in next stable version. I am closing this issue for now.
transformers
21,924
closed
[WIP] [Flax] Improving Docs
# What does this PR do? Adding relevant Jax/Flax code to `<frameworkcontent>` sections in the Transformers docs. Issues: - Since Flax has no `Trainer` class, no code exists for the `Train` section in the task guides ([example](https://huggingface.co/docs/transformers/tasks/token_classification#train)). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. *Not tagging reviewers yet since this is still a WIP* - Documentation: sgugger, stevhliu and MKhalusova - Flax: sanchit-gandhi
03-03-2023 10:32:13
03-03-2023 10:32:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21924). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,923
closed
Fix `AlignModelTest` tests
# What does this PR do? - Fix `AlignModelTest` torchscript tests. This model has a non-persistent buffer and needs a bit more care (same as in the common tests): ```python self.register_buffer( "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False ) ``` - Fix `AlignModelTest.test_multi_gpu_data_parallel_forward`: as with CLIP, we need an even number for `batch_size`, since this model has `logits_per_image` and `logits_per_text`, which are of shape `(batch_size, batch_size)`, and in order to gather across devices, the second dim needs to be the same.
03-03-2023 09:58:15
03-03-2023 09:58:15
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,922
closed
update model_split_percents
# What does this PR do? #21883 changed `WhisperModelTest`'s `model_split_percents` from `[0.5, 0.7, 0.9]` to `[0.8, 0.9]` to make `test_model_parallelism` work. But this breaks `test_disk_offload`, which uses `self.model_split_percents[0]`. This PR adds `0.5` back. With `model_split_percents = [0.5, 0.8, 0.9]`, the relevant Whisper tests using `model_split_percents` all pass.
03-03-2023 09:30:40
03-03-2023 09:30:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,921
closed
Fix gradient checkpointing megatron bert
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 09:22:23
03-03-2023 09:22:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,920
closed
Fix gradient checkpointing bug in mvp
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 09:22:17
03-03-2023 09:22:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,919
closed
Fix wrong documentation about DataCollator padding defaults
# What does this PR do? Fixes some documentation for DataCollatorForTokenClassification and DataCollatorForSeq2Seq, which previously stated that no padding is the default, which is incorrect (padding is enabled by default; see the example below). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR (@sgugger).
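A small sketch of the corrected default (the checkpoint name is just an example): with no extra arguments the collator does pad.

```python
from transformers import AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer)  # padding=True is the default -> dynamic padding to the longest example

features = [tokenizer("short"), tokenizer("a noticeably longer input sentence")]
batch = collator(features)
print(batch["input_ids"].shape)  # both rows are padded to the same length
```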
03-03-2023 07:17:22
03-03-2023 07:17:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,918
closed
Fix gradient checkpointing bug in MBart
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
03-03-2023 05:33:56
03-03-2023 05:33:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,917
closed
Add FLAN-UL2
### Model description UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. FLAN-UL2 has the same configuration as the original UL2 20B model, except that it has been instruction tuned with Flan. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The model architecture (UL2) is already in Huggingface Transformers. The 20B model weights are here: https://github.com/google-research/google-research/tree/master/ul2#checkpoints
03-03-2023 05:03:59
03-03-2023 05:03:59
@DanielHesslow (since you ported the original UL2 weights). I would like to contribute, but I'm not too sure how to convert the weights from JAX to PyTorch.<|||||>I had a dirty dirty script which unfortunately lives on my old dev machine that I don't have with me at the moment 😅 I basically just loaded the t5 weights and went through and renamed every thing to match the HF format. <|||||>Hey! Thanks for opening, they will be available on the hub soon! We are converting them with @younesbelkada <|||||>The model is already out! (https://huggingface.co/google/flan-ul2) @younesbelkada has a space comparing Flan-T5-XXL and Flan-UL2 here: https://huggingface.co/spaces/ybelkada/i-like-flan-ul2
transformers
21,916
closed
[In progress] Add warning padding attention mask
# What does this PR do? Fixes #16136 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-03-2023 02:48:57
03-03-2023 02:48:57
cc @gante <|||||>Thank you for the comment! Based on my understanding, [this line of code](https://github.com/huggingface/transformers/pull/21916/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1054) enables the checking process only once during the forward pass, so it should not significantly impact performance. The current warning method only issue warnings when `attention_mask` is necessary (due to the presence of padding tokens in the input), but no `attention_mask` is provided. In other cases where `attention_mask` is not required, no warning is issued. The additional checking on special tokens allows a more detailed warning message. I agree that your suggested method is more concise and efficient, but it may generate warnings when `attention_mask` is not needed. Since it's my first time contributing to the community, I don't have a strong opinion towards either solution. The original work is by @ydshieh and @patrickvonplaten. Perhaps they have additional insights and can suggest a more effective solution.<|||||>> Why not a simple logger.warning_once() This is recently introduced :-)<|||||>@anruijian It checks `input_ids` until there is a batch in which a `pad_token_id` exists. If a user is working on a problem where they have no `pad_token_id` on their data and they don't pass the `attention_mask`, there is a check made every forward pass. I'd strongly advocate for a simple warning when the `attention_mask` is not passed 🤗 As a side note, we have related problems at other points in the code base. Getting into the habit of passing the `attention_mask` would really make everyone happier!<|||||>@gante Just to confirm before updating the PR, we are going to remove `warn_if_pad_token_in_input_ids_no_attention_mask` method and use `logger.warning_once` in `forward()`: ```python def forward(...): ... if not attention_mask: logger.warning_once( "\nWe strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the" " attention weights. " ) ... ```<|||||>@anruijian correct :) I would add a short example in the warning, such as `(e.g. to correctly mask the pad tokens)`, but I'll leave that up to you!<|||||>@gante ```python def forward(...): ... if not attention_mask: logger.warning_once( "\nWe strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the" " attention weights. Example to correctly mask the pad tokens: model(input_ids, attention_mask=attention_mask)." " See https://huggingface.co/docs/transformers/v4.23.1/en/troubleshooting#incorrect-output-when-padding-tokens-arent-masked for more details." ) ... ``` Does this example look good to you? I also link the official doc on the issue. Not sure if it's too long. Let me know what you think about this. Thanks! <|||||>@anruijian sounds good to me! (A minor nit: the link is for v4.23 of the docs, should be `https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked` instead)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,915
closed
Mask2Former ImageProcessor produces different results on Mac vs Windows.
### System Info >>> transformers.__version__ '4.27.0.dev0' >>> Python 3.10.6 Windows vs Mac ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import numpy as np import torch from PIL import Image from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-instance", reduce_labels=False, ignore_index=255, do_resize=True, size=dict(width=500, height=500), do_normalize=True, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225]) device = torch.device("cpu") image = Image.open(filename1) # filename1 is the path to the test image attached below image = image.convert('RGB') image = np.array(image) image = image.astype(np.float32) image = image.transpose(2,0,1) print(image.dtype, image.shape, image.mean((1, 2))) # float32 (3, 1000, 1000) [156.41327 149.47672 137.97989] ret = processor([image], return_tensors="pt") pixel_values = ret["pixel_values"].to(device) print(pixel_values.dtype, pixel_values.shape, pixel_values[0].mean((1, 2)), pixel_values[0].std((1, 2))) ``` Windows ``` float32 (3, 1000, 1000) [156.41327 149.47672 137.97989] mean = [-0.4228946 -0.17078026 0.25235963] std = [0.81622934 0.699496 0.71027416] ``` Mac ``` float32 (3, 1000, 1000) [156.41327 149.47672 137.97989] mean = [-1.229962 -1.1720737 -0.6407509] std = [1.5912648 1.5453817 1.7506045] ``` ### Expected behavior Same result on Windows and Mac
03-03-2023 02:38:54
03-03-2023 02:38:54
Here is the image I used. ![0023](https://user-images.githubusercontent.com/590151/222617740-0088ded3-cd49-46df-aa23-0c2a30605729.jpg) <|||||>Also cc @alaradirik <|||||>Thanks for raising this issue @nickponline and for all the details! Could you give details on how you're reading in the image e.g. through torchvision and the format the image is saved in? If I download the image in the comment above I get different results than in the snippet. ``` import torchvision # Load in downloaded image image = torchvision.io.read_image('222617740-0088ded3-cd49-46df-aa23-0c2a30605729.jpg') image = image.numpy() print(image.dtype, image.shape, image.sum()) # uint8 (3, 1000, 1000) 443861838 ```<|||||>@amyeroberts @sgugger I'm reading the image with PIL ``` from PIL import Image image = Image.open(filename) image = image.convert('RGB') image = np.array(image) image = image.astype(np.float32) image = image.transpose(2,0,1) ``` At that point I have confirmed the the `image` is identical on both Windows and Mac. Also after inference further in the code the Mac result is the worse than the windows result if that help. But it's the image processor that is generating a different result for identical inputs. <|||||>@amyeroberts @sgugger the means and stds of the input image are different on Windows and Mac after `ImageProcessor` forward call: Windows ``` mean = [-0.4228946 -0.17078026 0.25235963] std = [0.81622934 0.699496 0.71027416] ``` Mac ``` mean = [-1.229962 -1.1720737 -0.6407509] std = [1.5912648 1.5453817 1.7506045] ```<|||||>@amyeroberts @sgugger I updated the repro snippet above to make it easier to confirm.<|||||>@nickponline - thank you very much for extra details! I'll dig into this and try to figure out what's happening 🕵️‍♀️ <|||||>@amyeroberts @sgugger I feel the issue is here: https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L159 The image is already in the range `[0..255]` and after the rescale and then `image.astype(np.uint8)` the arrays are different on Windows and Mac. <|||||>Calling in backup here: https://stackoverflow.com/questions/75632469/why-does-np-astypeuint8-give-different-results-on-windows-versus-mac 😀<|||||>Confirming that this works with `Python 3.10.6+ (Mac) Numpy 1.24.2+`. ShruggingFace 🤷‍♂️. It must be a bug or change of behavior in Numpy or Python. Can close. <|||||>@nickponline Thanks for the updates and all the work digging into this! Looking at the line you highlighted and conversation on stackoverflow, it seems there's two things happening, resulting in this issue: * Rescaling the pixel values by multiplying by 255 if the input image is of type `float32`. Resulting in pixel values between 0 and 65,025. Then casting to `uint8` [here](https://github.com/huggingface/transformers/blob/fcf813417aa34f3a0ea7d283f7d4f6b0834cf098/src/transformers/image_transforms.py#L162) * Different overflow behaviour in numpy - as highlighted in [the stackoverflow comment](https://stackoverflow.com/a/75632979) In this case, updating numpy will give consistent results between the OS's, however the resulting pixel_values from the image processor may not be sensible or produce good predictions from the model, depending on how the values are cast when overflow occurs. The first issue is tricky to handle - the logic is partly there for backwards compatibility as resizing was handled by the PIL library and, when converting to PIL images, whether to rescale the pixel values was inferred by the type. 
The assumption is that raw pixel values are of an int type and between 0-255; unnormalized float type pixel values have values between 0-1. I think there's two possible things we can do to address these issues in the future: * Add an additional check on pixel values before rescaling * Raise a warning when casting to uint8 if overflow is going to occur I'll open a PR for these. As a side note, you don't need to convert your images to float before feeding into the image processor. You can pass in the PIL images directly. p.s. thanks for coining 'Shrugging Face' - I shall be using it in the future!
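A minimal sketch of the overflow behaviour described above (illustrative values only, not the image processor code): casting out-of-range floats to `uint8` is undefined behaviour at the C level, so the result can legitimately differ across platforms and numpy versions.

```python
import numpy as np

# Pixel values that were already in 0-255 and then got multiplied by 255 again
# land far outside the uint8 range (up to 65,025).
rescaled = np.array([10.0, 200.0, 640.0, 65025.0], dtype=np.float32)

# The in-range values are stable; what the out-of-range ones become depends on the
# platform and numpy version, which is the Windows-vs-Mac discrepancy seen here.
print(rescaled.astype(np.uint8))
```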
transformers
21,914
closed
feat: filter try/except when looking at custom code
# What does this PR do? Fixes #21912 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
03-03-2023 02:30:36
03-03-2023 02:30:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again!<|||||>Happy to help! Thanks for the great packages!
transformers
21,913
closed
[performance] from_pretrained is still much slower than torch.load and seems to be initializing weights
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 2.0.0.dev20230224+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @stas00, @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Loading a model with `from_pretrained` takes much longer than the underlying torch.load. For example, for the `Salesforce/codegen-6B-mono` model, `CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')` takes ~38 seconds, whereas `torch.load()` on its `pytorch_model.bin` takes just ~5.4 seconds. This is very similar to #9205, but is happening with the latest transformers from pip (4.26.1), so possibly a regression? Short repro: ```python import time import torch from transformers import CodeGenForCausalLM t1 = time.time() CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono') t2 = time.time() print("Load took", t2-t1, "seconds") ``` Prints **Load took 37.78910255432129 seconds** ```python import time import torch from transformers.utils import cached_file torch.load(cached_file('Salesforce/codegen-6B-mono', 'pytorch_model.bin')) ``` Prints **Load took 5.443041801452637 seconds** Based on profiling the HF from_pretrained script, it seems like ~75% of the time is being spent doing random initialization of weights that are about to be overwritten. This is the same problem that was fixed in PR #11471 so I'm not sure what's going on here. Here's the cProfile output and output from gprof2dot: [loadmodel_profile.txt](https://github.com/huggingface/transformers/files/10877225/loadmodel_profile.txt) [hf_loadmodel_new.pdf](https://github.com/huggingface/transformers/files/10877227/hf_loadmodel_new.pdf) ### Expected behavior `from_pretrained` should skip weight initialization when loading a pretrained model.
03-03-2023 01:24:02
03-03-2023 01:24:02
Thank you for trying to analyse this, @moyix and for wanting to make things faster. I dug into it and here is what I have to share with you. # What's happening for real It's pretty clear from your profiler report that the diff comes from weights init which as you said get overwritten with weights. Indeed this is what's happening here. Except you are mixing 2 things. As you discovered lazy model init was implemented here https://github.com/huggingface/transformers/pull/11471 and it later was improved upon in multiple PRs. This was done only for `_init_weights` functions defined in the modeling code of `transformers`. Now you're forgetting about calls like https://github.com/huggingface/transformers/blob/37e0974afcbccdc85da59d51b44e1437b6b3caea/src/transformers/models/codegen/modeling_codegen.py#L117-L119 which of course by default call their init functions: ``` File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/codegen/modeling_codegen.py", line 117, in __init__ self.qkv_proj = nn.Linear(self.embed_dim, self.embed_dim * 3, bias=False) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 101, in __init__ self.reset_parameters() File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 107, in reset_parameters init.kaiming_uniform_(self.weight, a=math.sqrt(5)) File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/init.py", line 396, in kaiming_uniform_ ``` So that overhead all comes from pytorch `nn.Module` submodules and not `_init_weights` defined in the modeling code of `transformers`. You're wanting to use a huge 14GB model and it surely adds some 30sec to init it. The problem is that you're comparing loading the weights only with instantiating the model plus loading the weights, so of course they aren't the same thing. But we agree that it's a pointless waste of compute and time to init weights that are going to be overwritten moments later. To test I changed pytorch's `kaiming_uniform_` to be: ``` def kaiming_uniform_( tensor: Tensor, a: float = 0, mode: str = 'fan_in', nonlinearity: str = 'leaky_relu' ): return tensor ``` and the same for `uniform_` and `from_pretrained` was as fast as you wanted it to be. hint: perhaps you can use it as a hack until a better solution is provided - simply monkey patch the init functions with a no-op (I hope I covered the ones that are used here). ``` from transformers import CodeGenForCausalLM import torch.nn.init torch.nn.init.kaiming_uniform_ = lambda x, *args, **kwargs: x torch.nn.init.uniform_ = lambda x, *args, **kwargs: x CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono') ``` of course, I assume you are either doing inference or you have all weights in the distributed file - so no important init is missed. this I think should give you the speed closer to `torch.load` # What can be done But why you'd say can't you skip those inits? We actually are able to do so since pytorch-1.10 where special functionality was added. - https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html#torch.nn.utils.skip_init - https://pytorch.org/tutorials/prototype/skip_param_init.html Looking at the requirements it actually appears to be possible despite needing to support pytorch<1.10 as well. The modules will have to be adapted to meet 2 requirements: https://pytorch.org/tutorials/prototype/skip_param_init.html#updating-modules-to-support-skipping-initialization I will repaste them here: 1. 
The module must accept a device kwarg in its constructor that is passed to any parameters or buffers created during construction. 2. The module must not perform any computation on parameters or buffers in its constructor except initialization (i.e. functions from torch.nn.init). The first one is certainly possible since doing: ``` - def __init__(self, foo, bar): + def __init__(self, foo, bar, device=None): ``` should be backward compatible. I think the 2nd requirement should be somewhat possible, but I can't speak for the multitude of models we have. Once this is done, the rest of the `from_pretrained` will need to be adapted to use the `device` argument as in the example of the tutorial, ``` m = nn.Linear(10, 5, device='meta') ``` but of course it will be `m = ModelName(..., device='meta')` I think this needs to happen sooner than later as it'd greatly simplify the various juggling we have during the loading process (after updating all the models, e.g. like `low_cpu_mem_usage` functionality). But needing to support torch<1.10 might make this somewhat messy. I'm not sure. So now let me bring here @sgugger and @patrickvonplaten to take over as I'm currently working on a different project, and they can decide on whether the project is ready for this major change or not quite yet and then you can use my hack ;) p.s. BTW, while studying your report I have invalidated your suggestion that there was a general `from_pretrained` regression, but to do that I had to use a different class since `CodeGenForCausalLM` was added only recently. I went all the way back to `transformers==4.14` and `t5-large` loads with the same speed as the latest version. **edit** Additional solutions are added in: - https://github.com/huggingface/transformers/issues/21913#issuecomment-1453482689 - https://github.com/huggingface/transformers/issues/21913#issuecomment-1453858274<|||||>I'm curious, are you doing inference or finetuning? Because for the latter usually the init overhead is usually irrelevant. Fast loading is also important for debug. I think I'm going to propose to pytorch this new feature: ``` with torch.inference: m = MyModel(...) ``` and it would just work and be really fast w/o the overhead of init'ing weights which will be overloaded from pretrained weights. <|||||>Thanks for the very comprehensive answer! That makes perfect sense :) I am indeed doing inference and trying to get the batch size correct – so having to wait a long time for the model load each attempt (only to get a CUDA out of memory error) was a bit painful. That hack helps a lot for now, thanks!<|||||>Using `low_cpu_mem_usage=True` will initialize the model on the meta device (requires Accelerate as an extra dep) and should speed up the initialization as a result. This will become the default mid-term but we need some more preparation work by making the tests more robust for `from_pretrained` to make sure we absolutely don't break anything.<|||||>Some additional solutions coming from pytorch-slack where I asked [this question](https://pytorch.slack.com/archives/C3PDTEV8E/p1677813090248699): 1. install pytorch-nightly from instructions at https://pytorch.org/get-started/locally/ (or if you read this later when pytorch==2.0 is released any 2.0 and higher version will do). now you can do: ``` with torch.device("cuda"): model = CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono') ``` so it instantiates the model directly on your gpu and all the inits are run much faster. 
This solution is just a bit slower than cancelling out the init functions. plus your model will already be on gpu, so no copying overhead from cpu. Instead of using the context manager you can just set the default device like so: ``` torch.set_default_device('cuda') ``` and you no longer need to indent your existing code. 1b. Using materialization on the `meta` device will be really fast as it will cancel out the init functions and won't even waste time on allocating memory for the weights: ``` with torch.device("meta"): model = CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono') ``` but the resulting model isn't usable right away and requires additional manipulations to materialize it on the target device with the preloaded weights. This most likely have to be done by `transformers` unless pytorch comes up with a magical method a user could do themselves. credits: @alband and @stephenroller 2. Another solution comes from https://pytorch.org/torchdistx/latest/deferred_init.html, but it requires tweaking `from_pretrained` to support `from torchdistx.deferred_init import deferred_init, materialize_module` and this experimental package isn't easy to install since it requires CUDA extensions building (though not for this functionality), so we can't make `transformers` depend on it. It will have to be upstreamed into pytorch first. credits: @cbalioglu<|||||>In extension of @stas00 's number one, one might enhance the context manager solution with a diversion of the `init` functions. I wrote up [a bit more detail on my blog](http://lernapparat.de/faster-model-init). <|||||>@stas00 your solution is great, tested it a bit. Is there any timeline for this feature and could one help with integration? Would be interested to know what are the team's thoughts on integrating this feature within the `Trainer` but also `pipelines`? Happy to help if I can!<|||||>For the timeline questions we need to ask @sgugger <|||||>The `low_cpu_mem_usage=True` option is already there in Transformers and usable today. Changing the default will take more time to ensure backward compatibility.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I know this issue is closed but just some relevant feedback: I'm also facing extremely slow performance with the `from_pretrained` method, this time in a conda environment. I tried the `low_cpu_mem_usage=True` solution, but this requires a more recent version of `transformers` than is available in the conda repos so I can't. Reported already on [Stack Overflow](https://stackoverflow.com/questions/76059654/getting-a-error-when-running-gptneoxforcausallm-from-transformers-library-namee). TLDR: for a chunk of users (anyone who has to use a conda environment) the `low_cpu_mem_usage=True` parameter is not available or usable.<|||||>Hey @tomwagstaff-opml, thanks for reporting. I believe you're using the `transformers` version from the main channel of anaconda, but we don't (and none of the open-source project maintainers do) maintain this version. This is maintained by the anaconda team. In our README we indicate that you should use the [huggingface channel in order to install the package](https://github.com/huggingface/transformers#with-conda). 
Please install it as such: ``` conda install -c huggingface transformers ``` or, alternatively, use the conda-forge channel which is also the latest version: ``` conda install -c conda-forge transformers ```<|||||>Thanks for your help @LysandreJik - installing `transformers` from the Hugging Face channel has worked and allowed me to try out the `low_cpu_mem_usage` parameter
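For readers landing on this thread, a minimal sketch of the `low_cpu_mem_usage` path discussed above (this is the documented `from_pretrained` flag; it requires `accelerate` to be installed):

```python
from transformers import AutoModelForCausalLM

# low_cpu_mem_usage=True builds the model skeleton without running the random weight
# init and then loads the checkpoint weights directly, skipping the slow init pass.
model = AutoModelForCausalLM.from_pretrained(
    "Salesforce/codegen-6B-mono", low_cpu_mem_usage=True
)
```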
transformers
21,912
closed
Allow for try/except imports for custom code
### Feature request When uploading a model with custom code, I wanted to try and use Flash Attention in one of the modules. However, to get around the case where people might use it on CPU, I added a `try/except` block for the import. I then get an error when downloading locally, like `This modeling file requires the following packages that were not found in your environment`, which seems to come from [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/dynamic_module_utils.py#L112). ### Motivation I want to be able to write custom model code that allows for optional imports if possible (like FlashAttention). ### Your contribution I could take a crack at a PR?
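For illustration, the kind of guarded import meant above looks roughly like the sketch below; the `flash_attn` package is only an example of an optional dependency, not taken from the original report:

```python
# inside a custom modeling file uploaded with the model
try:
    import flash_attn  # optional dependency, only usable on GPU

    _flash_attn_available = True
except ImportError:
    flash_attn = None
    _flash_attn_available = False
```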
03-02-2023 23:13:29
03-02-2023 23:13:29
Indeed that sounds like a nice feature to have. I think the easiest way to deal with it would be to remove all try/except blocks from the content [here](https://github.com/huggingface/transformers/blob/37e0974afcbccdc85da59d51b44e1437b6b3caea/src/transformers/dynamic_module_utils.py#L117) before the tests of the imports. If you want to take a stab at it, happy to review a PR!<|||||>Yes I'll give it a go and send a PR out in a few!
transformers
21,911
closed
Avoid modeling tests run in pipeline CI jobs
# What does this PR do? Avoid running modeling tests in pipeline CI jobs. PR #21887 applied ``` @is_pipeline_test class PipelineTesterMixin: ``` Together with the changes in #21516 ``` class BertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): ``` these two changes make every modeling test method count as a pipeline test, so the CircleCI pipeline jobs run all pipeline tests plus all model tests. This causes OOM, as pipeline jobs are run with `-n 8`. This PR applies `@is_pipeline_test` to each test method instead of the test class to avoid this issue. ### Effect Run ```python python -m pytest -m is_pipeline_test -v tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model ``` #### On main: the test passes #### On this PR: the test is deselected
03-02-2023 19:49:26
03-02-2023 19:49:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21911). All of your documentation changes will be reflected on that endpoint.
transformers
21,910
closed
Fix doctests for TFVisionTextDualEncoder
So I might have just copy-pasted all the PyTorch doctests into the TF class and made the CI angry. But it's fixed now!
03-02-2023 19:45:04
03-02-2023 19:45:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21910). All of your documentation changes will be reflected on that endpoint.
transformers
21,909
closed
Cleanup more auto mapping names
# What does this PR do? Just a follow-up PR of #21903. Rebasing on main once #21911 (or a fix from your side) is merged will be OK.
03-02-2023 19:31:39
03-02-2023 19:31:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,908
closed
Temporarily skip 3 tests in `BridgeTowerModelTest`
# What does this PR do? skip for now
03-02-2023 18:04:13
03-02-2023 18:04:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,907
closed
Update modeling_funnel.py
Just for check, don't merge # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-02-2023 17:23:51
03-02-2023 17:23:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,906
closed
faster forward following what is done for images
# What does this PR do? Follow up of #21897. Keeps batch_size active, but restricts it to the image part. Calculates candidate_labels features only once (there's no way with this current pipeline to send different candidate_labels, so we can take the optimization). Ultra-narrowed for CLIP, but the pipeline has existed for 4 months now with no new models. Given the popularity of CLIP for diffusion models, I think this is OK to overspecify. We can always relax later when new models come in. This allows downgrading from ChunkPipeline to a regular Pipeline.
03-02-2023 17:17:26
03-02-2023 17:17:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failure seems unrelated
transformers
21,905
closed
Adds "causal-lm-with-past" to codegen
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-02-2023 16:36:11
03-02-2023 16:36:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR, but ONNX support is now in the optimum library and we don't accept new PRs in Transformers.<|||||>Okay, thanks, closing then!
transformers
21,904
closed
Add Blip and Blip2 for pipeline tests
# What does this PR do? A continuation of (not merged) #21802. @NielsRogge I will add you as the contributor. @Narsil Just in case you want to take a look too, as you reviewed #21904, and asked for adding the tests.
03-02-2023 15:48:03
03-02-2023 15:48:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you ! !!!
transformers
21,903
closed
Clean up auto mapping names
# What does this PR do? Clean up auto mapping names + add a check 🐕‍🦺
03-02-2023 13:29:32
03-02-2023 13:29:32
so far on `main` ```bash `layoutxlm` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`. `wav2vec2_with_lm` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`. `blip_2` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`. `decision_transformer_gpt2` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`. `nllb` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`. ```<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
21,902
closed
how to beam search
The current interface is similar to `output=model.generate(**inputs, num_beams=4, no_repeat_ngram_size=7)`, but decoding one by one is too slow. Does beam search support batch-by-batch decoding? If it is supported, how should it be written?
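For reference, a minimal sketch of batched beam search (the `t5-small` checkpoint is only a stand-in): `generate` accepts a padded batch, so several inputs are decoded in one call rather than one by one.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Pad the batch so all sequences share one tensor, then beam-search them together.
inputs = tokenizer(
    ["translate English to German: Hello world.", "translate English to German: How are you?"],
    return_tensors="pt",
    padding=True,
)
outputs = model.generate(**inputs, num_beams=4, no_repeat_ngram_size=7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```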
03-02-2023 13:16:17
03-02-2023 13:16:17
Please use the [forums](https://discuss.huggingface.co/) to ask such questions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,901
closed
[WIP] Creating automated test with release candidate of safetensors.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-02-2023 13:13:18
03-02-2023 13:13:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21901). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,900
closed
Make TFForceTokensLogitsProcessor exportable as tf concrete function.
Previously we were doing an inner conversion from `[[int]]` to `dict[int, int]`, but a `dict` is not something concrete functions handle well. This PR provides a fully compatible concrete function export, removing the inner transformation to `dict` and using `tf.Tensor` only.
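As a rough illustration of the design choice (not the exact code of this PR), replacing the step-to-token `dict` with a dense tensor keeps the lookup expressible inside a traced function; the token ids below are made-up placeholders:

```python
import tensorflow as tf

# dict lookup: fine in eager Python, awkward to carry into a concrete function
force_token_map = {1: 50259, 2: 50359}

# tensor alternative: index i holds the token forced at step i, -1 means "no forced token"
force_token_array = tf.constant([-1, 50259, 50359], dtype=tf.int32)

@tf.function(input_signature=[tf.TensorSpec([], tf.int32)])
def forced_token_at(step):
    return force_token_array[step]

print(forced_token_at(tf.constant(2)))  # -> 50359
```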
03-02-2023 12:50:37
03-02-2023 12:50:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21900). All of your documentation changes will be reflected on that endpoint.<|||||>Also, check CI haha<|||||>Will handle these 👍🏻 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,899
closed
Enable passing two or more pretrained models to the Trainer.
### Feature request I tried a compound architecture that uses two tower models that get initialized with from_pretrained methods. Then I wrapped the custom model class with the pretrained model class. It works well on a single GPU but fails in multi-GPU settings. ### Motivation It would be useful for changing models like DPR. ### Your contribution I have tested a few methods and can help to improve this.
03-02-2023 11:34:47
03-02-2023 11:34:47
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,898
closed
fix typo in Bart's attention
# What does this PR do? Fix a typo in Bart's attention.
03-02-2023 11:15:58
03-02-2023 11:15:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>thank you!
transformers
21,897
closed
Faster zero shot image
# What does this PR do? Should superseed: https://github.com/huggingface/transformers/pull/21861#event-8644623666 - Keeps `batch_size` active, but restricts it to the `image`part. - Calculates `candidate_labels` features only once (there's no way with this current pipeline to send different candidate_labels so we can take the optimization. - Ultra narrowed for `CLIP` but the pipeline has existed for 4 months now with no new models. Given the popularity of CLIP for diffusion models, I think this is ok to overspecify. We can always relax later when new models come in. This allows to downgrade from ChunkPipeline to regular PIpeline <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
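For orientation, typical usage of the pipeline being optimized here (the checkpoint name is the usual CLIP example, not prescribed by this PR):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
# Image features are computed per image; the candidate label features only need to be
# computed once per call, which is the optimization described in this PR.
preds = classifier(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["two cats", "a dog", "an airplane"],
)
print(preds)
```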
03-02-2023 10:24:52
03-02-2023 10:24:52
@yessenzhar<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Does it also do a cross product ? Then possibly yes.<|||||>I mostly copy pasted the whole sequence. Will open a PR <|||||>> I mostly copy pasted the whole sequence. Will open a PR The cross product is in the model itself (image_batch_size x text_batch_size).<|||||>CLAP has the same implementation for loss and etc. So we should see a similar behaviour <|||||>@Narsil Thank you for sorting this out this quick.
transformers
21,896
closed
[T5 doc] Fix confusing documentation about `d_kv`
# What does this PR do? Fixes #21641. The documentation mentioned that `d_kv` must be equal to `d_model//num_heads`, while this does not hold in the code. The code states that `d_kv = inner_dim//num_heads`, but `inner_dim` can be different from `d_model`.
03-02-2023 09:13:45
03-02-2023 09:13:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,895
closed
Make error message more informative
# What does this PR do? Make error message more descriptive by adding one word, changes the message `The hidden size ({config.hidden_size}) is not a multiple of the number of attention` to `The hidden size ({config.hidden_size}) is not a multiple of the number of attention heads` in the previous error message, the word "head" was missing <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-02-2023 08:15:55
03-02-2023 08:15:55
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,894
closed
Add FlaxWhisperForAudioClassification model
# What does this PR do? Fixes #21779. Please review and let me know of any changes needed @sanchit-gandhi
03-02-2023 07:53:14
03-02-2023 07:53:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21894). All of your documentation changes will be reflected on that endpoint.<|||||>> Modelling code looks good @raghavanone! Nice one on getting this working so quickly 🙌 Do you want to have a go at adding the encoder-only tests? See the PyTorch WhisperForAudioClassficiation PR for details, think you can also add these quite quickly :) I have added the Encoder tests, But some test are failing, The FlaxWhisperForAudioClassification class extends FlaxWhisperPreTrainedModel . Due to this inheritance, the call method expects decoder related params. Should the FlaxWhisperForAudioClassification not extend FlaxWhisperPreTrainedModel instead create a new pretrainedclass ? <|||||>Hey @raghavanone! The PyTorch model has just been merged (https://github.com/huggingface/transformers/pull/21754), so you can rebase onto main to get the required config changes: ``` git fetch upstream git rebase upstream/main ``` This will fix the failing Flax tests we're getting here: https://app.circleci.com/pipelines/github/huggingface/transformers/58972/workflows/2388bd70-553e-412f-9ee7-0599cace5639/jobs/719829 The only thing to make sure is that the first time you push after rebasing, you **force push** to origin: ``` git add . git commit -m "Some new changes after rebase" git push -f origin fix_issue_21779 ``` You only have to force push once, the next time you can just regular push: ``` git add . git commit -m "Some more changes" git push -u origin fix_issue_21779 ```<|||||>@sanchit-gandhi There are 2 test failing here, I am unable to get the same failure locally in my machine. Any pointers on how to replicate failing test and fix it ? <|||||>Hey @raghavanone! Would you mind going through the previous review comments and marking them as resolved where you've addressed them? I'll then get you a final review asap! Thanks!<|||||>Hey @raghavanone - I think the commit history has been corrupted for this PR? Gentle reminder that one must force push after rebasing: https://github.com/huggingface/transformers/pull/21894#issuecomment-1458359220 Think this is probably the culprit for the 250 extra commits! In this instance, it's probably best to close this PR in favour of a new one that only contains the new changes you with to merge. Sorry about that!<|||||>Closing in favour of #22883
transformers
21,893
closed
[ZAC] fix ci daily
# What does this PR do? Fixes the loading of the zero-shot audio classification pipeline by providing a correct revision. The previous one was destroyed when checkpoints were overwritten at some point (that's also when the model card disappeared).
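For illustration, loading the pipeline pinned to an explicit revision (the checkpoint and the `f39917b` hash are the ones registered as the task default elsewhere in this changeset) looks roughly like:

```python
from transformers import pipeline

# Pinning the revision protects the pipeline from the checkpoint being overwritten later.
classifier = pipeline(
    "zero-shot-audio-classification",
    model="laion/clap-htsat-fused",
    revision="f39917b",
)
```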
03-02-2023 07:33:44
03-02-2023 07:33:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated
transformers
21,892
closed
[i18n-<languageCode>] Translating docs to <languageName>
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
03-02-2023 07:21:04
03-02-2023 07:21:04
transformers
21,891
closed
[Time-Series] Autoformer model
# What does this PR do? Adding Time Series Autoformer model https://arxiv.org/abs/2106.13008 Related issue: #21890 @kashif :) ## Differences between the vanilla transformer <img width="484" alt="image" src="https://user-images.githubusercontent.com/17675462/229438415-6a5bba78-c3bf-47b1-966a-f3664a0921e0.png"> * Introduced Series Decomposition in encoder & decoder --- done, waiting for review * Replaced canonical self-attention with autocorrelation block --- done, waiting for review * Added seasonal and trend inputs for the decoder --- added todo places in the code * trend seasonal pseudo code: ``` mean_data = mean(enc_input) zeros = zeros() # size: x_dec.size(0), prediction_length, x_dec.size(2) seasonal_init, trend_init = decomp_layer(enc_input) trend_init = concat(trend_init, mean_data) seasonal_init = concat(seasonal_init, zeros) ```
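A rough, self-contained PyTorch rendering of the pseudo code above (the moving-average decomposition here is a generic stand-in, not the final Autoformer layer; variable names and shapes follow the pseudo code, not the merged implementation):

```python
import torch
import torch.nn as nn

class SeriesDecomposition(nn.Module):
    """Split a series into a moving-average trend and the seasonal remainder."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2, count_include_pad=False)

    def forward(self, x):                       # x: (batch, length, d_model)
        trend = self.avg(x.permute(0, 2, 1)).permute(0, 2, 1)
        return x - trend, trend                 # seasonal, trend

batch, context_length, prediction_length, d_model = 2, 24, 12, 8
enc_input = torch.randn(batch, context_length, d_model)
x_dec = torch.randn(batch, prediction_length, d_model)
decomp_layer = SeriesDecomposition()

mean_data = enc_input.mean(dim=1, keepdim=True).repeat(1, prediction_length, 1)
zeros = torch.zeros(x_dec.size(0), prediction_length, x_dec.size(2))
seasonal_init, trend_init = decomp_layer(enc_input)
trend_init = torch.cat([trend_init, mean_data], dim=1)    # (batch, context + prediction, d_model)
seasonal_init = torch.cat([seasonal_init, zeros], dim=1)  # (batch, context + prediction, d_model)
```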
03-02-2023 03:17:21
03-02-2023 03:17:21
One small open issue is left: adding the series decomposition to the decoder with the trend input. Will do after the initial review :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Some of the TF tests are failing and I believe they are unrelated.<|||||>PR is green <|||||>thank you @amyeroberts will get it fixed!<|||||>@amyeroberts, thank you for the comprehensive CR! I sincerely appreciate the effort and time you dedicated to thoroughly assessing this pull request. Will be fixed!<|||||>CR changes I did: * Added `layer_norm_eps` * Model layers now take the config, except the `AutoformerAttention`, which I wasn't sure about * Better variable names * Addressed the questions I could answer `fix-copies` is failing because of diffs with the time-series-transformer. We need to decide whether to change the time-series-transformer here, or to remove the "copied from..." comments. @kashif @amyeroberts
transformers
21,890
closed
[Time-Series] Autoformer - Transformer For Time-Series Forecasting
# Model Description Following #20903 and #21099, Autoformer is the next Transformer in the series, published in NIPS 21. * Paper: [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting ](https://arxiv.org/abs/2106.13008) * Model implementation: https://github.com/thuml/Autoformer I would like to implement the model :) Thank you, Eli ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation @wuhaixu2016 - repository creator @NielsRogge @kashif
03-02-2023 03:11:17
03-02-2023 03:11:17
transformers
21,889
closed
Add `inputs_embeds` functionality when generating with BioGPT
# What does this PR do? This PR extends https://github.com/huggingface/transformers/pull/21405 by @gante to BioGPT, making it accept inputs_embeds when generating. ``` import torch from transformers import BioGptTokenizer, BioGptForCausalLM model = BioGptForCausalLM.from_pretrained("microsoft/biogpt") tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt") inputs_embeds = torch.rand((1, 1, 1024)) # 1 dummy soft-prompt token embeddings attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long) filler_input_ids = torch.LongTensor([[model.config.bos_token_id]]) model.generate(filler_input_ids,attention_mask = attention_mask, inputs_embeds=inputs_embeds, max_new_tokens=300, num_beams=4) ``` # Who can Review? @gante
03-01-2023 21:57:22
03-01-2023 21:57:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>review request: @gante <|||||>Thanks for your contribution!
transformers
21,888
closed
`shuffle` argument when initializing Sampler-related classes in `trainer.py`
### Feature request Add support for an additional keyword argument (`shuffle`) in `DistributedSampler` (https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L839). ### Motivation It samples data in a deterministic way across multiple processes. However, when using a contiguous array of pre-encoded data, using a shuffler in the sampler could avoid sequential input_ids for extreme long files and possibly diminish the number of spikes in the loss. ### Your contribution Would you consider worth in adding it? In my use case, I have to re-write the whole `_get_train_sampler()` just to set `shuffle=True`. If necessary, I can contribute with a PR. Thanks for your attention and best regards, Gustavo.
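For context, the workaround described above (subclassing just to control the sampler flag) looks roughly like the hedged sketch below; the class name is made up for illustration:

```python
from torch.utils.data.distributed import DistributedSampler
from transformers import Trainer

class ShuffledSamplerTrainer(Trainer):
    # Hypothetical override, only to show where the shuffle flag would be passed.
    def _get_train_sampler(self):
        return DistributedSampler(self.train_dataset, shuffle=True)
```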
03-01-2023 21:07:36
03-01-2023 21:07:36
`shuffle=True` is the default for this DistributedSampler (see [doc](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler)). There is no need to pass it.<|||||>You are right, my bad. I overlooked the documentation.
transformers
21,887
closed
Mark pipeline tests to skip them easily
# What does this PR do? This PR re-introduces the `is_pipeline_test` marker to easily flag pipeline tests. The problem is that since #21516 pipeline tests aren't isolated in the pipelines folder anymore (it looks like it but there is an inheritance of the pipeline tester in all model test classes). Thus the `is_pipeline_test` will allow us to flag those tests. By setting the RUN_PIPELINE_TESTS environment variable to False, we can skip all pipeline tests. Contrarily to other similar env variables, this one defaults to True. This is because it's very annoying to have to remember to add a `RUN_PIPELINE_TESTS=yes` before the pytest command when trying to debug things locally. I'll actually propose to switch the defaults of all other env variables in a follow-up PR. The main thing is to set it to False when running test jobs unrelated to pipelines (which this PR does).
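For illustration, a sketch of how the marker and the environment variable interact (the test class below is made up; the decorator is the one this PR re-introduces in the test utilities):

```python
import unittest

from transformers.testing_utils import is_pipeline_test

class ExamplePipelineTests(unittest.TestCase):
    @is_pipeline_test  # skipped whenever RUN_PIPELINE_TESTS is set to a falsy value
    def test_pipeline_feature_extraction(self):
        self.assertTrue(True)
```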
03-01-2023 19:50:44
03-01-2023 19:50:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,886
closed
Fix pipe comm test
# What does this PR do? PR #21600 added task `zero-shot-audio-classification`: ```python "zero-shot-audio-classification": { "impl": ZeroShotAudioClassificationPipeline, "tf": (TFAutoModel,) if is_tf_available() else (), "pt": (AutoModel,) if is_torch_available() else (), "default": { "model": { "pt": ("laion/clap-htsat-fused", "f39917b"), } }, "type": "multimodal", }, ``` But this fails the test `tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf` as there is `tf` key under the `task`, but not under `task/default`. There is a check in that test method: ```python if len(relevant_auto_classes) == 0: # task has no default logger.debug(f"{task} in {framework} has no default") return ``` So I decide to set `"tf": (),` for now. (We have no `TFClap` yet)
03-01-2023 19:48:09
03-01-2023 19:48:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,885
closed
Text-classification example does not work as is on 4.27.0.dev
### System Info Transformers: 4.27.0.dev Example in: examples/tensorflow/text-classification/run_glue.py fails when running the example given in the README. """ python run_glue.py \ --model_name_or_path distilbert-base-cased \ --task_name mnli \ --do_train \ --do_eval \ --do_predict \ --output_dir outdir \ --predict_file data_to_predict.json """ Issues: 1. missing data_to_predict.json 2. output_dir not mentioned in README. 3. Fails with the following error without --predict_file + --output_dir ![image](https://user-images.githubusercontent.com/14943401/222235813-1fb96343-9e3d-461a-bd83-039ecbf814e5.png) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the script given in the description ### Expected behavior The task runs without issues.
03-01-2023 18:49:42
03-01-2023 18:49:42
cc @Rocketknight1 <|||||>My bad :(. It was trying to use for GPT-J instead of distilbert-base-cased<|||||>Ah yes - CLM models generally need some modifications to transfer to text classification tasks, including adding a padding token. Their performance is also usually worse. Using a MLM (masked language model, like BERT/RoBERTa/DistilBERT/DeBERTa) base instead will work much better!
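To make the last point concrete, the usual one-line adjustment when reusing a CLM checkpoint (GPT-J here only because it is the model from this thread) for classification-style batching is:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
# GPT-style tokenizers ship without a pad token; reusing EOS lets padded batches be built.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```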
transformers
21,884
closed
Add finetuning task support for GPT-J to 4.27.0.dev
### Feature request I have tried running fine-tuning tasks including QA, summarization and text classification with GPT-J. As of now, only QA could be made to work, with a minor hack to use the distilbert tokenizer. Not sure if this is best. Other tasks do not work for GPT-J. Any help appreciated. Thanks ### Motivation Fine-tuning for LLMs (GPT-J) ### Your contribution I need to understand the HF transformers code to make any contribution. Will submit PRs if I see possible ways to improve.
03-01-2023 18:42:49
03-01-2023 18:42:49
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,883
closed
Fix `WhisperModelTest`
# What does this PR do? ### Fix `test_model_parallelism` for `Whisper` A merged PR, `Add WhisperTokenizerFast` (#21222) (02/21), caused `test_model_parallelism` to start failing. That PR didn't change any relevant test/modeling file, but it changed the model tester's vocab size from `99` to `200`. When I traced it, I found the issue at this place inside `WhisperDecoder` ```python inputs_embeds = self.embed_tokens(input_ids) positions = self.embed_positions(input_ids, ...) hidden_states = inputs_embeds + positions ``` - In the previous commit, the `embed_tokens` and `embed_positions` weight matrices are on the same GPU `1`. - After that PR, one is on GPU `0` and the other on GPU `1`. I fixed the issue by adding ```python # Needs higher percentages after model tester's vocab_size is changed to 200 (PR #21222) model_split_percents = [0.8, 0.9] ``` ### Fix `test_torchscript_*` for `Whisper` PR #21298 added an optional argument `attention_mask` to the `WhisperModel` model classes. The `torchscript` tests need a small change to make this work.
03-01-2023 17:55:11
03-01-2023 17:55:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,882
closed
Fix Gradient checkpointing bug BigBird
# What does this PR do? This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes issue https://github.com/huggingface/transformers/issues/21737 for BigBird. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.(#21737 ) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? cc @gante, @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
03-01-2023 17:24:32
03-01-2023 17:24:32
Hey @saswatmeher -- looking at the diff and our CI, I'd say something went wrong with `make fixup`. My recommendation would be to update the installation on your end (`pip install -e .[dev] --upgrade`), for instance `ruff` got a recent update. And then run `make fixup` again :D<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
21,881
closed
Add check for different embedding types in examples
This PR fixes an issue that occurred because we did two things at the same time: We started transitioning our models to use `keras.Embedding` layers, but also added code to the examples to only resize embeddings when necessary by checking `model.get_input_embeddings().weight.shape`. However, because of the transition, the relevant variable became `model.get_input_embeddings().embeddings` in some cases! To fix this, I added a check for both types of embeddings. When the transition is complete, we can remove the `.weight` code path and only use `.embeddings`. I've checked that all the affected examples run without errors using the command supplied in the README following this PR. Fixes #21865
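Roughly, the shape of the check this PR adds (the model name is only for illustration; the exact example code differs):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFAutoModel.from_pretrained("distilbert-base-cased")

embedding_layer = model.get_input_embeddings()
# Models not yet transitioned expose a `.weight` tensor; ported keras.Embedding layers expose `.embeddings`.
if hasattr(embedding_layer, "weight"):
    current_vocab_size = embedding_layer.weight.shape[0]
else:
    current_vocab_size = embedding_layer.embeddings.shape[0]

if len(tokenizer) > current_vocab_size:
    model.resize_token_embeddings(len(tokenizer))
```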
03-01-2023 16:22:53
03-01-2023 16:22:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,880
closed
[Refactor] Relative imports wherever we can
# What does this PR do? Cleans up our code. Mostly use relative imports with submodules instead of global imports.
03-01-2023 16:19:52
03-01-2023 16:19:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,879
closed
Make loading of pretrained gpt2 faster by avoiding initialization of Conv1D's weights
# What does this PR do?

Currently, `pytorch_utils`' `Conv1D` always runs `normal_` to initialize its weights, even under `init_empty_weights`, which makes loading pretrained gpt2 models slower than necessary. This PR fixes that by reordering `Conv1D`'s initialization so that `normal_` is applied after the weight has been registered as an `nn.Parameter`, avoiding the unnecessary initialization work.

gpt2-xl.py
```py
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "gpt2-xl", torch_dtype=torch.half, low_cpu_mem_usage=True)
```

before
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1815860 function calls (1739023 primitive calls) in 25.421 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   243   19.131    0.079   19.131    0.079  {method 'normal_' of 'torch._C._TensorBase' objects}
  1256    1.820    0.001    1.820    0.001  {method '_set_from_file' of 'torch._C.StorageBase' objects}
  3045    1.427    0.000    1.427    0.000  {method 'to' of 'torch._C._TensorBase' objects}
     2    0.677    0.339    0.677    0.339  {method 'do_handshake' of '_ssl._SSLSocket' objects}
     2    0.380    0.190    0.380    0.190  {method 'read' of '_ssl._SSLSocket' objects}
```

after
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1816052 function calls (1739215 primitive calls) in 5.691 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
  1256    1.791    0.001    1.791    0.001  {method '_set_from_file' of 'torch._C.StorageBase' objects}
  3045    0.892    0.000    0.892    0.000  {method 'to' of 'torch._C._TensorBase' objects}
     2    0.676    0.338    0.676    0.338  {method 'do_handshake' of '_ssl._SSLSocket' objects}
     2    0.402    0.201    0.402    0.201  {method 'connect' of '_socket.socket' objects}
     2    0.373    0.186    0.373    0.186  {method 'read' of '_ssl._SSLSocket' objects}
```

Fixes #21863

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
@sgugger
03-01-2023 16:19:15
03-01-2023 16:19:15
_The documentation is not available anymore as the PR was closed or merged._
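To make the reordering described in the PR above concrete, here is a rough before/after sketch of the idea. It is simplified from the actual `Conv1D` in `transformers.pytorch_utils` (the `std=0.02` value and the omission of the forward pass are illustrative assumptions), but it shows why registering the parameter first lets `init_empty_weights` skip the expensive `normal_` call:

```python
import torch
import torch.nn as nn


class Conv1DBefore(nn.Module):
    """Old ordering: the random init always runs on a real CPU tensor."""

    def __init__(self, nf, nx):
        super().__init__()
        w = torch.empty(nx, nf)
        nn.init.normal_(w, std=0.02)   # expensive, even when the weights are replaced right after
        self.weight = nn.Parameter(w)
        self.bias = nn.Parameter(torch.zeros(nf))


class Conv1DAfter(nn.Module):
    """New ordering: register the parameter first, then initialize it."""

    def __init__(self, nf, nx):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(nx, nf))
        self.bias = nn.Parameter(torch.zeros(nf))
        nn.init.normal_(self.weight, std=0.02)  # under `init_empty_weights`, the parameter lives on
                                                # the meta device, so this init is essentially free
```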
transformers
21,878
closed
Make whisper-event checkpoints compliant to support `return_timestamps`
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO

### Who can help?
@sanchit-gandhi @ArthurZucker

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
Running inference with a Whisper checkpoint fine-tuned before timestamp processing was introduced into transformers returns a rather uninformative error message: `AttributeError: 'GenerationConfig' object has no attribute 'no_timestamps_token_id'`

Minimum steps to reproduce this:
```python
from transformers.pipelines import AutomaticSpeechRecognitionPipeline, pipeline
from datasets import load_dataset

cv11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="test", streaming=True)
pipe = pipeline(model="sanchit-gandhi/whisper-small-hi", return_timestamps=True)
test_sample = {"raw": next(iter(cv11))["audio"]["array"], "sampling_rate": next(iter(cv11))["audio"]["sampling_rate"]}
pipe(test_sample)
```

Colab/notebook: [here](https://github.com/Vaibhavs10/scratchpad/blob/main/pipeline_backward_compatability_test.ipynb)

The above snippet throws the error mentioned above. This problem affects the majority (727) of the checkpoints fine-tuned during the Whisper Event.

P.S. This has been reported by multiple community members, so it's not just me.

### Expected behavior
We should ideally make the `return_timestamps` functionality backwards compatible or throw a more informative error message. Sorry if there is already a way to do this and I am just misinformed.
03-01-2023 15:15:23
03-01-2023 15:15:23
Well, if you are using `return_timestamps = True` you are asking for it 😅 This functionality was introduced afterwards. Let's tell our users that they have to set it in the generation config (when we pop). Otherwise the `generate` function should be able to set a default value depending on whether the model is multilingual or not.
<|||||>Hey hey! Sorry, I did not do a good job of explaining the intent. For a typical developer who doesn't have any clue how these checkpoints were fine-tuned and just wants to use a checkpoint from the Hub for downstream inference only, this poses a challenge. They'd typically just take a checkpoint, throw it into the pipeline and expect it to do its magic: transcribe and provide the timestamps. So my ask here is the following:
1. Is there a way to make the checkpoints trained during the Whisper event compliant with the most recent changes?
2. Can we add a more informative error message so that an average developer knows what to do next?

IMO point 1 is really important, as our library of fine-tuned models is one of the distinguishing factors for us. It'd be less than ideal if we asked the community to fine-tune their checkpoints again to be able to get timestamps. Hope this makes more sense!
<|||||>For 1., I think we can open a PR on all of the Whisper models from the event to add the required generation config, WDYT? 2. can of course be done either in `generate` in the Whisper modelling code or in the logits processor! Makes a lot of sense, thanks for reporting! 👍🏻
<|||||>> 1. I think we can open a PR on all of the whisper models that are from the event to add the required generation config WDYT?

Just to be clear: if I add the `no_timestamps_token_id` to the config, would it work with timestamps without re-finetuning?
<|||||>The model should already be able to produce timestamps without fine-tuning (as it is knowledge from the pretrained model), but it might not be as good as the original pretrained model. You need more than just `no_timestamps_token_id`: you have to use the full `generation_config` that is available on the openai checkpoints. This is required as it is a new behaviour.
<|||||>Hey @ArthurZucker -> Can you maybe provide the steps one needs to take to make the checkpoints compatible? We can then potentially run an auto-PR on all the Whisper checkpoints produced during the whisper-event.
<|||||>You can just do something like
```python
from transformers import GenerationConfig, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("your_pretrained_checkpoint")
generation_config = GenerationConfig.from_pretrained("openai/whisper-base")  # if you are using a multilingual model
model.generation_config = generation_config
model.push_to_hub("your_pretrained_checkpoint", use_auth_token="your_token_if_not_logged_in", create_pr=True)
```
<|||||>Would it not be easier to make changes in the codebase to make it robust to the changes we made to generate (switching to the generation config and adding timestamp prediction)? What we have is currently backwards breaking 🚨 and something we want to avoid.
<|||||>That makes sense, then I'll refrain from the auto-PR and wait for these changes to be merged into `main`. Thank you @sanchit-gandhi & @ArthurZucker <3
<|||||>The main issue is that `generation_config.no_timestamps_token_id` is kind of linked to the model (English or not). We are lucky that all the models are multilingual, but we can't default two values, and breaking changes it is, but we kind of have to.
<|||||>I will add it to the `config` of whisper, it will be easier to deal with that!
<|||||>Edit: I think opening PRs to the relevant repositories will help (it is easier to generate the `generation_config` that way). Also, this is not a problem for backward compatibility, as timestamps are a new feature and are not part of any release yet. However, #21937 is indeed a problem and will be fixed by #21965. In the meantime, I will also add a warning in case `return_timestamps` is used when the generation config is not properly set up, which will refer to the solution I shared here!
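As a reference for the warning mentioned in the last comment, a friendlier check could look roughly like the sketch below. This is a hypothetical illustration rather than the actual change that landed in `transformers`; the helper name and the exact wording of the message are assumptions.

```python
def check_timestamp_support(model, return_timestamps: bool) -> None:
    """Hypothetical guard: fail early with an actionable message instead of an AttributeError."""
    if return_timestamps and getattr(model.generation_config, "no_timestamps_token_id", None) is None:
        raise ValueError(
            "return_timestamps=True requires `no_timestamps_token_id` in the model's generation config. "
            "For checkpoints fine-tuned before timestamp support existed, attach the generation config "
            "of the matching pretrained model, e.g. "
            "model.generation_config = GenerationConfig.from_pretrained('openai/whisper-base')."
        )
```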
transformers
21,877
closed
__init__() got an unexpected keyword argument 'int8_threshold'
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-4.15.0-177-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

Also tried on Colab:
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
Taken from the notebook named "HuggingFace meets bitsandbytes for lighter models on GPU for inference":
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

name = "bigscience/bloom-3b"
model_8bit_thresh_4 = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True, int8_threshold=4.0)
```
The error is: `TypeError: __init__() got an unexpected keyword argument 'int8_threshold'`

### Expected behavior
No error when setting the threshold.
03-01-2023 15:04:41
03-01-2023 15:04:41
You need to use the `quantization_config` to set any argument related to 8-bit loading, cc @younesbelkada
<|||||>Thanks, I tried and got: `dispatch_model() got an unexpected keyword argument 'offload_index'`... I thought offloading to CPU defaults to false? I use `quantization_config = BitsAndBytesConfig(llm_int8_threshold=4.0)` and
```
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
```
`device_map` is `auto`.
<|||||>You probably need to upgrade your Accelerate lib to fix this error.
<|||||>Mea culpa, I used one of my old containers on my GPU server, upgraded accelerate and it works as expected... Thanks a lot for your fast support!
<|||||>Even after I upgraded accelerate, it is throwing me the same error. @DHOFM can you please tell me what version of accelerate you are using in your container?
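For anyone landing on this thread, a consolidated sketch of the pattern that resolved it is shown below. It is written against a recent `transformers`/`accelerate`, and the exact placement of the 8-bit arguments has shifted slightly across versions, so treat it as a guide rather than the one definitive call; the model name is simply the one used in the report above.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

name = "bigscience/bloom-3b"

# llm_int8_threshold replaces the removed int8_threshold keyword argument
quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=4.0)

model_8bit = AutoModelForCausalLM.from_pretrained(
    name,
    device_map="auto",
    quantization_config=quantization_config,
)
```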
transformers
21,876
closed
[WIP] Flax EfficientNet
# What does this PR do?

Following PR https://github.com/huggingface/transformers/pull/21563 by [alaradirik](https://github.com/alaradirik), this adds the corresponding Flax model.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
- Flax: sanchit-gandhi
03-01-2023 15:02:38
03-01-2023 15:02:38
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21876). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sanchit-gandhi was not properly tagged.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Feel free to re-open if you want to finish this one off @Shubhamai! Otherwise leaving closed for now.
transformers
21,875
open
Add SpikeGPT model
### Model description

**Abstract:**
> As the size of large language models continue to scale, so does the computational resources required to run it. Spiking neural networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverage sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the RWKV language model, we successfully implement 'SpikeGPT', a generative language model with pure binary, event-driven spiking activation units. We train the proposed model on three model variants: 45M, 125M and 260M parameters. To the best of our knowledge, this is 4x larger than any functional backprop-trained SNN to date. We achieve this by modifying the transformer block to replace multi-head self attention to reduce quadratic computational complexity to linear with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on tested benchmarks, while maintaining 5x less energy consumption when processed on neuromorphic hardware that can leverage sparse, event-driven activations.

Concretely, it is a GPT model using Receptance Weighted Key Value (RWKV) instead of regular attention, and an adapted FFN layer.

### Open source status
- [X] The model implementation is available
- [ ] The model weights are available

### Provide useful links for the implementation
[Paper](https://arxiv.org/abs/2302.13939) | [Code](https://github.com/ridgerchu/SpikeGPT)

Author: @ridgerchu
03-01-2023 13:43:38
03-01-2023 13:43:38
Thanks for your interest in our work! The checkpoint weights of the 120M SpikeGPT are available [now](https://huggingface.co/ridger/SpikeGPT-BookCorpus/blob/main/BookCorpus-SpikeGPT.pth), but just for debugging and playing with the model.<|||||>I've read the paper, this model looks really cool 👍
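To make the abstract's "linear with increasing sequence length" point more concrete, here is a rough, simplified sketch of the RWKV-style recurrence that SpikeGPT builds on. It deliberately omits the spiking activation units, the bonus term for the current token, and the numerical-stability tricks used in the real implementations, so it is an illustration of the general idea rather than the paper's actual algorithm; the function name and tensor shapes are assumptions.

```python
import torch


def rwkv_style_mixing(k: torch.Tensor, v: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Toy linear-time token mixing in the spirit of RWKV.

    k, v: (T, C) key and value sequences; w: (C,) positive per-channel decay.
    Each step only updates running numerator/denominator states, so the cost is
    O(T * C) instead of the O(T^2 * C) of full self-attention, and tokens can be
    streamed in one at a time.
    """
    T, C = k.shape
    num = torch.zeros(C)  # decayed running sum of exp(k_t) * v_t
    den = torch.zeros(C)  # decayed running sum of exp(k_t)
    outputs = []
    for t in range(T):
        num = torch.exp(-w) * num + torch.exp(k[t]) * v[t]
        den = torch.exp(-w) * den + torch.exp(k[t])
        outputs.append(num / (den + 1e-8))
    return torch.stack(outputs)
```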