| column | type | range |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 – 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 – 487 |
| body | stringlengths | 0 – 234k |
| created_at | stringlengths | 19 – 19 |
| closed_at | stringlengths | 19 – 19 |
| comments | stringlengths | 0 – 293k |
transformers
24,198
closed
Generate: detect special architectures when loaded from PEFT
# What does this PR do? Fixes #23686 As identified in [this comment](https://github.com/huggingface/transformers/issues/23686#issuecomment-1587285715), a PEFT-loaded BLOOM can't be used as an assistant with assisted generation. BLOOM (and GPTBigCode) need special handling due to their different cache API, and the architecture detection code was incompatible with PEFT models. This PR adds the logic to detect these special architectures when loaded with PEFT.
06-12-2023 13:55:44
06-12-2023 13:55:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24198). All of your documentation changes will be reflected on that endpoint.
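For context, the kind of check involved looks roughly like the sketch below. This is not the PR's code; it assumes the assistant model may arrive wrapped in a PEFT `PeftModel` exposing `get_base_model()`, and that the special-cased architectures can be identified via `config.model_type`.

```python
# Hypothetical sketch, not the actual implementation in this PR.
def needs_special_cache_handling(assistant_model):
    base = assistant_model
    if hasattr(base, "get_base_model"):
        # PEFT wraps the transformers model, so unwrap it before inspecting the config
        base = base.get_base_model()
    # BLOOM and GPTBigCode use a different past_key_values layout than other decoders
    return base.config.model_type in ("bloom", "gpt_bigcode")
```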
transformers
24,197
closed
Fix steps bugs in no trainer examples
# What does this PR do? Fixes #24186 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger
06-12-2023 13:33:52
06-12-2023 13:33:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,195
closed
Fix device issue in `OpenLlamaModelTest::test_model_parallelism`
# What does this PR do? See the comments in the changes. Currently, CI has a failure ```bash src/transformers/models/open_llama/modeling_open_llama.py:740: in forward logits = torch.einsum("blh,vh->blv", hidden_states, self.model.embed_tokens.weight) ... ... RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_bmm) ```
06-12-2023 12:24:31
06-12-2023 12:24:31
_The documentation is not available anymore as the PR was closed or merged._
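The failure comes from an einsum whose two operands ended up on different GPUs under model parallelism. The pattern for making such a line device-safe is sketched below; this is an illustration of the general idea, not necessarily the exact change made in the PR.

```python
import torch

# Standalone illustration: keep both einsum operands on the same device before combining them.
hidden_states = torch.randn(2, 5, 8, device="cuda:1" if torch.cuda.device_count() > 1 else "cpu")
embed_weight = torch.randn(10, 8)  # lives on cpu / cuda:0 in the failing scenario

logits = torch.einsum("blh,vh->blv", hidden_states, embed_weight.to(hidden_states.device))
print(logits.shape)  # torch.Size([2, 5, 10])
```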
transformers
24,194
closed
ONNX model conversion error
### System Info - `transformers` version: 4.30.1 - Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.10.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no I additionally have: - onnx==1.12.0 - protobuf==3.19.6 ### Who can help? @lewtun ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is a follow up to the task #19320 where I tried to export the model mdeberta model to onnx format. I saw that there is an issue with similar model configurations #16841, but still does not work for me. Please can someone review this? This is a sample code ``` from collections import OrderedDict from typing import Mapping from pathlib import Path from transformers.onnx import export from transformers.onnx import OnnxConfig from transformers import AutoTokenizer, AutoModel, AutoConfig config = AutoConfig.from_pretrained('microsoft/mdeberta-v3-base') base_model = AutoModel.from_pretrained('microsoft/mdeberta-v3-base') tokenizer = AutoTokenizer.from_pretrained('microsoft/mdeberta-v3-base') class DebertaConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: 'sequ_length'}), ("attention_mask", {0: "batch", 1: 'sequ_length'}), ("token_lengths", {0: 'sent-count'}), ("word_ids", {0: "batch", 1: 'sequ_length'}), ] ) @property def outputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("token_embeddings", {0: 'sent-count', 1: 'max_token_count', 2: 'token_embedding_size'}), ] ) onnx_config = DebertaConfig(config) onnx_path = Path('mdeberta.onxx') onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, 13, onnx_path) ``` The code raise the following exception: > File "/.../site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 136, in symbolic > g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(self.type().dtype()).min)) > AttributeError: 'torch._C.TensorType' object has no attribute 'dtype' ### Expected behavior Serialized model in onnx format
06-12-2023 11:19:33
06-12-2023 11:19:33
cc @michaelbenayoun <|||||>I ran this script and didn't get any error, maybe because I don't have a GPU. I put a breakpoint before the relevant line and found this: ``` (Pdb) self attention_scores defined in (%attention_scores : Float(*, *, *, *, strides=[768, 64, 8, 1], requires_grad=0, device=cpu) = onnx::Reshape(%612, %629), scope: transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Model::/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Encoder::encoder/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Layer::layer.0/transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Attention::attention/transformers.models.deberta_v2.modeling_deberta_v2.DisentangledSelfAttention::self # /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:735:0 ) (Pdb) self.type().dtype() torch.float32 (Pdb) type(self.type()) is torch._C.TensorType is torch.TensorType True (Pdb) torch._C <module 'torch._C' from '/home/alex/work/transformers/venv/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so'> ``` <details> <summary>Click to show all the warnings I got which may or may not be relevant</summary> 2023-06-20 18:59:48.309993: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory 2023-06-20 18:59:48.310119: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory 2023-06-20 18:59:48.310134: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. /home/alex/work/transformers/src/transformers/convert_slow_tokenizer.py:457: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text. warnings.warn( Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:561: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. torch.tensor(mid - 1).type_as(relative_pos), /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. 
torch.ceil(torch.log(abs_pos / mid) / torch.log(torch.tensor((max_position - 1) / mid)) * (mid - 1)) + mid /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:724: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:724: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:803: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. scale = torch.sqrt(torch.tensor(pos_key_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:803: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). scale = torch.sqrt(torch.tensor(pos_key_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:815: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. scale = torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:815: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). scale = torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor) /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:816: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if key_layer.size(-2) != query_layer.size(-2): /home/alex/work/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:112: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. 
output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min)) </details><|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,193
closed
Update `GPTNeoXLanguageGenerationTest`
# What does this PR do? Due to the changes in [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped/commits/main), we have to update the expected output value, even though the new value doesn't look great 😭 (The previous revisions seem to have disappeared... I guess they rewrote the commit history on their Hub repo.)
06-12-2023 10:34:35
06-12-2023 10:34:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24193). All of your documentation changes will be reflected on that endpoint.
transformers
24,190
closed
Fix `Wav2Vec2` CI OOM
# What does this PR do? After #23813, we get OOM for `tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool` (when running the whole `wav2vec2` test suite) Just doing some cleaning up and no more OOM.
06-12-2023 09:18:15
06-12-2023 09:18:15
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,189
closed
Problems while Running ImageGPT
### System Info - `transformers` version: 4.28.1 - Platform: Linux-4.15.0-197-generic-x86_64-with-glibc2.27 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I tried to run the code from the official example: https://huggingface.co/docs/transformers/model_doc/imagegpt#transformers.ImageGPTForCausalImageModeling ```python from transformers import AutoImageProcessor, ImageGPTForCausalImageModeling import torch import matplotlib.pyplot as plt import numpy as np image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small") model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) # unconditional generation of 8 images batch_size = 8 context = torch.full((batch_size, 1), model.config.vocab_size - 1) # initialize with SOS token context = torch.tensor(context).to(device) output = model.generate( input_ids=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40 ) clusters = image_processor.clusters height = image_processor.size["height"] width = image_processor.size["width"] samples = output[:, 1:].cpu().detach().numpy() #Error line below samples_img = [ np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [height, width, 3]).astype(np.uint8) for s in samples ] # convert color cluster tokens back to pixels f, axes = plt.subplots(1, batch_size, dpi=300) for img, ax in zip(samples_img, axes): ax.axis("off") ax.imshow(img) ``` Error on line: ```samples_img = [...]``` on the part ```clusters[s]```. This is because: ```samples.shape = (8,1024)``` At that point, ```s.shape = (1024,)```. So, ```Clusters[s]``` cannot index properly. **Error Message**: TypeError: only integer scalar arrays can be converted to a scalar index ### Expected behavior It should plot 8 different predictions from the ImageGPT Model. ![image](https://github.com/huggingface/transformers/assets/33897366/504978be-f708-4b67-b28a-5a82a01f0c50)
06-12-2023 09:15:56
06-12-2023 09:15:56
@chinge55 Thanks for reporting! Indeed, the example assumed the clusters were a numpy array, but they are stored as a list of lists in the image processor. I've opened a PR to resolve this.
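On the user side the example can be patched with a one-line conversion; a minimal sketch, assuming the rest of the snippet from the issue above is unchanged:

```python
import numpy as np

# `image_processor.clusters` is a list of lists, so convert it to an array first;
# fancy indexing `clusters[s]` then works as the example intends.
clusters = np.array(image_processor.clusters)  # shape (n_clusters, 3)
samples_img = [
    np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [height, width, 3]).astype(np.uint8)
    for s in samples
]
```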
transformers
24,188
closed
Update `WhisperForAudioClassification` doc example
# What does this PR do? After the config file change in Hub commit `a7a63ecc2bd1015783dead844fced2af7531edd2` on `sanchit-gandhi/whisper-medium-fleurs-lang-id`, the doc example for `WhisperForAudioClassification` has to be updated. Currently the test fails due to a different expected output value.
06-12-2023 08:09:54
06-12-2023 08:09:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,187
closed
Fix push to hub
# What does this PR do? The previous PR #23920 wasn't correct: line 715 should not have been included. This PR fixes that.
06-12-2023 08:05:11
06-12-2023 08:05:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,186
closed
In examples, completed steps are not correct when loading from a checkpoint
### System Info https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/examples/pytorch/language-modeling/run_clm_no_trainer.py#L575 Should be: ```python completed_steps = resume_step // args.gradient_accumulation_steps ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Enable grad acc. 1. Save the checkpoint. 1. Load from it. ### Expected behavior The `completed_steps` should be divided by grad acc steps.
06-12-2023 07:29:24
06-12-2023 07:29:24
Would you like to make a PR with a fix?<|||||>> Would you like to make a PR with a fix? Done! https://github.com/huggingface/transformers/pull/24197
transformers
24,185
closed
GPT2 jit trace ERROR
### System Info version transformers: 4.29.1 pytorch: 2.0.0+cu117 python: 3.8.8 code: ``` from transformers import GPT2Tokenizer, GPT2Model import torch import transformers print(transformers.__version__) print(torch.__version__) model_name = 'gpt2' input_text = ["nice to meet you " * 63 + "hello gpt."] tokenizer = GPT2Tokenizer.from_pretrained(model_name) inputs = tokenizer(input_text, return_tensors="pt") input_tuple = (inputs['input_ids'], inputs['attention_mask']) model = GPT2Model.from_pretrained(model_name, torchscript=True).eval() traced_model = torch.jit.trace(model, input_tuple) print(traced_model.graph) ``` ERROR: ``` Traceback (most recent call last): File "test_gpt.py", line 17, in <module> traced_model = torch.jit.trace(model, input_tuple) File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 794, in trace return trace_module( File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1056, in trace_module module._c._create_method_from_trace( File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward result = self.forward(*input, **kwargs) File "/home/hjl/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 800, in forward past_length = past_key_values[0][0].size(-2) IndexError: Dimension specified as -2 but tensor has no dimensions ``` ### Who can help? @ArthurZucker @youn ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code: ```python from transformers import GPT2Tokenizer, GPT2Model import torch import transformers print(transformers.__version__) print(torch.__version__) model_name = 'gpt2' input_text = ["nice to meet you " * 63 + "hello gpt."] tokenizer = GPT2Tokenizer.from_pretrained(model_name) inputs = tokenizer(input_text, return_tensors="pt") input_tuple = (inputs['input_ids'], inputs['attention_mask']) model = GPT2Model.from_pretrained(model_name, torchscript=True).eval() traced_model = torch.jit.trace(model, input_tuple) print(traced_model.graph) ``` ERROR: ```python Traceback (most recent call last): File "test_gpt.py", line 17, in <module> traced_model = torch.jit.trace(model, input_tuple) File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 794, in trace return trace_module( File "/home/hjl/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1056, in trace_module module._c._create_method_from_trace( File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/hjl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward result = self.forward(*input, **kwargs) File "/home/hjl/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 800, in forward past_length = past_key_values[0][0].size(-2) IndexError: Dimension specified as -2 but tensor has no dimensions ``` ### Expected behavior torch.jit.trace(gpt2model)
06-12-2023 06:51:55
06-12-2023 06:51:55
Update `input_tuple = (inputs['input_ids'], inputs['attention_mask'])` to `input_tuple = inputs['input_ids']`.
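To spell that suggestion out: the second positional argument of `GPT2Model.forward` is `past_key_values`, so when the trace inputs are passed as a tuple the attention mask appears to be consumed as a malformed cache, which is what the `past_key_values[0][0].size(-2)` failure in the traceback points at. A working variant of the repro that traces with `input_ids` only:

```python
from transformers import GPT2Tokenizer, GPT2Model
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer(["nice to meet you " * 63 + "hello gpt."], return_tensors="pt")

model = GPT2Model.from_pretrained("gpt2", torchscript=True).eval()
# Pass only input_ids positionally; a second positional tensor would be taken as past_key_values.
traced_model = torch.jit.trace(model, (inputs["input_ids"],))
print(traced_model.graph)
```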
transformers
24,184
closed
typo: fix typos in CONTRIBUTING.md and deepspeed.mdx
# What does this PR do? Fix the following two typos + Missing line break after the third item in the Pull Request Checklist section of the CONTRIBUTING.md. + A typo error in the deepspeed.mdx file ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
06-12-2023 04:12:05
06-12-2023 04:12:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24184). All of your documentation changes will be reflected on that endpoint.
transformers
24,183
closed
Question in load datasets of train seq2seq model
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I changed the official example script's data loading to my own code: ```python if data_args.data_path is not None: print(data_args.data_path) raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir) raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True) ``` The processing steps are as follows: 1. Resolving data files 2. Downloading data files 3. Computing checksums 4. Downloading data files 5. Extracting data files 6. Generating train split What caused the significant slowdown in step six? ![image](https://github.com/huggingface/transformers/assets/19569322/557c99cd-21fa-4805-a1a7-83fe8ecd8530) ### Expected behavior It should load fast, need at least 1000+ > Generating train split: 388773 examples [32:24:45, 1574.04 examples/s]
06-12-2023 03:44:40
06-12-2023 03:44:40
I think this is just CPU workers spinning up; you can verify by asking on the datasets repo, where you can get dedicated datasets help: https://github.com/huggingface/datasets<|||||>Yes, I have already asked on the datasets repo but got no response yet<|||||>Feel free to gently ping `mariosasko` (he's usually very responsive)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,182
closed
About fine-tuning Whisper with multiple GPUs
### Feature request I want to fine-tune Whisper with 4 GPUs; what should I do? ### Motivation Fine-tuning Whisper with multiple GPUs ### Your contribution Fine-tuning Whisper with multiple GPUs
06-12-2023 03:20:22
06-12-2023 03:20:22
Hi @LYPinASR, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,181
closed
Update README_zh-hans.md
Update the documentation link
06-12-2023 02:28:03
06-12-2023 02:28:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,180
closed
Bring back Transformer-encoder, LSTM-decoder models
### Feature request HuggingFace used to support (albeit, with bugs), a "Model2LSTM" class that was closed with [this ticket](https://github.com/huggingface/transformers/issues/2849) / [this PR](https://github.com/huggingface/transformers/pull/2968). While [the code](https://github.com/huggingface/transformers/blob/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1/src/transformers/modeling_encoder_decoder.py#L335) was buggy, I don't think deleting it was the right decision. ### Motivation Ultimately, I guess, the motivation is... this is a real thing that exists and it seems to be within the scope of the `transformers` project. Concretely, this came up because I wanted to try and implement such a model, and discovered that `transformers` used to support it, but doesn't anymore. ### Your contribution I'm willing to make code contributions towards this, and already began an exploratory analysis of the code base to see if this is possible, but as a team member I'm not fully aware of the reasons this was dropped in the first place, and what limits there might be on its feasibility.
06-12-2023 02:05:23
06-12-2023 02:05:23
Hi @Ubadub, There's many reasons a model might be removed or code deleted. Every piece of code in the library requires maintenance from us, and so it's not possible to support everything. We decide what to keep, remove or deprecate based on maintenance burden and how impactful it is for the community. The linked PRs and issues are from over 3 years ago, and to the best of my knowledge, the LSTM decoder models haven't been requested by other users since their deletion. The great thing about open source is that anyone can build upon this library! If you're interested in adding this capability, you're welcome to develop in your own fork and share it here or on the forums for other users to find and use. It's now also possible to [share models directly on the hub](https://huggingface.co/docs/transformers/custom_models). <|||||>Hi @amyeroberts , Thanks for your reply! > The great thing about open source is that anyone can build upon this library! If you're interested in adding this capability, you're welcome to develop in your own fork and share it here or on the forums for other users to find and use. As mentioned, I have been looking into making the requisite changes myself with the hope of opening a PR for it. I opened this ticket first partly to see if others might express support/interest, and also in case there was some obvious or overt reason to not include such a model that might come up. > It's now also possible to [share models directly on the hub](https://huggingface.co/docs/transformers/custom_models). Also a helpful suggestion, thank you.<|||||>@Ubadub At the moment, it's unlikely we'll merge this into the library unless we see a lot of demand from the community (which we'll measure through 👍's on this issue). In this case, it's still possible to develop on a fork and share your work by linking to it here, but it's not necessary to open a PR. You might also be interested in our recently [added RWKV model](https://huggingface.co/docs/transformers/model_doc/rwkv), which rework the traditional transformer attention so that it can be used as an RNN. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,179
closed
Loading a tokenizer from the Tokenizers library doesn't transfer over padding/truncation behavior correctly
### System Info Not especially relevant, but included for completeness: - `transformers` version: 4.29.2 - Platform: macOS-13.2-x86_64-i386-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (n.b.: originally posted a similar query in the [transformers forum](https://discuss.huggingface.co/t/padding-not-working-when-loading-a-tokenizer-trained-via-the-tokenizers-library-into-transformers/42326/1) but got no answer there.) I trained a simple WhitespaceSplit/WordLevel tokenizer using the `tokenizers` library. I added padding by calling `enable_padding(pad_token="<pad>")` on the Tokenizer instance. Then I saved it to a JSON file and then loaded it into transformers using [the instructions here](https://huggingface.co/docs/transformers/fast_tokenizers): ```py fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` When using the `tokenizers.Tokenizer` object directly, `encode` correctly adds the padding tokens. However, if I try padding when tokenizing using the `PreTrainedTokenizerFast` instance, I get the exception: ```py ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` Sure enough, if I follow the instructions and add the pad token as a special token, it works. Alternatively, I can pass the argument `pad_token="<pad>"` to the `PreTrainedTokenizerFast` constructor call, to the same effect. To reproduce the problem, you can use the code below. Most of it is from the [tokenizers Quicktour](https://huggingface.co/docs/tokenizers/quicktour), so you'll need to download the data files as per the instructions there (or modify `files` if using your own files). The rest is from the official transformers docs on [how to load a tokenizer from `tokenizers` into `transformers`](https://huggingface.co/docs/transformers/fast_tokenizers): ```py from tokenizers import BpeTrainer, Tokenizer from tokenizers.models import BPE from tokenizers.pre_tokenizers import Whitespace from transformers import PreTrainedTokenizerFast files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]] sentences = ["Hello, y'all!", "How are you 😁 ?"] tokenizer = Tokenizer(BPE(unk_token="[UNK]")) trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.pre_tokenizer = Whitespace() tokenizer.train(files, trainer) # Enable padding tokenizer.enable_padding(pad_id=3, pad_token="[PAD]") # Now use this tokenizer to tokenize a couple of sentences. 
output = tokenizer.encode_batch(sentences) # The output is padded, as it should be: print(output[0].tokens) # ['Hello', ',', 'y', "'", 'all', '!'] print(output[1].tokens) # ['How', 'are', 'you', '[UNK]', '?', '[PAD]'] # But now let's say we load the tokenizer into transformers- let's try loading it directly from the tokenizer object: fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) # Tokenize two strings of different token length with padding fast_output = fast_tokenizer(sentences, padding=True) ``` This gives us the error: ``` Using pad_token, but it is not set yet. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2548, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2634, in _call_one return self.batch_encode_plus( File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2816, in batch_encode_plus padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( File "/Users/apatil/anaconda3/envs/lm-training/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2453, in _get_padding_truncation_strategies raise ValueError( ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` We can resolve the issue by explicitly specifying the special tokens when initializing the `PreTrainedTokenizerFast`: ```py fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer, pad_token="[PAD]", unk_token="[UNK]") # Now padding works as expected fast_output = fast_tokenizer(sentences, padding=True) print(fast_output[0].tokens) # ['Hello', ',', 'y', "'", 'all', '!'] print(fast_output[1].tokens) # ['How', 'are', 'you', '[UNK]', '?', '[PAD]'] ``` The code above uses the `tokenizer_object` parameter to load the fast tokenizer as a `PreTrainedTokenizerFast` instance, but as you can confirm for yourselves, the same behavior occurs if you first save the tokenizer to file, then load it into `PreTrainedTokenizerFast` using the `tokenizer_file` parameter instead. **First, I wanted to check- am I doing something wrong/missing something? Or is this just how it works?** If the latter, as follows, an explanation of how I feel it should work and why. ### Expected behavior I understand that I can get the desired behavior by either: 1. Add the pad token as a special token i.e. `fast_tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. 2. Alternatively, I can pass the argument `pad_token='[PAD]'` to the `PreTrainedTokenizerFast` constructor call, to the same effect. But I want the tokenizer to work *out of the box identically as the `tokenizer.Tokenizer` instance does* (to the extent that is reasonably possible), including in terms of padding behavior I find it confusing and awkward that I have to enable padding for the `tokenizer.Tokenizer` instance, and then *again* for the `PreTrainedTokenizerFast` instance. Imagine if your system architecture/workflow has two entirely different processes for tokenizing a document vs. 
training a model on it using `transformers` (as I imagine is often the case for people). Then you would need to hardcode the pad token in both locations, and if for some reason you wanted to change it, also update it in both locations. On the other hand, if `PreTrainedTokenizerFast` really behaved exactly like the fast tokenizer it was created from, the training code could be entirely agnostic to how the tokenizer was created. All it would need was a path to the saved tokenizer config, and it could proceed without needing to know anything else. This is the behavior I think most people would naturally expect. It could make sense to keep the `pad_token` parameter in the `PreTrainedTokenizerFast` *as an optional override*, or for cases where the fast tokenizer didn't have a padding token set, but the default should be to copy over the padding behavior as-is. Put another way, the tokenizer object/config file should uniquely determine the tokenization behavior of a tokenizer, whether it is a `tokenizers.Tokenizer` instance or its equivalent `PreTrainedTokenizerFast` (to the extent it can; I understand some misalignment is probably inevitable, but this seems not to be one of those cases). **Bottom line:** If the padding information is already in the tokenizer (or in the saved tokenizer config file), you should not need to explicitly specify the padding token again when transferring the tokenizer. This introduces a lot of totally unnecessary friction and leads to brittle code. The tokenizer object/config should be self-contained (i.e. I should not need to hardcode the pad token in two places), and information already encapsulated in the tokenizer object or its saved config file should be preserved on transfer. EDIT: I later observed that the same behavior is true of truncation. See my followup comment for what I believe to be the responsible section of code.
06-12-2023 00:24:24
06-12-2023 00:24:24
The responsible bit of code seems to be [in the `set_truncation_and_padding` function](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_fast.py#L319) of `PreTrainedTokenizerFast`. And it affects truncation, too, not padding. Basically, this function erases the backend fast tokenizer's truncation/padding strategy, and then adds in the user-supplied overrides (e.g. as passed via `encode`). This seems to me to be the opposite of what we want. The truncation/padding strategy should *start* with the backend fast tokenizer's truncation/padding strategy as-is, and if override arguments are provided, then those and only those arguments should be selectively overriden. Do the devs have thoughts on this? I am working on implementing the changes that I am proposing, but curious if there is a reason it was done this way. Also, the function docstring for that function says > The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer set a padding / truncation strategy before, then it will be reset to no padding / truncation when exiting the managed section. What does "managed section" refer to?<|||||>Hey! Thanks for reporting, I’ll have a look asap <|||||>I should be able to get to this in the coming weeks, I don't think the fix is complicated as you isolated well. <|||||>Hey! Started working on this, the problem is in the initialization. This is tricky because potentially breaking. Will be adding tests to make sure this is fixed<|||||>Fixed it! Make sure to use the latest version of transformers
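Until the fix landed, one way to avoid hard-coding the pad token in two places was to read the padding settings back off the backend tokenizer when building the fast wrapper. A sketch, assuming the `tokenizers` `Tokenizer.padding` property returns the enabled padding parameters as a dict (and `None` when padding is disabled):

```python
from transformers import PreTrainedTokenizerFast

# Sketch of a workaround: reuse the backend tokenizer's own padding settings
# instead of repeating the pad token by hand.
padding = tokenizer.padding or {}
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    pad_token=padding.get("pad_token"),  # stays None if padding was never enabled
    unk_token="[UNK]",
)
```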
transformers
24,177
closed
Generate: force caching on the main model, in assisted generation
# What does this PR do? Fixes #23686 Caching is a requirement in assisted generation -- we even check for it ([here](https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/src/transformers/generation/utils.py#L1485)). However, it was still possible for the main model to run without cache. This PR fixes it.
06-11-2023 18:04:31
06-11-2023 18:04:31
_The documentation is not available anymore as the PR was closed or merged._
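For callers, the practical upshot is that the main model runs with caching whenever an assistant is attached; a minimal, hedged example of the call shape (the checkpoints are placeholders; any compatible main/assistant pair sharing a tokenizer works):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large")
assistant = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
# use_cache=True on the main model is what this PR now enforces internally.
outputs = model.generate(**inputs, assistant_model=assistant, use_cache=True, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```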
transformers
24,171
open
self_attention_mask
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.4.17-2136.318.7.1.el7uek.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sg ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I call the `bigcode/starcoderbase` model, the output comes `nan`. I debugged the code and found that it happens in the `modeling_gpt_bigcode.py` where it gets the attention output in `GPTBigCodeAttention._attn()`. Moving back to find the root of the issue I find that the `attention-mask` is edited in `GPTBigCodeModel.forward()` function as shown below. I could not understand the reason behind this modification of the `attention_mask` so though better I ask it here to see if it is intended or if there is any bug in that. ``` # Self-attention mask. query_length = input_shape[-1] key_length = past_length + query_length self_attention_mask = self.bias[None, key_length - query_length : key_length, :key_length] if attention_mask is not None: self_attention_mask = self_attention_mask * attention_mask.view(batch_size, 1, -1).to( dtype=torch.bool, device=self_attention_mask.device ) # MQA models: (batch_size, query_length, n_heads, key_length) # MHA models: (batch_size, n_heads, query_length, key_length) attention_mask = self_attention_mask.unsqueeze(2 if self.multi_query else 1) ``` Note that this modification results in attention mask like `[False, False, ...., False]` which using it in the `_attn()` functions results in: ``` attn_weights = torch.where(attention_mask, attn_weights, mask_value) attn_weights[1,1,1,:].max() = -\infty attn_weights = softmax(attn_weights, dim=-1) attn_weights[1,1,1,:].max() = [nan, ..., nan] (size of 2048) ``` ### Expected behavior As above.
06-11-2023 16:42:39
06-11-2023 16:42:39
cc @ArthurZucker @younesbelkada <|||||>Hey @oroojlooy, thanks for opening an issue. Sorry, I didn't really have time to dive into this; could you try with the latest version of `transformers`? Could you also share your script showing how you called the model! 🤗
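To illustrate the mechanism described in the report (independent of whether the mask construction itself is at fault): when every key position of a query row is masked out, all of its scores become the mask value, and a softmax over a row of `-inf` is undefined, which is where the `nan`s appear. A small self-contained demonstration:

```python
import torch

attn_weights = torch.randn(1, 1, 1, 4)                      # (batch, heads, query, key)
attention_mask = torch.zeros(1, 1, 1, 4, dtype=torch.bool)  # every key masked out for this query

masked = torch.where(attention_mask, attn_weights, torch.tensor(float("-inf")))
print(masked.max())            # -inf, matching the attn_weights reported in the issue
print(masked.softmax(dim=-1))  # tensor([[[[nan, nan, nan, nan]]]])
```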
transformers
24,170
closed
AttributeError: `'str' object` has no attribute 'dtype'
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.31 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) Versions of relevant libraries: [pip3] numpy==1.23.0 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [pip3] torchvision==0.15.2 [conda] blas 1.0 mkl [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2023.1.0 h6d00ec8_46342 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] mkl_fft 1.3.6 py311ha02d727_1 [conda] mkl_random 1.2.2 py311ha02d727_1 [conda] numpy 1.23.0 pypi_0 pypi [conda] pytorch 2.0.1 py3.11_cuda11.8_cudnn8.7.0_0 pytorch [conda] pytorch-cuda 11.8 h7e8668a_5 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.0.2 py311_cu118 pytorch [conda] torchtriton 2.0.0 py311 pytorch [conda] torchvision 0.15.2 py311_cu118 pytorch ### Who can help? @sgugger @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## ERROR: train_result = `trainer.train`(resume_from_checkpoint=checkpoint) ... python3.11/site-packages/transformers/`feature_extraction_sequence_utils.py`", line 220, in pad if value.dtype is np.dtype(np.float64): ^^^^^^^^^^^ AttributeError:` 'str' object `has no attribute 'dtype' I am not sure which element of the dataset is read as 'str' ### 1. OFFICIAL SCRIPT: [transformers/examples/pytorch/audio-classification/run_audio_classification.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md) ### 2. LOADED DATASET: DatasetDict({ train: Dataset({ features: [`'audio',` `'label'`], num_rows: 1280 }) validation: Dataset({ features: ['audio', 'label'], num_rows: 160 }) test: Dataset({ features: ['audio', 'label'], num_rows: 160 }) ### 3. logger.info(raw_datasets['train'][0]) {'audio': {`'path'`: '/transformers/examples/pytorch/audio-classification/s/data/s/s/train/audio1.wav', `'array'`: array([0.02072144, 0.02767944, 0.03274536, ..., 0.00079346, 0.00088501, 0.00149536]), 'sampling_rate': 16000}, `'label'`: 'happy'} ### Expected behavior load the dataset to model for training in train_result = `trainer.train`(resume_from_checkpoint=checkpoint)
06-11-2023 13:18:30
06-11-2023 13:18:30
Hi @flckv, In the issue info above, you mention using the official script, however it appears a custom dataset is being used. So that we can best help you, could you share a reproducible code snippet and full traceback of the error encountered? As mentioned in a previous issue - #24143 - for general questions on how to adapt a script to a custom use case, please use the [forums](https://discuss.huggingface.co/). We try to reserve github issues for bugs and feature requests. <|||||>@amyeroberts thanks for the reply ## 1. Reproducible code snippet is [here](https://gist.github.com/flckv/0e01c9ee1167f2d1af18b811e98194c6#file-run_audio_classification-py-L258), the highlighted line shows the approach I used to load audio files with metadata csvs **Dataset structure:** ![image](https://github.com/huggingface/transformers/assets/103381497/ceb7daf1-5216-44e8-a66c-7ebab8a35410) command: > python run_audio_classification.py \ > --model_name_or_path facebook/wav2vec2-base \ > --output_dir l/users/flck/outputs/wav2vec2-base-s \ > --overwrite_output_dir \ > --remove_unused_columns False \ > --do_train \ > --do_eval \ > --fp16 \ > --learning_rate 3e-5 \ > --max_length_seconds 1 \ > --attention_mask False \ > --warmup_ratio 0.1 \ > --num_train_epochs 5 \ > --per_device_train_batch_size 32 \ > --gradient_accumulation_steps 4 \ > --per_device_eval_batch_size 32 \ > --dataloader_num_workers 4 \ > --logging_strategy steps \ > --logging_steps 10 \ > --evaluation_strategy epoch \ > --save_strategy epoch \ > --load_best_model_at_end True \ > --metric_for_best_model accuracy \ > --save_total_limit 3 \ > --seed 0 \ > --push_to_hub \ > --use_auth_token True ## 2. Full traceback of the error encountered is [here](https://gist.github.com/flckv/3cb6179b15e3571ea7421002ab65c5c2#file-error-py-L206 ), the highlighted lines show the log of the loaded dataset example <|||||>Hi @flckv, thanks for sharing that information. Looking at this it doesn't look like a bug in our code, but rather the dataset creation. As mentioned above, this is really a question for our forums. I'll give a suggestion below of where to begin, but unfortunately we don't have time to help you debug your own custom code. To get things working I suggest two things: * Checking that the unaltered script runs with the default values as shown [in the README](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu). If this doesn't work, then there might be an issue on our side. In which case, report that in a new issue please. * Check that your loaded dataset and the dataset used in the example are comparable e.g. ```python from datasets import load_dataset # Load in the default dataset from the example example_dataset = load_dataset("superb", "asr", split="train") # Load in my dataset my_dataset = load_dataset("audiofolder", data_dir="/home/flck/hf38/transformers/examples/pytorch/audio-classification/s/data/s/s/", split="train") # Inspect the datasets print(example_dataset) print(my_dataset) # Check the values for a specific column that will be used during training print(example_dataset['audio']) print(my_dataset['audio']) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,169
closed
NLLB trunc translation
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.2 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` #!/bin/python3 import sys,os; from transformers import AutoTokenizer, AutoModelForSeq2SeqLM; import torch; #NLLB_MODEL="facebook/nllb-200-3.3B"; #NLLB_MODEL="facebook/nllb-200-distilled-600M"; NLLB_MODEL="facebook/nllb-200-distilled-1.3B"; tokenizer = AutoTokenizer.from_pretrained(NLLB_MODEL); model = AutoModelForSeq2SeqLM.from_pretrained(NLLB_MODEL); device = torch.device("cuda" if torch.cuda.is_available() else "cpu"); model = model.to(device); def translate(lang:str, text:str): inputs = tokenizer(text,return_tensors="pt").to(device); tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[lang], max_new_tokens=4096); texts=tokenizer.batch_decode(tokens, skip_special_tokens=True); return texts[0]; if __name__=="__main__": import readline; text="""La Voz de Galicia es el cuarto periódico generalista de España. Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020. En el territorio gallego es la cabecera hegemónica. Su edición digital es la primera web informativa de la comunidad."""; text=translate("eng_Latn", text); print(text); ``` Its output is "La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios of 2020." witch is only the first and second line. The same output when remove the carriage returns. ### Expected behavior Translate all the text, not only the first and second line, for example: "_**La Voz de Galicia is the fourth largest generalist newspaper in Spain. It has a readership of 492,000 readers throughout the country, according to data from the first wave of the General Media Study 2020.** In Galicia, it is the leading newspaper. Its digital edition is the leading news website in the region._"
06-11-2023 12:31:16
06-11-2023 12:31:16
Hi @FranPuentes I tried to play a bit with the model and its behavior is quite interesting. If you managed to fit everything in a single line (without `.`) the model seems to successfully translate the entire sentence but it seems that the model stops generating (in your case) after the second sentence. Also I advise you to run the generation in lower precision such as in 4bit so that you can use the largest model (if you run the script under a GPU device) Below is the script I played with (4bit model after installing bitsandbytes) ```python # pip install bitsandbytes import sys,os; from transformers import AutoTokenizer, AutoModelForSeq2SeqLM; import torch; NLLB_MODEL="facebook/nllb-200-3.3B"; #NLLB_MODEL="facebook/nllb-200-distilled-600M"; # NLLB_MODEL="facebook/nllb-200-distilled-1.3B"; tokenizer = AutoTokenizer.from_pretrained(NLLB_MODEL); model = AutoModelForSeq2SeqLM.from_pretrained(NLLB_MODEL, torch_dtype=torch.float16, load_in_4bit=True); device = torch.device("cuda" if torch.cuda.is_available() else "cpu"); # model = model.to(device); def translate(lang:str, text:str): inputs = tokenizer(text,return_tensors="pt").to(device); tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id[lang], max_new_tokens=4096); texts=tokenizer.batch_decode(tokens, skip_special_tokens=True); return texts[0]; if __name__=="__main__": import readline; text="""La Voz de Galicia es el cuarto periódico generalista de España. Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020. En el territorio gallego es la cabecera hegemónica. Su edición digital es la primera web informativa de la comunidad."""; text=translate("eng_Latn", text); print(text); >>> La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios de 2020. text="""La Voz de Galicia es el cuarto periódico generalista de España. Posee una audiencia de 492.000 lectores en todo el país, según datos de la primera oleada del Estudio General de Medios de 2020, en el territorio gallego es la cabecera hegemónica, su edición digital es la primera web informativa de la comunidad."""; text=translate("eng_Latn", text); print(text); >>> La Voz de Galicia is the fourth generalist newspaper in Spain. It has an audience of 492,000 readers in the whole country, according to data from the first wave of the Estudio General de Medios de 2020, in the Galician territory it is the hegemonic headquarters, its digital edition is the first informative web of the community. ``` I am really not sure about this behaviour, maybe it is related to the way NLLB models have been trained <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,168
closed
Update the RWKV documentation with fixes to spelling and wording
# What does this PR do? This PR fixes and performs the following in the RWKV documentation: - Various words spelt incorrectly - Grammatical issues - Remove unnecessary verbosity in places <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-11-2023 12:10:42
06-11-2023 12:10:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24168). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,167
closed
Transformers
null
06-11-2023 11:46:20
06-11-2023 11:46:20
transformers
24,166
closed
Decoding with skip_special_tokens=True doesn't remove pad token
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: - ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction collab nb - https://colab.research.google.com/drive/1VH-ZVITqJ5k6umvMXvQc2Suxzf6KEXyX?usp=sharing ### Expected behavior I expected the <pad> token to not be present in the output, just the processed text
06-11-2023 10:25:13
06-11-2023 10:25:13
cc @ArthurZucker <|||||>@ArthurZucker can confirm, but I suspect the pad token of the tokenizer is probably not defined correctly. In the notebook you have: ```python print(tokenizer.pad_token,tokenizer.pad_token_id) >>> [PAD] 32100 ``` and in the generated text: ```python print(decoded_outputs) >>> ['<pad> bhubaneswar</s>', '<pad> Hippos are large animals</s><pad>'] ``` You probably need to update the tokenizer with the correct pad and eos tokens, in your case respectively `<pad>` and `</s>`<|||||>As mentioned by Younes, the padding token is not correct: `tokenizer.additional_special_tokens` does not have the `<pad>` token, and `tokenizer.pad_token` is not set to `<pad>`. Thus the token is an `AddedToken`, which is why you can see it in `tokenizer.added_tokens_encoder` for example, but it is not a special token, so it is not skipped. <|||||>Thanks for the prompt reply, cool. So this is an issue with the model's tokenizer not having the right initial configuration? Fixing it either by the [author on the model repo](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) or by the user doing the following step is the immediate remediation for this: `tokenizer.add_special_tokens({'pad_token' : "<pad>"})` I can confirm that this fixes the default behaviour
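A minimal sketch of the fix confirmed above, assuming the same checkpoint as in the thread: register `<pad>` as the tokenizer's special pad token so that `skip_special_tokens=True` actually strips it on decode.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/fastchat-t5-3b-v1.0")
tokenizer.add_special_tokens({"pad_token": "<pad>"})

# pad a short input so the decoded text would otherwise contain `<pad>` tokens
ids = tokenizer("Hippos are large animals", padding="max_length", max_length=16)["input_ids"]
print(tokenizer.decode(ids, skip_special_tokens=True))  # padding is now stripped
```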
transformers
24,161
closed
DetrAttention `is_decoder` is not defined
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/detr/modeling_detr.py#LL502C9-L502C19 The `is_decoder` parameter is in the init function, but it is not referenced in the body, while in `DetrDecoderLayer` this parameter is initialized as `True`. I'm confused and not sure whether something is missing in `DetrAttention`.
06-11-2023 07:00:02
06-11-2023 07:00:02
cc @NielsRogge <|||||>Yes, seems like a left-over from copying from another model. Feel free to open a PR to remove the `is_decoder` parameter
transformers
24,159
closed
GPT2ForQuestionAnswering: how to use?
https://github.com/huggingface/transformers/blob/8f093fb799246f7dd9104ff44728da0c53a9f67a/docs/source/en/model_doc/gpt2.mdx?plain=1#L116 The documentation generated for this question answering model is complete nonsense: https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2ForQuestionAnswering.forward.example It does not produce any meaningful results and the tensors are hardcoded: ``` # target is "nice puppet" target_start_index = torch.tensor([14]) target_end_index = torch.tensor([15]) ```
06-10-2023 19:48:56
06-10-2023 19:48:56
Hi @mkschreder, thanks for raising this issue. In terms of the issue title - how to use - there's a more in-depth guide about question-answering in the [task documentation](https://huggingface.co/docs/transformers/v4.30.0/en/tasks/question_answering#question-answering) and [NLP course](https://huggingface.co/learn/nlp-course/chapter7/7?fw=pt). The snippets in the documentation are meant to be minimal examples so that users can get started and understand the model's API. Sometimes values are hardcoded to keep the example short, however we certainly don't want them to be incorrect or confusing. If there's particular improvements or fixes you'd like to see, we're always happy to review a PR :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
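As a concrete illustration of the point above (a sketch, not the official docs snippet): the hardcoded target indices in such examples can be replaced by locating the tokenized answer span inside the input. The question/context strings below are placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, context, return_tensors="pt")

# find the answer tokens instead of hardcoding start/end indices;
# the leading space matters for GPT-2's byte-level BPE
answer_ids = tokenizer(" nice puppet", add_special_tokens=False)["input_ids"]
input_ids = inputs["input_ids"][0].tolist()
for i in range(len(input_ids) - len(answer_ids) + 1):
    if input_ids[i : i + len(answer_ids)] == answer_ids:
        target_start_index, target_end_index = i, i + len(answer_ids) - 1
        break
print(target_start_index, target_end_index)
```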
transformers
24,156
closed
🌐 [i18n-KO] Fixed `tutorial/preprocessing.mdx`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixed some words and expressions to align with the latest translation works. This PR is a revision to apply key terms and phrases that have been established over the course of translating several documents. Here are the main changes: - `dataset` : `데이터셋` -> `데이터 세트` - `truncation` : `생략` -> `잘라내기` - `train` : `학습` -> `훈련` Some other sentences have also been modified to read more naturally. Thank you in advance for your review! Fixes #22578 Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (번역 누락/중복 검사) - [x] Grammar Check (맞춤법 검사) - [x] Review or Add new terms to glossary (용어 확인 및 추가) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview로 정상작동 확인) ## Who can review? (Initial) Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-10-2023 16:41:03
06-10-2023 16:41:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,155
closed
TypeError: Repository.__init__() got an unexpected keyword argument 'private'
### System Info env: ``` ❯ conda list List of packages in environment: "/home/john/micromamba/envs/nlpcourse" Name Version Build Channel ────────────────────────────────────────────────────────────────────────────────────────────── _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge abseil-cpp 20211102.0 hd4dd3e8_0 anaconda/pkgs/main absl-py 1.3.0 py310h06a4308_0 anaconda/pkgs/main accelerate 0.19.0 pyhd8ed1ab_0 anaconda/cloud/conda-forge aiohttp 3.8.3 py310h5eee18b_0 anaconda/pkgs/main aiosignal 1.2.0 pyhd3eb1b0_0 anaconda/pkgs/main appdirs 1.4.4 pyhd3eb1b0_0 anaconda/pkgs/main arrow 1.2.3 py310h06a4308_1 anaconda/pkgs/main arrow-cpp 8.0.0 py310h3098874_1 anaconda/pkgs/main asttokens 2.0.5 pyhd3eb1b0_0 anaconda/pkgs/main async-timeout 4.0.2 py310h06a4308_0 anaconda/pkgs/main attrs 22.1.0 py310h06a4308_0 anaconda/pkgs/main aws-c-common 0.4.57 he6710b0_1 anaconda/pkgs/main aws-c-event-stream 0.1.6 h2531618_5 anaconda/pkgs/main aws-checksums 0.1.9 he6710b0_0 anaconda/pkgs/main aws-sdk-cpp 1.8.185 hce553d0_0 anaconda/pkgs/main backcall 0.2.0 pyhd3eb1b0_0 anaconda/pkgs/main beautifulsoup4 4.12.2 py310h06a4308_0 anaconda/pkgs/main binaryornot 0.4.4 pyhd3eb1b0_1 anaconda/pkgs/main blas 1.0 mkl anaconda/pkgs/main boost-cpp 1.73.0 h7f8727e_12 anaconda/pkgs/main brotlipy 0.7.0 py310h7f8727e_1002 anaconda/pkgs/main bzip2 1.0.8 h7b6447c_0 anaconda/pkgs/main c-ares 1.19.1 hd590300_0 conda-forge ca-certificates 2023.05.30 h06a4308_0 anaconda/pkgs/main certifi 2023.5.7 py310h06a4308_0 anaconda/pkgs/main cffi 1.15.1 py310h5eee18b_3 anaconda/pkgs/main chardet 4.0.0 py310h06a4308_1003 anaconda/pkgs/main charset-normalizer 2.0.4 pyhd3eb1b0_0 anaconda/pkgs/main click 8.1.3 unix_pyhd8ed1ab_2 conda-forge comm 0.1.2 py310h06a4308_0 anaconda/pkgs/main cookiecutter 1.7.3 pyhd3eb1b0_0 anaconda/pkgs/main cryptography 39.0.1 py310h9ce1e76_0 anaconda/pkgs/main cuda-cudart 11.8.89 0 nvidia cuda-cupti 11.8.87 0 nvidia cuda-libraries 11.8.0 0 nvidia cuda-nvrtc 11.8.89 0 nvidia cuda-nvtx 11.8.86 0 nvidia cuda-runtime 11.8.0 0 nvidia dataclasses 0.8 pyh6d0b6a4_7 anaconda/pkgs/main datasets 2.12.0 py310h06a4308_0 anaconda/pkgs/main debugpy 1.5.1 py310h295c915_0 anaconda/pkgs/main decorator 5.1.1 pyhd3eb1b0_0 anaconda/pkgs/main dill 0.3.6 pyhd8ed1ab_1 conda-forge evaluate 0.4.0 py310h06a4308_0 anaconda/pkgs/main executing 0.8.3 pyhd3eb1b0_0 anaconda/pkgs/main ffmpeg 4.3 hf484d3e_0 pytorch filelock 3.9.0 py310h06a4308_0 anaconda/pkgs/main freetype 2.12.1 h4a9f257_0 anaconda/pkgs/main frozenlist 1.3.3 py310h5eee18b_0 anaconda/pkgs/main fsspec 2023.5.0 pyh1a96a4e_0 conda-forge gflags 2.2.2 he1b5a44_1004 conda-forge giflib 5.2.1 h5eee18b_3 anaconda/pkgs/main glog 0.6.0 h6f12383_0 conda-forge gmp 6.2.1 h295c915_3 anaconda/pkgs/main gmpy2 2.1.2 py310heeb90bb_0 anaconda/pkgs/main gnutls 3.6.15 he1e5248_0 anaconda/pkgs/main grpc-cpp 1.46.1 h33aed49_1 anaconda/pkgs/main huggingface_hub 0.14.1 py310h06a4308_0 anaconda/pkgs/main icu 58.2 he6710b0_3 anaconda/pkgs/main idna 3.4 py310h06a4308_0 anaconda/pkgs/main importlib-metadata 6.6.0 pyha770c72_0 conda-forge importlib_metadata 6.6.0 hd8ed1ab_0 conda-forge intel-openmp 2023.1.0 hdb19cb5_46305 anaconda/pkgs/main ipykernel 6.19.2 py310h2f386ee_0 anaconda/pkgs/main ipython 8.12.0 py310h06a4308_0 anaconda/pkgs/main ipywidgets 8.0.4 py310h06a4308_0 anaconda/pkgs/main jedi 0.18.1 py310h06a4308_1 anaconda/pkgs/main jinja2 3.1.2 py310h06a4308_0 anaconda/pkgs/main jinja2-time 0.2.0 pyhd3eb1b0_3 anaconda/pkgs/main joblib 1.2.0 pyhd8ed1ab_0 conda-forge jpeg 9e 
h5eee18b_1 anaconda/pkgs/main jupyter_client 8.1.0 py310h06a4308_0 anaconda/pkgs/main jupyter_core 5.3.0 py310h06a4308_0 anaconda/pkgs/main jupyterlab_widgets 3.0.5 py310h06a4308_0 anaconda/pkgs/main keyutils 1.6.1 h166bdaf_0 conda-forge krb5 1.19.4 h568e23c_0 anaconda/pkgs/main lame 3.100 h7b6447c_0 anaconda/pkgs/main lcms2 2.12 h3be6417_0 anaconda/pkgs/main ld_impl_linux-64 2.38 h1181459_1 anaconda/pkgs/main lerc 3.0 h295c915_0 anaconda/pkgs/main libabseil 20211102.0 cxx17_h48a1fff_3 anaconda/cloud/conda-forge libboost 1.73.0 h28710b8_12 anaconda/pkgs/main libbrotlicommon 1.0.9 h166bdaf_8 conda-forge libbrotlidec 1.0.9 h166bdaf_8 conda-forge libbrotlienc 1.0.9 h166bdaf_8 conda-forge libcrc32c 1.1.2 h9c3ff4c_0 conda-forge libcublas 11.11.3.6 0 nvidia libcufft 10.9.0.58 0 nvidia libcufile 1.6.1.9 0 nvidia libcurand 10.3.2.106 0 nvidia libcurl 7.88.1 h91b91d3_0 anaconda/pkgs/main libcusolver 11.4.1.48 0 nvidia libcusparse 11.7.5.86 0 nvidia libdeflate 1.17 h5eee18b_0 anaconda/pkgs/main libedit 3.1.20221030 h5eee18b_0 anaconda/pkgs/main libev 4.33 h516909a_1 conda-forge libevent 2.1.12 h8f2d780_0 anaconda/pkgs/main libffi 3.4.4 h6a678d5_0 anaconda/pkgs/main libgcc-ng 12.2.0 h65d4601_19 conda-forge libgfortran-ng 11.2.0 h00389a5_1 anaconda/pkgs/main libgfortran5 11.2.0 h1234567_1 anaconda/pkgs/main libgomp 12.2.0 h65d4601_19 conda-forge libiconv 1.16 h7f8727e_2 anaconda/pkgs/main libidn2 2.3.4 h5eee18b_0 anaconda/pkgs/main libnghttp2 1.46.0 hce63b2e_0 anaconda/pkgs/main libnpp 11.8.0.86 0 nvidia libnsl 2.0.0 h7f98852_0 conda-forge libnuma 2.0.16 h0b41bf4_1 conda-forge libnvjpeg 11.9.0.86 0 nvidia libpng 1.6.39 h5eee18b_0 anaconda/pkgs/main libprotobuf 3.20.3 he621ea3_0 anaconda/pkgs/main libsodium 1.0.18 h7b6447c_0 anaconda/pkgs/main libsqlite 3.42.0 h2797004_0 conda-forge libssh2 1.10.0 h8f2d780_0 anaconda/pkgs/main libstdcxx-ng 12.2.0 h46fd767_19 conda-forge libtasn1 4.19.0 h5eee18b_0 anaconda/pkgs/main libthrift 0.15.0 hcc01f38_0 anaconda/pkgs/main libtiff 4.5.0 h6a678d5_2 anaconda/pkgs/main libunistring 0.9.10 h27cfd23_0 anaconda/pkgs/main libutf8proc 2.8.0 h166bdaf_0 conda-forge libuuid 1.41.5 h5eee18b_0 anaconda/pkgs/main libwebp 1.2.4 h11a3e52_1 anaconda/pkgs/main libwebp-base 1.2.4 h5eee18b_1 anaconda/pkgs/main libzlib 1.2.13 h166bdaf_4 conda-forge lz4-c 1.9.4 h6a678d5_0 anaconda/pkgs/main markupsafe 2.1.1 py310h7f8727e_0 anaconda/pkgs/main matplotlib-inline 0.1.6 py310h06a4308_0 anaconda/pkgs/main mkl 2023.1.0 h6d00ec8_46342 anaconda/pkgs/main mkl-service 2.4.0 py310h5eee18b_1 anaconda/pkgs/main mkl_fft 1.3.6 py310h1128e8f_1 anaconda/pkgs/main mkl_random 1.2.2 py310h1128e8f_1 anaconda/pkgs/main mpc 1.1.0 h10f8cd9_1 anaconda/pkgs/main mpfr 4.0.2 hb69a4c5_1 anaconda/pkgs/main mpmath 1.2.1 py310h06a4308_0 anaconda/pkgs/main multidict 6.0.2 py310h5eee18b_0 anaconda/pkgs/main multiprocess 0.70.14 py310h5764c6d_3 conda-forge ncurses 6.4 h6a678d5_0 anaconda/pkgs/main nest-asyncio 1.5.6 py310h06a4308_0 anaconda/pkgs/main nettle 3.7.3 hbbd107a_1 anaconda/pkgs/main networkx 2.8.4 py310h06a4308_1 anaconda/pkgs/main nltk 3.7 pyhd3eb1b0_0 anaconda/pkgs/main numpy 1.24.3 py310h5f9d8c6_1 anaconda/pkgs/main numpy-base 1.24.3 py310hb5e798b_1 anaconda/pkgs/main openh264 2.1.1 h4ff587b_0 anaconda/pkgs/main openssl 1.1.1t h7f8727e_0 anaconda/pkgs/main orc 1.7.4 hb3bc3d3_1 anaconda/pkgs/main packaging 23.1 pyhd8ed1ab_0 conda-forge pandas 2.0.2 py310h7cbd5c2_0 conda-forge parso 0.8.3 pyhd3eb1b0_0 anaconda/pkgs/main pexpect 4.8.0 pyhd3eb1b0_3 anaconda/pkgs/main pickleshare 0.7.5 pyhd3eb1b0_1003 
anaconda/pkgs/main pillow 9.4.0 py310h6a678d5_0 anaconda/pkgs/main pip 23.0.1 py310h06a4308_0 anaconda/pkgs/main platformdirs 2.5.2 py310h06a4308_0 anaconda/pkgs/main pooch 1.4.0 pyhd3eb1b0_0 anaconda/pkgs/main poyo 0.5.0 pyhd3eb1b0_0 anaconda/pkgs/main prompt-toolkit 3.0.36 py310h06a4308_0 anaconda/pkgs/main protobuf 3.20.3 py310h6a678d5_0 anaconda/pkgs/main psutil 5.9.0 py310h5eee18b_0 anaconda/pkgs/main ptyprocess 0.7.0 pyhd3eb1b0_2 anaconda/pkgs/main pure_eval 0.2.2 pyhd3eb1b0_0 anaconda/pkgs/main pyarrow 8.0.0 py310h468efa6_0 anaconda/pkgs/main pycparser 2.21 pyhd3eb1b0_0 anaconda/pkgs/main pygments 2.15.1 py310h06a4308_1 anaconda/pkgs/main pyopenssl 23.0.0 py310h06a4308_0 anaconda/pkgs/main pysocks 1.7.1 py310h06a4308_0 anaconda/pkgs/main python 3.10.11 h7a1cb2a_2 anaconda/pkgs/main python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-slugify 5.0.2 pyhd3eb1b0_0 anaconda/pkgs/main python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py310h1fa729e_0 conda-forge python_abi 3.10 2_cp310 anaconda/cloud/conda-forge pytorch 2.0.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch pytorch-cuda 11.8 h7e8668a_5 pytorch pytorch-mutex 1.0 cuda pytorch pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py310h5764c6d_5 conda-forge pyzmq 25.0.2 py310h6a678d5_0 anaconda/pkgs/main rdma-core 28.9 h59595ed_1 conda-forge re2 2022.04.01 h295c915_0 anaconda/pkgs/main readline 8.2 h5eee18b_0 anaconda/pkgs/main regex 2023.5.5 py310h2372a71_0 conda-forge requests 2.29.0 py310h06a4308_0 anaconda/pkgs/main responses 0.13.3 pyhd3eb1b0_0 anaconda/pkgs/main rouge-score 0.1.2 pyhd8ed1ab_0 anaconda/cloud/conda-forge s2n 1.3.33 hae46d1a_0 anaconda/cloud/conda-forge sacremoses 0.0.53 pyhd8ed1ab_0 conda-forge scikit-learn 1.2.2 py310h6a678d5_1 anaconda/pkgs/main scipy 1.10.1 py310h5f9d8c6_1 anaconda/pkgs/main sentencepiece 0.1.99 py310hdb19cb5_0 anaconda/pkgs/main setuptools 67.8.0 py310h06a4308_0 anaconda/pkgs/main six 1.16.0 pyhd3eb1b0_1 anaconda/pkgs/main snappy 1.1.10 h9fff704_0 conda-forge soupsieve 2.4 py310h06a4308_0 anaconda/pkgs/main sqlite 3.41.2 h5eee18b_0 anaconda/pkgs/main stack_data 0.2.0 pyhd3eb1b0_0 anaconda/pkgs/main sympy 1.11.1 py310h06a4308_0 anaconda/pkgs/main tbb 2021.8.0 hdb19cb5_0 anaconda/pkgs/main text-unidecode 1.3 pyhd3eb1b0_0 anaconda/pkgs/main threadpoolctl 2.2.0 pyh0d69192_0 anaconda/pkgs/main tk 8.6.12 h1ccaba5_0 anaconda/pkgs/main tokenizers 0.11.4 py310h3dcd8bd_1 anaconda/pkgs/main torchaudio 2.0.2 py310_cu118 pytorch torchtriton 2.0.0 py310 pytorch torchvision 0.15.2 py310_cu118 pytorch tornado 6.2 py310h5eee18b_0 anaconda/pkgs/main tqdm 4.65.0 py310h2f386ee_0 anaconda/pkgs/main traitlets 5.7.1 py310h06a4308_0 anaconda/pkgs/main transformers 4.24.0 py310h06a4308_0 anaconda/pkgs/main typing-extensions 4.5.0 py310h06a4308_0 anaconda/pkgs/main typing_extensions 4.5.0 py310h06a4308_0 anaconda/pkgs/main tzdata 2023c h04d1e81_0 anaconda/pkgs/main ucx 1.14.1 hf587318_2 anaconda/cloud/conda-forge unidecode 1.2.0 pyhd3eb1b0_0 anaconda/pkgs/main urllib3 1.26.16 py310h06a4308_0 anaconda/pkgs/main utf8proc 2.6.1 h27cfd23_0 anaconda/pkgs/main wcwidth 0.2.5 pyhd3eb1b0_0 anaconda/pkgs/main wheel 0.38.4 py310h06a4308_0 anaconda/pkgs/main widgetsnbextension 4.0.5 py310h06a4308_0 anaconda/pkgs/main xxhash 0.8.1 h0b41bf4_0 conda-forge xz 5.4.2 h5eee18b_0 anaconda/pkgs/main yaml 0.2.5 h7f98852_2 conda-forge yarl 1.8.1 py310h5eee18b_0 anaconda/pkgs/main zeromq 4.3.4 h2531618_0 anaconda/pkgs/main zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 h166bdaf_4 conda-forge zstd 1.5.5 hc292b87_0 anaconda/pkgs/main 
``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I have the following problem when practicing Summarization section of chapter 7 in NLP Course [Summarization - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter7/5?fw=pt#summarization) ```py args = Seq2SeqTrainingArguments( output_dir=f"{model_name}-finetuned-amazon-en-es", evaluation_strategy="epoch", learning_rate=5.6e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=3, num_train_epochs=num_train_epochs, predict_with_generate=True, logging_steps=logging_steps, push_to_hub=True, ) from transformers import Seq2SeqTrainer trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) ``` ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[32], line 3 1 from transformers import Seq2SeqTrainer ----> 3 trainer = Seq2SeqTrainer( 4 model, 5 args, 6 train_dataset=tokenized_datasets["train"], 7 eval_dataset=tokenized_datasets["validation"], 8 data_collator=data_collator, 9 tokenizer=tokenizer, 10 compute_metrics=compute_metrics, 11 ) File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/transformers/trainer.py:489, in Trainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics) 487 # Create clone of distant repo and output directory if needed 488 if self.args.push_to_hub: --> 489 self.init_git_repo(at_init=True) 490 # In case of pull, we need to make sure every process has the latest. 491 if is_torch_tpu_available(): File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/transformers/trainer.py:3284, in Trainer.init_git_repo(self, at_init) 3281 repo_name = get_full_repo_name(repo_name, token=self.args.hub_token) 3283 try: -> 3284 self.repo = Repository( 3285 self.args.output_dir, 3286 clone_from=repo_name, 3287 use_auth_token=use_auth_token, 3288 private=self.args.hub_private_repo, 3289 ) 3290 except EnvironmentError: 3291 if self.args.overwrite_output_dir and at_init: 3292 # Try again after wiping output_dir File ~/micromamba/envs/nlpcourse/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.._inner_fn(*args, **kwargs) 117 if check_use_auth_token: 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) --> 120 return fn(*args, **kwargs) TypeError: Repository.__init__() got an unexpected keyword argument 'private' ``` anyone could give a help to fix it ? ### Expected behavior normal
06-10-2023 10:19:23
06-10-2023 10:19:23
Hi @heavenkiller2018, Could you try updating the version of transformers in your environment to the latest release? `pip install -U transformers` It seems there's a mismatch between the huggingface_hub and transformers packages in your environment. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
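If it is unclear which package is out of date, here is a quick diagnostic sketch (just an illustration, not an official API): check whether the installed `huggingface_hub.Repository` still accepts the `private` keyword that the older `Trainer` passes.

```python
import inspect

from huggingface_hub import Repository

accepts_private = "private" in inspect.signature(Repository.__init__).parameters
print(accepts_private)
# False means the installed huggingface_hub dropped the kwarg, so transformers
# needs to be upgraded (pip install -U transformers) and the kernel restarted.
```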
transformers
24,152
closed
/gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number
### System Info /usr/local/lib/python3.8/dist-packages/pandas/core/computation/expressions.py:20: UserWarning: Pandas requires version '2.7.3' or newer of 'numexpr' (version '2.7.2' currently installed). from pandas.core.computation.check import NUMEXPR_INSTALLED Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.29.2 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0a0+1767026 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi @ArthurZucker @younesbelkada @sgugger, Environment ``` enroot 3.4.1 pyxis 0.7.0 slurm slurm-wlm 19.05.5 Ubuntu 20.04 NeMo docker image: nvcr.io+ea-bignlp+nemofw-training+23.04.1-py3.sqsh ``` When I do the pile dataset preprocessing. The following error occurs. ``` Traceback (most recent call last): File "/opt/NeMo/nemo/collections/common/tokenizers/huggingface/auto_tokenizer.py", line 74, in __init__ self.tokenizer = AUTOTOKENIZER.from_pretrained( File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 643, in from_pretrained tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 487, in get_tokenizer_config resolved_config_file = cached_file( File "/usr/local/lib/python3.8/dist-packages/transformers/utils/hub.py", line 417, in cached_file resolved_file = hf_hub_download( File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download metadata = get_hf_file_metadata( File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 1532, in get_hf_file_metadata r = _request_wrapper( File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper response = _request_wrapper( File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper return http_backoff( File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_http.py", line 212, in http_backoff response = session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.8/dist-packages/requests/adapters.py", line 514, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:2635)'))) ``` I've tried downgrade request==2.19.1 and certifi==2018.8.13 but still failed. 
When I run a script below: test_ssl.py ``` import ssl print(ssl.OPENSSL_VERSION) ``` Then, ``` # python test_ssl.py OpenSSL 1.1.1f 31 Mar 2020 ``` Then execute below ``` python -c "from transformers import AutoTokenizer; tok_gpt=AutoTokenizer.from_pretrained('gpt2');" ``` It will download the gpt2 tokenizer like this: ``` root@nf5688m7-1:/workspace# tree ~/.cache/huggingface/hub/ /root/.cache/huggingface/hub/ ├── models--gpt2 │   ├── blobs │   │   ├── 10c66461e4c109db5a2196bff4bb59be30396ed8 │   │   ├── 1f1d9aaca301414e7f6c9396df506798ff4eb9a6 │   │   ├── 226b0752cac7789c48f0cb3ec53eda48b7be36cc │   │   └── 4b988bccc9dc5adacd403c00b4704976196548f8 │   ├── refs │   │   └── main │   └── snapshots │   └── e7da7f221d5bf496a48136c0cd264e630fe9fcc8 │   ├── config.json -> ../../blobs/10c66461e4c109db5a2196bff4bb59be30396ed8 │   ├── merges.txt -> ../../blobs/226b0752cac7789c48f0cb3ec53eda48b7be36cc │   ├── tokenizer.json -> ../../blobs/4b988bccc9dc5adacd403c00b4704976196548f8 │   └── vocab.json -> ../../blobs/1f1d9aaca301414e7f6c9396df506798ff4eb9a6 └── version.txt ``` How can I solve the SSL problem when download gpt2 tokenizer.json and make it download to /gpt2/resolve/main/tokenizer_config.json automatically? Thanks Aaron ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Environment ``` enroot 3.4.1 pyxis 0.7.0 slurm slurm-wlm 19.05.5 Ubuntu 20.04 NeMo docker image: nvcr.io+ea-bignlp+nemofw-training+23.04.1-py3.sqsh ``` 2. python -c "from transformers import AutoTokenizer; tok_gpt=AutoTokenizer.from_pretrained('gpt2');" 3. How can I solve the SSL problem when download gpt2 tokenizer.json and make it download to /gpt2/resolve/main/tokenizer_config.json automatically? ### Expected behavior download gpt2 tokenizer.json and make it download to /gpt2/resolve/main/tokenizer_config.json
06-10-2023 03:30:17
06-10-2023 03:30:17
The error seems like a temporary failure on the Hub, your code does not error on my side. As for downloading in another folder you need to use the `cache_dir` argument to change the cache location.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
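A minimal sketch of the `cache_dir` suggestion above (the directory path is just an example): point `from_pretrained` at a custom cache location instead of the default `~/.cache/huggingface/hub`.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", cache_dir="/data/hf_cache")
print(tok("hello world")["input_ids"])
```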
transformers
24,151
closed
[tests] fix bitsandbytes import issue
This test has been failing when the `peft` library was installed, since it tries to access `bitsandbytes.nn`: ``` $ pip install accelerate peft bitsandbytes==0.39.0 $ RUN_SLOW="yes" pytest -sv tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb [...] RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its traceback): E module 'bitsandbytes' has no attribute 'nn' ``` Here is why it's happening: 1. We push `transformers/tests` into `sys.path` when running the subprocess-based tests [here](https://github.com/huggingface/transformers/blob/deff5979fee1f989d26e4946c92a5c35ce695af8/src/transformers/testing_utils.py#L1226) 2. But there is a `transformers/tests/bitsandbytes` dir under `transformers/tests` 3. So when you import `bitsandbytes.nn`, Python finds the wrong `bitsandbytes` dir, which is not the bnb library, and it breaks So this PR renames `transformers/tests/bitsandbytes` to `transformers/tests/bnb`, which removes the conflict and fixes the failing test.
06-09-2023 23:00:29
06-09-2023 23:00:29
_The documentation is not available anymore as the PR was closed or merged._
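For illustration, a toy sketch of the shadowing mechanism described in the PR above. It assumes the local `transformers/tests/bitsandbytes` directory ends up earlier on `sys.path` than the installed library; under that assumption the attribute access fails roughly as in the reported error.

```python
import sys

# roughly what the subprocess-based test runner used to do (see the PR body)
sys.path.insert(0, "transformers/tests")  # this directory contains `bitsandbytes/`

import bitsandbytes  # resolves to the local test directory, not the bnb library

# a peft-style attribute access then fails, e.g.
# "module 'bitsandbytes' has no attribute 'nn'"
print(hasattr(bitsandbytes, "nn"))
```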
transformers
24,150
closed
Training auto cancelling
### System Info Transformers version: 4.31.0.dev0 Platform: Google Colab Environment configuration: TPU, GPU(V100) Python version: Python 3.10.12 ### Who can help? @patil-suraj @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ! python ./transformers/examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-large \ --do_train \ --do_eval \ --source_lang fr \ --target_lang en \ --source_prefix "translate French to English: " \ --dataset_name wmt14 \ --dataset_config_name fr-en \ --output_dir ./tmp/T5-large-fr-en \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --push_to_hub=True ### Expected behavior The training script should continue, but it cancels itself as if someone pressed the Ctrl+C button. Here is the output: ``` {'loss': 1.6703, 'learning_rate': 4.9998367482177885e-05, 'epoch': 0.0} 0% 1000/30627537 [1:00:01<26918:05:16, 3.16s/it][INFO|trainer.py:2926] 2023-06-09 19:06:01,861 >> Saving model checkpoint to ./tmp/tst-translation/checkpoint-1000 [INFO|configuration_utils.py:458] 2023-06-09 19:06:01,863 >> Configuration saved in ./tmp/tst-translation/checkpoint-1000/config.json [INFO|configuration_utils.py:364] 2023-06-09 19:06:01,863 >> Configuration saved in ./tmp/tst-translation/checkpoint-1000/generation_config.json [INFO|modeling_utils.py:1853] 2023-06-09 19:06:09,160 >> Model weights saved in ./tmp/tst-translation/checkpoint-1000/pytorch_model.bin [INFO|tokenization_utils_base.py:2194] 2023-06-09 19:06:09,162 >> tokenizer config file saved in ./tmp/tst-translation/checkpoint-1000/tokenizer_config.json [INFO|tokenization_utils_base.py:2201] 2023-06-09 19:06:09,162 >> Special tokens file saved in ./tmp/tst-translation/checkpoint-1000/special_tokens_map.json [INFO|tokenization_t5_fast.py:186] 2023-06-09 19:06:09,234 >> Copy vocab file to ./tmp/tst-translation/checkpoint-1000/spiece.model [INFO|tokenization_utils_base.py:2194] 2023-06-09 19:06:54,528 >> tokenizer config file saved in ./tmp/tst-translation/tokenizer_config.json [INFO|tokenization_utils_base.py:2201] 2023-06-09 19:06:54,528 >> Special tokens file saved in ./tmp/tst-translation/special_tokens_map.json [INFO|tokenization_t5_fast.py:186] 2023-06-09 19:06:54,599 >> Copy vocab file to ./tmp/tst-translation/spiece.model 0% 1228/30627537 [1:17:07<29988:35:31, 3.53s/it]^C ``` No matter which configuration I use, this is the behavior I get. Is it related to the fact that there is a specific task param for French to English (see below)? ``` "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } ```
06-09-2023 19:58:00
06-09-2023 19:58:00
Hey! Without a traceback, there's nothing we can really do to help you with this. It seems that the last line ends with ` 0% 1228/30627537 [1:17:07<29988:35:31, 3.53s/it]^C`, and the `^C` is similar to when you actually press Ctrl+C. I suggest posting this on the [forum](https://discuss.huggingface.co/) to see if someone has already had this issue<|||||>Actually, there is no traceback; it just stops at the last line as described above (as if someone pressed Ctrl+C) and the cell ends as if it had been executed without error, nothing else<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,149
closed
Official Example - BeamSearchScorer always returns just one beam, even I specified 3
### System Info google colab latest ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import ( AutoTokenizer, AutoModelForSeq2SeqLM, LogitsProcessorList, MinLengthLogitsProcessor, BeamSearchScorer, ) import torch tokenizer = AutoTokenizer.from_pretrained("t5-base") model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") encoder_input_str = "translate English to German: How old are you?" encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids # lets run beam search using 3 beams num_beams = 3 # define decoder start token ids input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) input_ids = input_ids * model.config.decoder_start_token_id # add encoder_outputs to model keyword arguments model_kwargs = { "encoder_outputs": model.get_encoder()( encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ) } # instantiate beam scorer beam_scorer = BeamSearchScorer( batch_size=1, num_beams=num_beams, device=model.device, ) # instantiate logits processors logits_processor = LogitsProcessorList( [ MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), ] ) outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs) out = tokenizer.batch_decode(outputs, skip_special_tokens=True) print("out",out) ### Expected behavior out is just one line, even when i put: num_return_sequences=2 at various places, for example: outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs, num_return_sequences=2)
06-09-2023 18:32:24
06-09-2023 18:32:24
@ArthurZucker @younesbelkada @Narsil<|||||>Hi @Oxi84, thanks for raising this issue! Could you provide information about the environment packages? Just run `! transformers-cli env` in a colab cell and copy paste the output. cc @gante <|||||>Sure: - `transformers` version: 4.30.1 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> <|||||>I have found the solution actually: just add: num_beam_hyps_to_keep = 5 or number you want, then scorer just chooses 5 best hypothesis. beam_scorer = BeamSearchScorer( batch_size=1, do_early_stopping=True, num_beams=num_beams, num_beam_hyps_to_keep = 5, device=model.device )<|||||>I only need to make this use multiple beams. With just one beam it is too slow.<|||||>This can be done by using this: input_ids = torch.ones((num_beams*the_batch_size, 1), device=model.device, dtype=torch.long) I guess something related to encoder/decoder<|||||>Hey @Oxi84 👋 The `.generate()` method does a lot of preprocessing, input preparation, and instance initialization for you. I'd recommend using it, as we don't have the bandwidth to provide usage support for lower-level APIs :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
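Following the last suggestion in the thread, a minimal sketch of the same example using the high-level `generate()` API, which builds the beam scorer internally and can return multiple hypotheses directly:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")
# num_return_sequences must be <= num_beams
outputs = model.generate(**inputs, num_beams=3, num_return_sequences=3, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```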
transformers
24,148
closed
RWKV cuda kernel loading
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? cc @younesbelkada @ArthurZucker @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction my `demo.py` ``` from transformers import AutoTokenizer, RwkvModel import torch device = torch.device("cuda:5") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile") model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile").to(device) inputs = tokenizer("Hello, my dog is cute", return_tensors="pt").to(device) outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` I commented out try except in `modeling_rwkv.py` to force it to load cuda kernel for RWKV attention. And i got this: ``` python demo.py Traceback (most recent call last): File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build subprocess.run( File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "demo.py", line 6, in <module> model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile").to(device) File "/data/chengxin/rwkv/transformers/src/transformers/modeling_utils.py", line 2675, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 604, in __init__ self.blocks = nn.ModuleList([RwkvBlock(config, layer_id=idx) for idx in range(config.num_hidden_layers)]) File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 604, in <listcomp> self.blocks = nn.ModuleList([RwkvBlock(config, layer_id=idx) for idx in range(config.num_hidden_layers)]) File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 378, in __init__ self.attention = RwkvSelfAttention(config, layer_id) File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 263, in __init__ load_wkv_cuda_kernel(config.context_length) File "/data/chengxin/rwkv/transformers/src/transformers/models/rwkv/modeling_rwkv.py", line 87, in load_wkv_cuda_kernel rwkv_cuda_kernel = load_kernel( File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1284, in load return _jit_compile( File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile _write_ninja_file_and_build_library( File "/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library _run_ninja_build( File 
"/data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error building extension 'wkv_1024': [1/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda.cu -o wkv_cuda.cuda.o FAILED: wkv_cuda.cuda.o /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda.cu -o wkv_cuda.cuda.o nvcc fatal : Unknown option '-extra-device-vectorization' [2/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage 
--maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o FAILED: wkv_cuda_bf16.cuda.o /usr/bin/nvcc -DTORCH_EXTENSION_NAME=wkv_1024 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/TH -isystem /data/chengxin/anaconda3/envs/rwkv/lib/python3.8/site-packages/torch/include/THC -isystem /data/chengxin/anaconda3/envs/rwkv/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -res-usage --maxrregcount 60 --use_fast_math -O3 -Xptxas -O3 --extra-device-vectorization -DTmax=1024 -std=c++17 -c /data/chengxin/rwkv/transformers/src/transformers/kernels/rwkv/wkv_cuda_bf16.cu -o wkv_cuda_bf16.cuda.o nvcc fatal : Unknown option '-extra-device-vectorization' ninja: build stopped: subcommand failed. ``` ### Expected behavior Load cuda kernel successfully.
06-09-2023 18:26:48
06-09-2023 18:26:48
I found the problem was a mismatch between `nvcc` and `cuda`. Installing `cudatoolkit` from `conda` doesn't necessarily install all the relevant CUDA components (e.g. `nvcc`). So I solved it on my side with `conda install cuda -c nvidia`<|||||>Awesome, thanks for sharing the solution!
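A quick way to spot this kind of mismatch before the JIT build fails (a sketch that assumes `nvcc` is on `PATH`): compare the CUDA release reported by `nvcc` with the one PyTorch was built against.

```python
import subprocess

import torch

print("torch built against CUDA:", torch.version.cuda)
nvcc = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
print(nvcc.stdout)  # compare the "release X.Y" line with torch.version.cuda
# If they differ (e.g. an old /usr/bin/nvcc), the cpp_extension build can fail
# with unknown-option errors like the one reported above.
```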
transformers
24,147
closed
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
### System Info I'm running the example code from https://huggingface.co/docs/transformers/training on a Colab and it's failing with [/usr/local/lib/python3.10/dist-packages/transformers/training_args.py](https://localhost:8080/#) in _setup_devices(self) 1670 if not is_sagemaker_mp_enabled(): 1671 if not is_accelerate_available(min_version="0.20.1"): -> 1672 raise ImportError( 1673 "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`" 1674 ) ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U` However, it still gives the error even after I pip install the recommended libraries. Also, pip freeze shows that accelerate is accelerate==0.20.3 which should satisfy the requirement of `accelerate>=0.20.1` - so I'm not sure why the Trainer is throwing the error. Thanks for taking a look @sgugger ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Expected behavior To run training
06-09-2023 17:39:47
06-09-2023 17:39:47
You need to restart your Colab environment after updating the library.<|||||>Thanks @sgugger for the quick fix! I was having the same issue as mentioned by @cssndrx. <|||||>Thank you @sgugger, that was the issue! <|||||>Thanks, I was having the same issue! <|||||>Restarting the virtualenv solved the same issue for me, thanks<|||||>Wow! Thanks a lot @sgugger<|||||>Restarting the notebook and the kernel resolved it for me as well<|||||>> You need to restart your Colab environment after updating the library. Thank you.
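A quick check after restarting the runtime (a sketch): confirm the interpreter is now importing the upgraded package rather than the copy loaded before the update.

```python
import accelerate
import transformers

print(transformers.__version__)
print(accelerate.__version__)  # should be >= 0.20.1 for the Trainer check above
```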
transformers
24,146
closed
Stop storing references to bound methods via tf.function
This is (hopefully!) the end of a long saga this week. @ydshieh noticed that our tests runners were going OOM, after a couple of PRs I made to dummy inputs. I thought the problem was just that the new dummy inputs were too large, but eventually we figured out that the problem was actually quite complicated! tl;dr **A circular reference exists, which is caused by us calling tf.function() on a model method and then storing the result as a model attribute. Because this reference exists, our TF models are not cleaned up immediately when they are deleted, but only after the next Python garbage collection.** I believe the PRs triggered the issue by eliminating unneccessary calls and making TF model building much faster. This left less time for garbage collection to happen, and as a result our test suites started a second test before the first test had been cleaned up, which caused the test runner to go OOM. We tried resolving this problem by manually calling `gc.collect()` before each test, but this made some of the test suites much slower! Obviously the real solution had to be to resolve the circular reference that was slowing down model cleanup. ~The solution is to replace `model.eager_serving` with a method `model._get_eager_serving_fn()`. This returns a function that TensorFlow can compile, but which doesn't create a hard reference to a model method in the returned `tf.function`. I confirmed through manual inspection with `gc.get_referrers` that the reference is removed and models are cleaned up immediately once they go out of scope now.~ See the update below for a full description of the solution I finally went with!
06-09-2023 16:55:43
06-09-2023 16:55:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>no OOM with this PR (for the models involved), but we have some errors regarding `TypeError: Binding inputs to tf.function `eager_serving` failed due to `missing a required argument: 'inputs'`` popping up for several model/tokenizer tests. One example is ```bash self = <tests.models.gpt2.test_modeling_tf_gpt2.TFGPT2ModelTest testMethod=test_saved_model_creation> @slow def test_saved_model_creation(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = False config.output_attentions = False if hasattr(config, "use_cache"): config.use_cache = False model_class = self.all_model_classes[0] class_inputs_dict = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) model(class_inputs_dict) with tempfile.TemporaryDirectory() as tmpdirname: > model.save_pretrained(tmpdirname, saved_model=True) tests/test_modeling_tf_common.py:268: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/modeling_tf_utils.py:2427: in save_pretrained self.save(saved_model_dir, include_optimizer=False, signatures=signatures) /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler raise e.with_traceback(filtered_tb) from None _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <tensorflow.python.eager.polymorphic_function.function_spec.FunctionSpec object at 0x7f6e1c4db7f0> args = ({'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'input_ids': TensorSpec(sha...tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')},), kwargs = {} def bind_function_inputs(self, args, kwargs): """Bind `args` and `kwargs` into a canonicalized signature args, kwargs.""" sanitized_kwargs = { function_type_lib.sanitize_arg_name(k): v for k, v in kwargs.items() } if len(kwargs) != len(sanitized_kwargs): raise ValueError(f"Name collision after sanitization. Please rename " f"tf.function input parameters. Original: " f"{sorted(kwargs.keys())}, Sanitized: " f"{sorted(sanitized_kwargs.keys())}") try: bound_arguments = self.function_type.bind_with_defaults( args, sanitized_kwargs, self.default_values) except Exception as e: > raise TypeError( f"Binding inputs to tf.function `{self._name}` failed due to `{e}`." f"Received args: {args} and kwargs: {sanitized_kwargs} for signature:" f" {self.function_type}." 
) from e E TypeError: Binding inputs to tf.function `eager_serving` failed due to `missing a required argument: 'inputs'`.Received args: ({'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')},) and kwargs: {} for signature: (self, inputs: Dict(mapping={'input_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask'), 'token_type_ids': TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids')})). ```<|||||>Looks like the way I'm handling the methods fails when we try to save the model with those signatures. I'll figure it out on Monday!<|||||>This should be ready for review now! The changes are pretty small, but it took me a while to figure out the details. It turns out anything that looks like `self.serving = tf.function(self.eager_serving)` will create a circular reference between `self.serving` and `self` and inhibit cleanup. This does not apply to methods defined at the class (rather than instance) level. Something like this is fine and does not block cleanup: ``` @tf.function(input_signature=...) def serving(self, inputs): ... ``` The problem with the construction above, though, is that the `tf.function` decorator has to be called with all of its arguments at the class level, before the model has been initialized with a config. This means it can't read any shapes or details from the config, which means its signature has to be very **very** general. This is why we transitioned to `self.serving = ...` in the first place. The solution I found is the following: - Get rid of all helper methods like `self.eager_serving`. These were only used internally anyway, to allow us to compile multiple serving signatures. - Decorate the base `serving` method with `tf.function` and no signature at all. - Rely on our control of `self.save_spec` to ensure that base TF methods like `model.save()` will save with the right signature even when we aren't manually defining it (I checked this and it works!) - When we want to manually specify signatures, we just call `self.serving.get_concrete_signature` with different signatures. No need to keep `eager_serving` around anymore! This should totally preserve functionality and backward compatibility, while resolving the memory cleanup issue and keeping the specific save signatures. The only potentially noticeable change is that `self.serving.input_signature` is no longer defined. We read that value in a couple of tests as a shortcut to find the model input names, so I just replaced it with `self.input_signature` instead. I don't think anyone outside of Hugging Face was using it, and it certainly wasn't part of our public API, so I don't expect any issues!<|||||>Thanks to @ydshieh for his patience with the tests and to @gante for digging out the old PRs that let me finally understand why a lot of this stuff was ever here in the first place!<|||||>OK, I will run a few tests and let you know @Rocketknight1 Thank you for trying trying!<|||||>@ydshieh actually, you're right - I thought it wasn't doing anything anymore, but it's still useful in some cases when we define a broad signature that gets inherited. Let me rework that so we keep it!<|||||>No warning sign after running tests for 4 involved models. 
You are very good at TF!<|||||>@ydshieh I finished rebasing and I removed your `gc.collect()` change to the doctests. Are you okay for me to merge now, or do you want to run any further tests? Either way, I think we've finally resolved this one!<|||||>It's ok, go ahead. If doctest starts to fail, I will `call` you.<|||||>Also pinging @amyeroberts for core maintainer review<|||||>> LGTM 🔥 > > To be safe, can you trigger the slow CI on this branch? Most of the TF serialization tests are slow tests :D Hello @gante ! Do you mean enable slow tests but for all the models ..? Or anything else? Can't run slow tests on CircleCI however, so need to run on a specific VM.<|||||>Good point, actually - this PR isn't specific to any one model, so we'd need to run slow tests for all models. Since it's a long time to the next release, let's just merge this PR (after review) and see if anything fails overnight?<|||||>@Rocketknight1 @ydshieh Could we run slow tests on just a handful of models ~5 popular ones from different modalities to make sure any obvious issues have been caught? <|||||>@amyeroberts I've been running them locally on the core models - BERT and GPT-2 look good! Are you okay if I try a few more and then merge if there are no issues?<|||||>Tested BERT, GPT-2, BART, ViT, CLIP and Wav2Vec2 without issues. Merging!
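For readers following the discussion above, here is a minimal toy sketch of the two patterns involved. It is an illustration under simplifying assumptions (toy model, arbitrary layer sizes), not the actual transformers code:

```python
import tensorflow as tf


class ToyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(2)

    def call(self, inputs):
        return self.dense(inputs)

    # Class-level decoration: the tf.function lives on the class, so no
    # instance attribute points back at the instance and no reference cycle
    # is created when a model is instantiated.
    @tf.function
    def serving(self, inputs):
        return self.call(inputs)


class LeakyToyModel(ToyModel):
    def __init__(self):
        super().__init__()
        # Instance-level wrapping: self.serving is a tf.function that holds a
        # reference to a bound method of self, so self and self.serving form a
        # cycle and the model is only freed at the next garbage collection.
        self.serving = tf.function(self.eager_serving)

    def eager_serving(self, inputs):
        return self.call(inputs)


model = ToyModel()
model.serving(tf.ones((1, 3)))
# A shaped signature can still be compiled on demand when needed:
concrete = model.serving.get_concrete_function(tf.TensorSpec([None, 3], tf.float32))
```

The second call shows why a broad class-level signature is not a limitation in practice: concrete signatures can be derived per model after the config is known.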
transformers
24,145
closed
Adds LILT to models exportable with ONNX
# What does this PR do? Adds LILT to models exportable with ONNX
06-09-2023 15:37:20
06-09-2023 15:37:20
Hi @mariababich , the ONNX export is now supported in Optimum, I merged your PR there: https://github.com/huggingface/optimum/pull/1098
transformers
24,144
closed
Fix typo in streamers.py
# What does this PR do? Fixes a typo in `transformers/generation/streamers.py`. Caught it while browsing some of the streaming code 😃 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-09-2023 14:53:58
06-09-2023 14:53:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24144). All of your documentation changes will be reflected on that endpoint.
transformers
24,143
closed
audio classification official script on local own dataset
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [conda] numpy 1.24.3 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi ### Who can help? @sanchit-gandhi @sgugger @albertvillanova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. I want to run this model but not on superb dataset: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md 2. I want to load a dataset from local: > - here is the local data structure for splits: > ![image](https://github.com/huggingface/transformers/assets/103381497/8aa61e21-83c1-45fd-8ca7-fdb84c66b984) > > - here is the csv file structure containing the path to the audio file and the audio label: > ![image](https://github.com/huggingface/transformers/assets/103381497/34e0b859-20eb-4b91-8eff-aab791e7d444) > with command: I don't specify the superb dataset: > python `run_audio_classification.py` \ > --model_name_or_path facebook/wav2vec2-base \ > --output_dir wav2vec2-base-s \ > --overwrite_output_dir \ > --remove_unused_columns False \ > --do_train \ > --do_eval \ > --fp16 \ > --learning_rate 3e-5 \ > --max_length_seconds 1 \ > --attention_mask False \ > --warmup_ratio 0.1 \ > --num_train_epochs 5 \ > --per_device_train_batch_size 32 \ > --gradient_accumulation_steps 4 \ > --per_device_eval_batch_size 32 \ > --dataloader_num_workers 4 \ > --logging_strategy steps \ > --logging_steps 10 \ > --evaluation_strategy epoch \ > --save_strategy epoch \ > --load_best_model_at_end True \ > --metric_for_best_model accuracy \ > --save_total_limit 3 \ > --seed 0 \ > --push_to_hub \ > --use_auth_token True 3. Changes I made in the `run_audio_classification.py` script to load audio from csv file: 3.1 I specify the location of the csv files : > so I replace lines [249 - 261](https://github.com/huggingface/transformers/blob/b8fe259f163c48a18c9b27428b72b2ac104de346/examples/pytorch/audio-classification/run_audio_classification.py#LL247C1-L260C6) with: > > data_files = {'train': 'train.csv', 'test': 'test.csv', 'valid': 'valid.csv'} > > raw_datasets["train"] = load_dataset('s/data/s/s/train', data_files=data_files["train"]) > raw_datasets["test"] = load_dataset('s/data/s/s/test', data_files=data_files["test"]) > raw_datasets["valid"] = load_dataset('s/data/s/s/valid', data_files=data_files["valid"]) > > ### It seems that loading the csv files is successful. I get message: "Dataset csv downloaded and prepared ". # But these are the errors: 4. I comment out lines [262 -274 ](https://github.com/huggingface/transformers/blob/b8fe259f163c48a18c9b27428b72b2ac104de346/examples/pytorch/audio-classification/run_audio_classification.py#L262) > > Because no matter how I change the audio path in csv files to audio, file_name, train/test/valid it still gives me error: > `ValueError: --audio_column_name audio not found in dataset 'None'. Make sure to set `--audio_column_name` to the correct audio column - one of train.` > > even though I successfully load the csv file with `'audio' `and 'label' headers. 
(also tried: 'filne_name' instead of 'audio'). The csv files are **"Dataset csv downloaded and prepared ".** However, the error says that the _--audio_column_name `audio` is not found_ 5. Then I receive error: > on raw_datasets = raw_datasets.cast_column( > python3.8/site-packages/datasets/dataset_dict.py line 309, in cast_column self._check_values_type() > line 45, in _check_values_type > raise TypeError(f"Values in `DatasetDict` should be of type `Dataset` but got type '{type(dataset)}'") > > > TypeError: Values in `DatasetDict` should be of type `Dataset` but got type '<class 'datasets.dataset_dict.DatasetDict'>' > (I am loading it locally because I have not received a reply on how to load private hub datasets when I raised the issue: https://github.com/huggingface/datasets/issues/5930 ) @albertvillanova ### Expected behavior I want to be able to run[ the official example script run_audio_classification.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md ) instead of predefined dataset superb, but on my own local dataset to train the model on my dataset.
06-09-2023 14:02:07
06-09-2023 14:02:07
Hi @flckv, thanks for raising an issue! The error messages are telling you what the issues are. 1. The feature `audio` isn't in the csv. The csv has two column names: `train` and `label`. You should either update the csv to have `audio` as a column name, or passing in `--audio_column_name train` when you run the script 2. The dataset created is a `DatasetDict` with `DatasetDict` objects as its keys rather than the expected `Dataset` instance. This should be resolved by doing: ```python data_files = {'train': 'train/train.csv', 'test': 'test/test.csv', 'valid': 'valid/valid.csv'} raw_datasets = load_dataset("s/data/s/s", data_files=data_files) ``` For further questions about how to customise a script, please ask in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. <|||||>Thank you, @amyeroberts <|||||>See related: https://discuss.huggingface.co/t/custom-local-data-loading-generating-split-with-load-dataset-not-working-values-in-datasetdict-should-be-of-type-dataset-but-got-type-class-datasets-dataset-dict-datasetdict/42740/2?u=sanchit-gandhi
transformers
24,142
closed
Add MQTTS
### Model description MQTTS is a Text to Speech model which was introduced in the paper [A Vector Quantized Approach for Text to Speech Synthesis on Real-World Spontaneous Speech](https://arxiv.org/pdf/2302.04215.pdf). Their work explores the use of more abundant real-world data for building speech synthesizers. Its architecture is designed for multiple code generation and monotonic alignment, along with the use of a clean silence prompt to improve synthesis quality. They show that MQTTS outperforms existing TTS systems in several objective and subjective measures. I would like to add this model to HF. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Implementation - https://github.com/b04901014/MQTTS Checkpoints - 1. Config - https://cmu.box.com/s/hvv06w3yr8mob4csjjaigu5szq2qcjab 2. Quantize - https://cmu.box.com/s/966rcxkyjps80p7thu0r6lo2udk1ezdm 3. Transformer model - https://cmu.box.com/s/xuen9o8wxsmyaz32a65fu25cz92a2jei
06-09-2023 13:47:28
06-09-2023 13:47:28
cc: @sanchit-gandhi and @ArthurZucker <|||||>I think this is a cool model - whether it outperforms Bark (#24086) is up for debate. My only concerns are: 1. The NC license which is not super permissive 2. The low-visibility of the original repo: with only 130 GH stars, it seems like the community is not super excited by the model (and thus are unlikely to use it in the library) While the voice prompting feature would be cool and inference much faster than a hierarchical transformer model like Bark, I think the lack of visibility / excitement around the model means it would be a big effort to add with maybe little usage as a result cc @Vaibhavs10 who has had more experience with MQTTS, @ylacombe who's adding Bark and @hollance who's adding VITS MMS What do you all think?<|||||>IMO for MQTTS - doesn't make as much sense, purely from a licensing standpoint. Plus it uses a non-standard quantizer, which makes it difficult to maintain (primarily because it'll be used only for MQTTS). I think a more ambitious idea would be to add tortoise-tts - https://github.com/neonbjb/tortoise-tts (Was released a while back but still is the king) - the original repo is not as optimised so with the transformers bells and whistles we can make sure that it works faster and better? Another idea would be to add StyleTTS - https://github.com/yl4579/StyleTTS, the results are quite promising and given there is training code as well, it opens up the opportunity to train a bigger model.<|||||>Tortoise TTS would probably go in the [`diffusers`](https://github.com/huggingface/diffusers) repo (since we could build it as a diffusion pipeline with a transformer encoder) - since the purpose of `diffusers` is more pure performance (which is not the objective of `transformers`) it would be a good fit here Would you like to open a feature request for Tortoise TTS on the diffusers repo and tag myself and @Vaibhavs10? We can then discuss how feasible a new pipeline addition would be!<|||||>thanks a lot for all the insights! Also I opened an issue for Tortoise TTS on the diffusers repo. It is [here](https://github.com/huggingface/diffusers/issues/3891) <|||||>Perfect, thanks @susnato! Going to close this then since we're in agreement that MQTTS is not a good addition for transformers. Tortoise TTS issue in diffusers: https://github.com/huggingface/diffusers/issues/3891
transformers
24,141
closed
[documentation] grammatical fixes in image_classification.mdx
The changes made are grammatical and do not affect the ideas communicated in the file.
06-09-2023 13:35:22
06-09-2023 13:35:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24141). All of your documentation changes will be reflected on that endpoint.
transformers
24,140
closed
[`SAM`] Fix sam slow test
# What does this PR do? Fixes SAM slow test, link to failing job: https://github.com/huggingface/transformers/actions/runs/5206799863 * Why is this fix relevant? Before the PR, it seems I was initializing the pipeline in the wrong way. Passing a string to the `device` argument of pipeline leads to an error. To reproduce: ```python from transformers import pipeline pipe = pipeline("text-generation", device="cuda") pipe("Hello") >>> ValueError: Expected a torch.device with a specified index or an integer, but got:cuda ``` That error is raised here: https://github.com/huggingface/transformers/blob/847b47c0eed4e6ab904f584fb415e3d3a397867f/src/transformers/pipelines/base.py#L905 Whereas the typehint of pipeline says: https://github.com/huggingface/transformers/blob/847b47c0eed4e6ab904f584fb415e3d3a397867f/src/transformers/pipelines/base.py#L763 The fix seems to be to pass an int for the device if CUDA is available and -1 if not. cc @ydshieh
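As a usage note, the corrected initialization presumably looks along these lines. This is a sketch; the checkpoint name is an assumption rather than the exact one used in the slow test:

```python
import torch
from transformers import pipeline

# Pass an integer device index (or -1 for CPU) instead of the string "cuda".
device = 0 if torch.cuda.is_available() else -1
generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=device)
```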
06-09-2023 13:24:43
06-09-2023 13:24:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada Could you update the type hint for the pipeline too? nb: from pytorch it seems [we shouldn't use torch.cude.set_device()](https://pytorch.org/docs/stable/generated/torch.cuda.set_device.html) cc @Narsil <|||||>Sure yes just updated it, we probably need also to address a proper fix for that in a separate PR, not sure though <|||||>Code is from 3 years ago (and I'm pretty sure I just moved it from somewhere else). Happy to refactor to something more up-to-date. Accepting `strings` as device should be supported imo.<|||||>@Narsil I also agree we should support str for pipeline, let me know if you want me to work on this, I am happy to have a look and ping you once something is ready
transformers
24,139
closed
Avoid OOM in doctest CI
# What does this PR do? This is done by injecting `gc.collect()` into the source code while pytest/doctest collects the tests to run.
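A hedged sketch of what that injection could look like in a `conftest.py`. This is an illustration of the idea only, not necessarily the mechanism used in this PR:

```python
# conftest.py
import doctest


def pytest_collection_modifyitems(config, items):
    for item in items:
        # pytest's doctest items carry the parsed doctest in `dtest`
        dtest = getattr(item, "dtest", None)
        if dtest is not None:
            # prepend an example that frees memory before the real examples run
            cleanup = doctest.Example("import gc; gc.collect()\n", "")
            dtest.examples.insert(0, cleanup)
```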
06-09-2023 12:59:46
06-09-2023 12:59:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Update: after a correction, the run is only 30 minutes longer or a bit more. The job runs 2x slower ...<|||||>Running the whole doctest: no more OOM. Only one test failure (lucky!) Tagging @sgugger so we can have the doctest run until @Rocketknight1 has a complete solution in #24146. (We haven't received any doctest report for 2 weeks)
transformers
24,138
closed
Correctly build models and import call_context for older TF versions
Our import for `call_context` was wrong for older TF versions - this unfortunately makes it quite hard to load models on older TF versions! This PR fixes it, and sorry about the issue! Fixes #24133
06-09-2023 11:17:08
06-09-2023 11:17:08
@amyeroberts The code is in a conditional block that checks TF versions. I'm currently testing all TF versions since 2.4 to make sure this works for all of them - give me a few minutes to finish that before we merge the PR!<|||||>Version testing looks good!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging this now and will begin discussions about a patch release
transformers
24,137
closed
[`bnb`] Fix bnb config json serialization
# What does this PR do? Fixes #24131 Fixes https://github.com/huggingface/peft/issues/558 Replaces the PR: https://github.com/huggingface/transformers/pull/24094 To reproduce: ```python import torch from transformers import BitsAndBytesConfig, AutoModelForVision2Seq bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip2-opt-2.7b", quantization_config=bnb_config, device_map='auto') print(model.config.to_json_string()) ``` (or use any causal LM/ text models) Adds also a nice test cc @amyeroberts
06-09-2023 10:26:48
06-09-2023 10:26:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,136
closed
Update urls in warnings for rich rendering
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-09-2023 10:06:44
06-09-2023 10:06:44
Hi @IvanReznikov - thanks for opening a PR. Could you expand a bit on the issue this is addressing? All I'm seeing in the diff is splitting the `")"` bracket onto a new line. <|||||>@amyeroberts, the bracket was part of the URL, which leads to a 404, obviously.<|||||>@IvanReznikov The bracket is just the closing bracket that opened with `"(see..."` in the line above, it's not part of the url. Splitting like this won't change the evaluated string because of Python's implicit line continuation behaviour. <|||||>![image](https://github.com/huggingface/transformers/assets/25007854/fbe4bc4f-dc64-4db6-8189-9ae8fb9ae004) <|||||>@amyeroberts yep, fixed it above<|||||>@amyeroberts , sure<|||||>_The documentation is not available anymore as the PR was closed or merged._
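To make the implicit-continuation point above concrete, a small self-contained illustration (the message and URL are hypothetical, not the actual warning text in transformers):

```python
# Adjacent string literals inside parentheses are concatenated implicitly, so
# moving the closing bracket onto its own line leaves the evaluated string
# (and therefore the rendered URL plus trailing bracket) exactly the same.
one_line = "(see https://example.com/docs for details)"
split = (
    "(see https://example.com/docs for details"
    ")"
)
assert one_line == split
```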
transformers
24,135
closed
Invalid hyperlink error 404 in Transformer Doc for RayTune
Greetings, Small issue here: following the HuggingFace Transformer Docs on Hyperparameter Search using Trainer API, for raytune the hyperlink for 'object_parameter' is now invalid and should be updated. The other backends (sigopt, optuna, wandb) have correctly working hyperlinks for the 'object_parameter'. Link to doc page: https://huggingface.co/docs/transformers/hpo_train#how-to-enable-hyperparameter-search-in-example Link to git markdown file: https://github.com/huggingface/transformers/blob/main/docs/source/en/hpo_train.mdx
06-09-2023 10:00:36
06-09-2023 10:00:36
feel free to open a PR to fix if you can:)
transformers
24,134
closed
fix bugs with trainer
# What does this PR do? Context: 1. Issue 1 - Currently, when the lr_scheduler is specified in the deepspeed config file, we leverage a DummyScheduler to pass to the `accelerator.prepare` to get the correct scheduler post prepare call. A prior PR removed preparation of the lr_scheduler leading to a lot of DeepSpeed tests failing. 2. Issue 2 - when using apex we shouldn't be preparing optimizer else we get `AttributeError: 'AcceleratedOptimizer' object has no attribute '_amp_stash'` 3. Issue 3 - FSDP ckpt logic should create ckpt dir if not present. Fixes https://github.com/huggingface/transformers/issues/24130 This PR fixes the above issues.
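For the third point, the fix amounts to a guard of roughly this shape before writing the FSDP state dict. This is a sketch with hypothetical names, not the exact Trainer code:

```python
import os
import torch


def save_fsdp_full_state_dict(state_dict, output_dir, filename="pytorch_model.bin"):
    # Create the checkpoint directory (e.g. .../checkpoint-4) if it does not
    # exist yet, otherwise torch.save fails with
    # "Parent directory ... does not exist".
    os.makedirs(output_dir, exist_ok=True)
    torch.save(state_dict, os.path.join(output_dir, filename))
```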
06-09-2023 09:17:04
06-09-2023 09:17:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,133
closed
Transformers can not load dependency of tensorflow - `cannot import name 'call_context' from 'tensorflow.***.keras.engine'`
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.19.0-43-generic-x86_64-with-glibc2.34 - Python version: 3.8.16 - Tensorflow version (GPU?): 2.8.2 (False) ### Who can help? @gante and @Rocketknight1 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The problem arises due to this [PR](https://github.com/huggingface/transformers/pull/23760) and the [case](https://github.com/huggingface/transformers/blame/main/src/transformers/modeling_tf_utils.py#L84) for tensorflow import of `call_context` according to minor version. In tf < 2.11, the import should be `from tensorflow.python.keras.engine.base_layer_utils import call_context` Steps to reproduce: 1. Install the System Info dependencies mentionned above 2. `from transformers import DistilBertTokenizerFast, TFDistilBertModel` 3. Then you should see ```bash File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 37, in <module> from ...modeling_tf_utils import ( File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/transformers/modeling_tf_utils.py", line 84, in <module> from tensorflow.***.keras.engine import call_context ImportError: cannot import name 'call_context' from 'tensorflow.***.keras.engine' (/opt/hostedtoolcache/Python/3.8.16/x64/lib/***3.8/site-packages/tensorflow/***/keras/engine/__init__.py) ``` ### Expected behavior Update the case where the import is [located](https://github.com/huggingface/transformers/blame/main/src/transformers/modeling_tf_utils.py#L84) to the correct naming inside tensorflow `from tensorflow.python.keras.engine.base_layer_utils import call_context`
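A version-gated import along these lines would address it. In this sketch, only the `tf < 2.11` branch is the path confirmed above; the newer-Keras branch is an assumption:

```python
import tensorflow as tf
from packaging.version import parse

if parse(tf.__version__) < parse("2.11.0"):
    # Path reported to work for TF 2.8 in this issue.
    from tensorflow.python.keras.engine.base_layer_utils import call_context
else:
    # Assumed location in standalone Keras for newer releases.
    from keras.engine.base_layer_utils import call_context
```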
06-09-2023 08:33:17
06-09-2023 08:33:17
Agh, sorry about that. PR to fix it is open at #24138 <|||||>thanks feel free to close it when it is merged, if the PR does not close it automatically<|||||>Fixed on `main`, but we'll need to organize a patch release before you can unpin. Sorry for the trouble!<|||||>@MaximeChurin This should be resolved by the 4.30.1 hotfix release, along with a couple of other release bugs. You should be able to unpin now!
transformers
24,132
closed
[lamaTokenizerFast] Update documentation
# What does this PR do? Updates the documentation for the fast Llama tokenizer (`LlamaTokenizerFast`). In the long term, I think it would make more sense for the Rust tokenizer to update its internals when the special tokens are updated as well; this would allow us to remove the Python layer that takes care of it. Related to #23889
06-09-2023 07:39:34
06-09-2023 07:39:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,131
closed
Object of type 'BitsAndBytesConfig' is not JSON serializable
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This is the script Im using: ``` import pandas as pd import os from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, BitsAndBytesConfig import torch from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType, prepare_model_for_kbit_training from transformers import DataCollatorForSeq2Seq import evaluate import nltk import numpy as np from nltk.tokenize import sent_tokenize from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments from datasets import Dataset, DatasetDict import argparse import pickle import json parser = argparse.ArgumentParser(description='Options') parser.add_argument('--dataset_dir', default='data', type=str, help="folder in which the dataset is stored") parser.add_argument('--output_dir', default="lora-instructcodet5p", type=str, help="output directory for the model") parser.add_argument('--results_dir', default="results", type=str, help="where the results should be stored") args = parser.parse_args() nltk.download("punkt") tokenized_dataset = DatasetDict.load_from_disk(args.dataset_dir) # Metric metric = evaluate.load("rouge") pad_tok = 50256 token_id="Salesforce/instructcodet5p-16b" tokenizer = AutoTokenizer.from_pretrained(token_id) # helper function to postprocess text def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [label.strip() for label in labels] # rougeLSum expects newline after each sentence preds = ["\n".join(sent_tokenize(pred)) for pred in preds] labels = ["\n".join(sent_tokenize(label)) for label in labels] return preds, labels def compute_metrics(eval_preds): preds, labels = eval_preds if isinstance(preds, tuple): preds = preds[0] for idx in range(len(preds)): for idx2 in range(len(preds[idx])): if preds[idx][idx2]==-100: preds[idx][idx2] = 50256 decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. 
labels = np.where(labels != pad_tok, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) result = {k: round(v * 100, 4) for k, v in result.items()} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] result["gen_len"] = np.mean(prediction_lens) return result def get_dict(predicts): d = {} for num in range(len(tokenized_dataset['test'])): pred = tokenizer.decode([n for n in predicts[0][num] if n!=50256 and n!=-100])[1:] d[num+1] = {'Question':tokenizer.decode([n for n in tokenized_dataset['test'][num]['input_ids'] if n!=50256]), 'Ground truth solution':tokenizer.decode([n for n in tokenized_dataset['test'][num]['labels'] if n!=50256]), 'Prediction': pred if pred else None} return d def find_all_linear_names(model): cls = torch.nn.Linear lora_module_names = set() for name, module in model.named_modules(): if isinstance(module, cls): names = name.split('.') lora_module_names.add(names[0] if len(names) == 1 else names[-1]) if 'lm_head' in lora_module_names: lora_module_names.remove('lm_head') return list(lora_module_names) def main(): device = 'cuda' # huggingface hub model id model_id="instructcodet5p-16b" if not os.path.exists(model_id): model_id=token_id bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) # load model from the hub model = AutoModelForSeq2SeqLM.from_pretrained(model_id, # torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True, decoder_start_token_id=1, pad_token_id=pad_tok, device_map="auto", quantization_config=bnb_config) modules = find_all_linear_names(model) # Define LoRA Config lora_config = LoraConfig( r=8, lora_alpha=32, target_modules=modules, lora_dropout=0.05, bias="none", task_type=TaskType.SEQ_2_SEQ_LM ) # prepare int-8 model for training model = prepare_model_for_kbit_training(model, False) # add LoRA adaptor model = get_peft_model(model, lora_config) model.print_trainable_parameters() # we want to ignore tokenizer pad token in the loss label_pad_token_id = pad_tok # Data collator data_collator = DataCollatorForSeq2Seq( tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8 ) output_dir=args.output_dir training_args = Seq2SeqTrainingArguments( output_dir=output_dir, per_device_train_batch_size=1, per_device_eval_batch_size=1, predict_with_generate=True, weight_decay=0.05, # warmup_steps=200, fp16=False, # Overflows with fp16 learning_rate=1e-3, num_train_epochs=5, # logging & evaluation strategies logging_dir=f"{output_dir}/logs", logging_strategy="epoch", # logging_steps=500, evaluation_strategy="epoch", save_strategy="epoch", save_total_limit=20, # load_best_model_at_end=True, # metric_for_best_model="overall_f1", # push to hub parameters report_to="tensorboard", push_to_hub=False, generation_max_length=200, optim="paged_adamw_8bit" ) # Create Trainer instance trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=data_collator, train_dataset=tokenized_dataset["train"], eval_dataset=tokenized_dataset["validation"], compute_metrics=compute_metrics, ) # train model trainer.train() # Save our LoRA model & tokenizer results predicts = trainer.predict(tokenized_dataset['test'], max_length=200) with 
open('predicts.pkl', 'wb') as file: pickle.dump(predicts, file) d = get_dict(predicts) for num in d: print("Question:\n%s"%(d[num]['Question'])) print('Ground Truth Solution:\n') print(d[num]['Ground truth solution']) print() print('Prediction:\n') print(d[num]['Prediction']) print() peft_model_id=args.results_dir trainer.model.save_pretrained(peft_model_id) tokenizer.save_pretrained(peft_model_id) # if you want to save the base model to call # trainer.model.base_model.save_pretrained(peft_model_id) with open('generations.json', "w") as json_file: json.dump(d, json_file) #Evaluate on test data # trainer.evaluate() if __name__ == '__main__': main() ``` ### Expected behavior I'm trying to use QLoRA for fine-tuning on a Seq2Seq Task using [InstructCodeT5+](https://huggingface.co/Salesforce/instructcodet5p-16b) guided by this example [notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=jq0nX33BmfaC). I am getting the following error: ``` Traceback (most recent call last): File "training.py", line 242, in <module> main() File "training.py", line 215, in main trainer.train() File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1853, in _inner_training_loop self.control = self.callback_handler.on_train_begin(args, self.state, self.control) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer_callback.py", line 353, in on_train_begin return self.call_event("on_train_begin", args, state, control) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer_callback.py", line 397, in call_event result = getattr(callback, event)( File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/integrations.py", line 640, in on_train_begin model_config_json = model.config.to_json_string() File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 836, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/usr/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/usr/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.8/json/encoder.py", line 438, in _iterencode o = _default(o) File "/usr/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type BitsAndBytesConfig is not JSON serializable ``` Expecting the model to run and train as per the example notebook referenced above. Any help is appreciated!
06-09-2023 06:09:21
06-09-2023 06:09:21
Thanks for reporting, see the comment here: https://github.com/huggingface/transformers/pull/24094#pullrequestreview-1471475968 That suggestion should solve the issue
transformers
24,130
closed
seems to be a bug related to saving model
### System Info I use pytorch==2.0 fsdp fully-shard If I use transformers==4.29.1, accelerate==0.19.0, things works well: ``` [INFO|trainer.py:2904] 2023-06-09 10:35:25,236 >> Saving model checkpoint to ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4 [INFO|configuration_utils.py:458] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/config.json [INFO|configuration_utils.py:364] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/generation_config.json ``` When I switch to transformers==4.30 accelerate==0.20.0 when I save the model, I got the following error ``` │ 285 │ │ 286 class _open_zipfile_writer_file(_opener): │ │ 287 │ def __init__(self, name) -> None: │ │ ❱ 288 │ │ super().__init__(torch._C.PyTorchFileWriter(str(name))) │ │ 289 │ │ │ 290 │ def __exit__(self, *args) -> None: │ │ 291 │ │ self.file_like.write_end_of_file() │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Parent directory ../outputs/tigerbot-7b/full/2023-06-09-10-21-30/ckpt/checkpoint-4 does not exist. ``` It seems like, when I save fsdp model, transformers/accelerator don't help me to create the parent folder 'xxxx/checkpoint-4'. When I downgrade the transformers and the accelerate's version, it works, and when I manually create the 'xxx/checkpoint-4' before saving, it also works. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction pytorch==2.0 transformers==4.30.0 accelerate==0.20.3 Trainer using FSDP fully shard, modified from train_clm.py example ``` CUDA_VISIBLE_DEVICES=0,1,2,3,7 $BASE_ENV/torchrun --nproc_per_node 5 --nnodes=1 --node_rank=0 --master_port $MASTER_PORT main_sft.py \ --model_name_or_path $MODEL \ --model_type $MODEL_TYPE \ --dataset_config_file config/data/tiger.yaml \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --do_train \ --do_eval \ --output_dir $OUTPUT_DIR \ --fp16 \ --cutoff_len 2048 \ --save_steps 500 \ --logging_steps 50 \ --max_steps 6000 \ --eval_steps 500 \ --warmup_steps 5 \ --gradient_accumulation_steps 32 \ --lr_scheduler_type "cosine" \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'BloomBlock' \ --gradient_checkpointing True \ --overwrite_cache \ --learning_rate 1e-5 \ | tee $LOG_DIR/train.log \ 2> $LOG_DIR/train.err ``` ### Expected behavior I use pytorch==2.0 fsdp fully-shard If I use transformers==4.29.1, accelerate==0.19.0, things works well: ``` [INFO|trainer.py:2904] 2023-06-09 10:35:25,236 >> Saving model checkpoint to ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4 [INFO|configuration_utils.py:458] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/config.json [INFO|configuration_utils.py:364] 2023-06-09 10:35:25,237 >> Configuration saved in ../outputs/tigerbot-7b/full/2023-06-09-10-33-49/ckpt/checkpoint-4/generation_config.json ``` When I switch to transformers==4.30 accelerate==0.20.0 when I save the model, I got the following error ``` │ 285 │ │ 286 class _open_zipfile_writer_file(_opener): │ │ 287 │ def __init__(self, name) -> None: │ │ ❱ 288 │ │ super().__init__(torch._C.PyTorchFileWriter(str(name))) │ │ 289 │ │ │ 
290 │ def __exit__(self, *args) -> None: │ │ 291 │ │ self.file_like.write_end_of_file() │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Parent directory ../outputs/tigerbot-7b/full/2023-06-09-10-21-30/ckpt/checkpoint-4 does not exist. ``` It seems like, when I save fsdp model, transformers/accelerator don't help me to create the parent folder 'xxxx/checkpoint-4'. When I downgrade the transformers and the accelerate's version, it works, and when I manually create the 'xxx/checkpoint-4' before saving, it also works.
06-09-2023 02:48:15
06-09-2023 02:48:15
cc @pacman100 <|||||>Hello @jeffchy, Thank you for the thorough issue, can you please confirm if the above PR resolves your issue?
transformers
24,129
closed
PLAM => PaLM
# What does this PR do? Fixes #24114 (issue) doc fix: PLAM => PaLM @amyeroberts, @sgugger, please review
06-09-2023 02:24:04
06-09-2023 02:24:04
I have no idea whether this PR is applicable to the main branch?<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,128
closed
Nah
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-08-2023 23:36:45
06-08-2023 23:36:45
transformers
24,127
closed
Adding padding token GPTj config for the tokenizer.
# What does this PR do? This PR adds pad_tokens to GPTJ tokenizer. This was seen as needed when GPTJ uses GPT2 or other tokenizer for GLUE fine-tuning tasks. The changes are in line with ONNX GPTJ Config setting in the same file. If there is an alternate fix that would solve it, that can be added as well. Fixes # (issue) Adding padding for GLUE fine-tuning Tasks. ## Before submitting ## Who can review? Models: GPTJ Documentation: @sgugger, @stevhliu and @MKhalusova
06-08-2023 23:33:59
06-08-2023 23:33:59
@sgugger , The GPTJOnnxConfig uses it though. Is there a better resolution for this in run_glue.py as it requires padding.<|||||>The `GPTJOnnxConfig` is not used anymore and only there for backward compatibility. Like I said, you can manually add that `pad_token_id` in the config to suit your needs, but it shouldn't be there by default.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
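For anyone landing here with the same GLUE padding problem, the manual approach suggested above can be done at load time. A hedged example; the checkpoint name is only illustrative:

```python
from transformers import AutoConfig, AutoTokenizer

checkpoint = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# GPT-J ships without a dedicated pad token, so reuse the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token

config = AutoConfig.from_pretrained(checkpoint, pad_token_id=tokenizer.eos_token_id)
```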
transformers
24,126
closed
Run audio classification example using "facebook/hubert-base-ls960" model got stuck when use deepspeed, but works for wav2vec2
### System Info I was trying to run the audio classification example using "facebook/hubert-base-ls960" with deepspeed on a 3 gpu node. The trainer and the model got stuck at the first training step. However, if I change only the `--model_name_or_path=facebook/hubert-base-ls960` to `--model_name_or_path=facebook/wav2vec2-base`, I am able to run the audio classification example using "facebook/wav2vec2-base" without problem. Not sure if other people also have this issue. ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. clone and install huggingface repo `https://github.com/huggingface/transformers.git` 2. go to `examples/pytorch/audio-classification` folder 3. save the config into config/stage2.json ``` { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto", "torch_adam": true, "adam_w_mode": true } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": "auto", "contiguous_gradients": true }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` 4. execute the following command line and the scripts stucks the the first step of training ``` deepspeed --num_gpus=3 run_audio_classification.py \ --model_name_or_path facebook/hubert-base-ls960 \ --dataset_name common_language \ --audio_column_name audio \ --label_column_name language \ --output_dir ac-base-lang-id \ --overwrite_output_dir \ --remove_unused_columns False \ --do_train \ --do_eval \ --fp16 \ --learning_rate 3e-4 \ --max_length_seconds 16 \ --attention_mask True \ --warmup_ratio 0.1 \ --num_train_epochs 10 \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 1 \ --per_device_eval_batch_size 1 \ --dataloader_num_workers 8 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --metric_for_best_model accuracy \ --save_total_limit 3 \ --seed 0 \ --cache_dir /nws/user/hertin/.cache/huggingface \ --deepspeed config/stage2.json ``` ### Expected behavior 5. the tail of the output looks like this and the training got stuck: ```No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.0006883144378662109 seconds [INFO|trainer.py:1777] 2023-06-08 15:09:13,470 >> ***** Running training ***** [INFO|trainer.py:1778] 2023-06-08 15:09:13,470 >> Num examples = 22,194 [INFO|trainer.py:1779] 2023-06-08 15:09:13,470 >> Num Epochs = 10 [INFO|trainer.py:1780] 2023-06-08 15:09:13,470 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1781] 2023-06-08 15:09:13,470 >> Total train batch size (w. 
parallel, distributed & accumulation) = 24 [INFO|trainer.py:1782] 2023-06-08 15:09:13,470 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1783] 2023-06-08 15:09:13,470 >> Total optimization steps = 9,250 [INFO|trainer.py:1784] 2023-06-08 15:09:13,471 >> Number of trainable parameters = 90,379,693 0%| | 0/9250 [00:00<?, ?it/s] ```
06-08-2023 22:29:24
06-08-2023 22:29:24
cc @sanchit-gandhi @pacman100 <|||||>That's super weird - is there any activity on your GPU? Thought it might be an issue with SpecAug + DeepSpeed, but we have the config params set the same in HuBERT and Wav2Vec2. So it must be a DeepSpeed bug. Are you able to see with any more detail where it hands? E.g. maybe with a torch profile, or just killing the programme when it hangs and seeing which line it got to?<|||||>When it gets stuck, the GPU utility goes to 100%. When I Ctrl+C to interrupt, I see the following traceback: ``` 0%| | 0/9250 [00:00<?, ?it/s] C[2023-06-12 10:59:44,481] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234292 2023-06-12 10:59:44,581] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234292 raceback (most recent call last): File "/nws/user/hertin/softwares/miniconda3/envs/slullm/bin/deepspeed", line 6, in <module> main() File "/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/site-packages/deepspeed/launcher/runner py", line 570, in main result.wait() File "/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py", line 1189, in wait return self._wait(timeout=timeout) File "/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py", line 1917, in _wait (pid, sts) = self._try_wait(0) File "/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/subprocess.py", line 1875, in _try_wait (pid, sts) = os.waitpid(self.pid, wait_flags) File "/nws/user/hertin/softwares/miniconda3/envs/slullm/lib/python3.9/site-packages/deepspeed/launcher/runner py", line 562, in sigkill_handler result_kill = subprocess.Popen(kill_cmd, env=env) ameError: free variable 'kill_cmd' referenced before assignment in enclosing scope 2023-06-12 10:59:44,867] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234293 [2023-06-12 10:59:45,152] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 1234294 2023-06-12 10:59:45,515] [INFO] [launch.py:323:sigkill_handler] Main process received SIGTERM, exiting ```<|||||>Hey @Hertin, this indeed looks like a DeepSpeed bug (i.e. we see deepspeed hang before launching the process). Not sure why we're getting it for HuBERT and not Wav2Vec2 though 🤔 Could you verify your transformers version + deepspeed version please? ``` transformers-cli env python -c "import deepspeed; print(deepspeed.__version__)" ```<|||||>Thanks for the reply. This is my transformer and deepspeed version ``` transformers-cli env ``` ``` Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ``` python -c "import deepspeed; print(deepspeed.__version__)" ``` ``` Setting ds_accelerator to cuda (auto detect) 0.9.3 ```<|||||>We can try installing to the latest version of deepspeed (0.9.4) but I don't think this is going to fix it... I'm not sure here - I think this is a question for the deepspeed repo to be honest! 
If you could share with them a reproducible code snippet of the issue they should be able to dive deeper on why deepspeed is hanging in the way it is. Unfortunately this is out of the scope of transformers at this point<|||||>Thanks for your time. I will close this issue as it is more likely a deepspeed issue.<|||||>Thanks @Hertin and sorry we were not able to find a fix! Hope it goes well asking on the DS repo
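As a general aside on pinning down where a run hangs (not specific to this issue): the standard library's `faulthandler` can dump every thread's Python stack on demand, which is often enough to see the line DeepSpeed is stuck on. A minimal sketch (the signal choice and the timeout are arbitrary):

```python
import faulthandler
import signal

# Dump all Python thread stacks when the process receives SIGUSR1
# (trigger it from another shell with `kill -USR1 <pid>`).
faulthandler.register(signal.SIGUSR1, all_threads=True)

# Or dump automatically if the process makes no progress for 10 minutes.
faulthandler.dump_traceback_later(timeout=600, repeat=True)
```

`py-spy dump --pid <pid>` is an alternative that needs no changes to the training script.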
transformers
24,125
closed
Fix SAM OOM issue on CI
# What does this PR do?
06-08-2023 20:40:10
06-08-2023 20:40:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,124
closed
Fix Pipeline CI OOM issue
# What does this PR do?
06-08-2023 20:39:28
06-08-2023 20:39:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>> 1. Do you think this memory handling is something we should extend to most model tests by default? Is there too much of an overhead repeatedly clearing the cache / GC or other reasons it's not suitable? This cleanup is only necessary for integration tests, but yes, I think it's best to apply this to all model (integration) tests. > 2. Could we create a general utility in testing utils to avoid some of the repeated code e.g. Yes! I am also thinking if we should define a subclass of unittest.TestCase and have a common `def tearDown`. Let's talk this and/or your suggestion above later.
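For reference, a minimal sketch of what such a shared base class could look like (the class name is made up; the real helper would live in the testing utilities):

```python
import gc
import unittest

import torch


class ClearMemoryTestCase(unittest.TestCase):
    """Common cleanup for model integration tests that allocate GPU memory."""

    def tearDown(self):
        super().tearDown()
        # drop Python-level references first, then release cached CUDA blocks
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```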
transformers
24,123
closed
Fix XGLM OOM on CI
# What does this PR do? Same as in #24122 and #24106
06-08-2023 20:09:47
06-08-2023 20:09:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Going to merge as this is just the same as the other PRs. No need to bother the core maintainers too much.
transformers
24,122
closed
Fix TF Rag OOM issue
# What does this PR do? It seems the only thing needed is to call `gc.collect()`. Thanks @Rocketknight1 for continuously trying.
06-08-2023 19:37:43
06-08-2023 19:37:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,121
closed
Slow (i.e., Python) Tokenizer `batch_encode_plus` for Input as `List[PreTokenizedInput]` or `List[PreTokenizedInputPair]`
# What does this PR do? Current `batch_encode_plus()` should support input type including`List[PreTokenizedInput]` and `List[PreTokenizedInputPair]` by doc. However, a simple example would incur the error: ``` >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=False) >>> tokenizer([['I', 'love', 'you'], ['I', 'love', 'you']]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2556, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2642, in _call_one return self.batch_encode_plus( File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2833, in batch_encode_plus return self._batch_encode_plus( File "/home/ubuntu/anaconda3/envs/fid-bert/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 731, in _batch_encode_plus ids, pair_ids = ids_or_pair_ids ValueError: too many values to unpack (expected 2) ``` The fixed version would properly tokenize such inputs without errors: ``` >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=False) >>> tokenizer([['I', 'love', 'you'], ['I', 'love', 'you']]) {'input_ids': [[101, 100, 2293, 2017, 102], [101, 100, 2293, 2017, 102]], 'token_type_ids': [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]} ``` The "fast" tokenizer written by Rust should be fixed accordingly as well. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-08-2023 19:04:24
06-08-2023 19:04:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24121). All of your documentation changes will be reflected on that endpoint.<|||||>> Hey! Thanks a lot for adding this 😉 Would you mind adding a test? To make sure that List of input pairs/list works? (in this state I think a list of pairs is passed as a list instead or a pair of list no? Hi @ArthurZucker, thanks a lot for the quick reply! Sure, I will add a test to this. I am not very sure about the current state/assumption, but will take a look and get back to you on this! (Separately I think the fast tokenizer suffers from the same problem here -- I will try to make another PR regarding that or at least open a feature request so that the community can contribute)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,120
closed
Contrastive Search peak memory reduction
# Contrastive Search memory reduction via sequential topk embedding recovery This PR describes a new feature for contrastive search generation that may be of interest to the community. Problem: Contrastive search is an effective method for LLM text generation, but as currently implemented requires far more maximum memory (tested with VRAM) than comparable methods like nucleus search. Solution: Extra memory (ie more than greedy generation) required for contrastive search primarily comes from two sources: storing of the last hidden layers per token and the parallelized computation of the last hidden layer embeddings for `top_k` tokens. This PR addresses the second source by providing a switch to sequential last hidden layer computation. The result is that far less maximum memory is required during generation: for example, generation max memory usage for Llama 13b using Int4 quantization (@qlora) for 1k tokens reduces from >15GB to ~9GB. The generation process necessarily becomes somewhat slower as well. Code Snippet: Pass the kwarg `low_memory` to the generation as follows: ```python model.generate(input_ids, top_k=4, penalty_alpha=0.6, low_memory=True) ``` No additional modules are required for use. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante Anyone who would like to review is welcome!
06-08-2023 17:46:51
06-08-2023 17:46:51
To make our CI go green, you may need to: 1. rebase with `main` (LMK if you need instructions) 2. run `make fixup` and commit the changes<|||||>Thanks very much for the review and good comments and edits @gante ! I have committed your changes and am adding the tests to `transformers/tests/generation/test_utils.py` and will commit that when done. I might need some pointers for rebasing the PR after:)<|||||>Instructions to rebase: 1. get the latest `main`. If your fork is not synched with upstream, you may need to follow [these instructions](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork#syncing-a-fork-branch-from-the-web-ui) before running the commands below. ``` git checkout main git pull ``` 2. rebase your branch ``` git checkout your_branch git rebase origin/main ``` 3. force push your changes ``` git push origin your_branch -f ```<|||||>Thanks for the rebase instructions! I have rebased and added the required tests. While testing, I found and removed a bug that caused the contrastive search to degenerate into greedy generation. After fixing this issue, however, the low-memory contrastive search still does not yield exactly the same output as batched (normal) contrastive search. After looking into this today, it seems that this issue is caused by numerical errors in forward passes between batched versus unbatched inputs. These errors seem to propegate in the generation process (via saved hidden layers) such that a dozen tokens or so after generation starts, one starts to get different tokens. I am also not sure how the memory test should be performed with many models sequentially. I typically have been measuring the footprint on GPU, and with one model the low-memory version has a smaller footprint for any given model tested. But footprint typically does not decrease predictably between model initializations, so this test will likely fail as written. I'd be happy to go into the former issue in more detail, or I can simply substitute a test to show that the output is approximately the same for batched versus unbatched contrastive search. And would be happy to change the memory test to include only one model too<|||||>Although with some more testing it seems that the numerical errors are not propegating, but there is a problem with the past key value cache in the low memory approach. Will work on fixing this!<|||||>OK all is fixed and ready to go. Selecting `low_memory` does not change the output tokens for contrastive search but reduces the memory footprint for longer (>1k tokens) sequence generation by the amount mentioned above. Generally the longer the sequence length, the larger the difference between low memory and normal contrastive search. I am not sure how to write the memory test in the style of the other tests which iterate through a set of models, as unless the models are sorted by increasing size then the test will fail (memory measurement via `torch.cuda.max_memory_allocated()` does not reliably decrease between model re-initializations or even cache cleaning). Final note, the added code is a bit messy and could be reduced to around 20 lines if https://github.com/huggingface/transformers/issues/17016 were to be implemented. 
I can clean it somewhat if required.<|||||>> I am not sure how to write the memory test in the style of the other tests which iterate through a set of models, as unless the models are sorted by increasing size then the test will fail (memory measurement via torch.cuda.max_memory_allocated() does not reliably decrease between model re-initializations or even cache cleaning). Could you share a memory measurement script then, so we can keep it here in the PR for future reference? :)<|||||>No problem, for a newly spun up single GPU the following can be used to check the memory reduction. ```python !pip install -q -U git+https://github.com/blbadger/transformers.git import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM # any model compatible with contrastive search tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") input_ids = tokenizer.encode('This is a new prompt', return_tensors='pt').to('cuda') model = model.to('cuda') low_output = model.generate(input_ids, top_k=4, penalty_alpha=0.8, low_memory=True, max_new_tokens=600) low_mem = torch.cuda.max_memory_allocated() high_output = model.generate(input_ids, top_k=4, penalty_alpha=0.8, low_memory=False, max_new_tokens=600) high_mem = torch.cuda.max_memory_allocated() print (torch.all(low_output == high_output)) print (low_mem, high_mem) ``` I went ahead and removed the memory portion of the contrastive search test module, but we can add it back if necessary.<|||||>@blbadger Thank you for the script! (I've confirmed that it works as expected on my end 👍 ) To get the CI green you'll need to run `make fixup` and then commit the changes. You also have errors on the test that's being added on this PR :) As soon as our CI becomes green, I'll tag a core maintainer so we can merge the PR 🙌 <|||||>@gante you are most welcome, thanks for flagging the test failing too. I have moved the `low_memory` kwarg to the config, cleaned up the low memory code, and fixed up the test module. It looks like we are all set with respect to CI requirements except for the code formatting. Is there a way you recommend fixing this so that `black` does not throw an error?<|||||>@blbadger try running `make fixup` and then committing :) If that fails, then try rebasing.<|||||>@gante thanks! `make fixup` completes but there were no changes to commit, same goes for rebasing after syncing the branch but unfortunately `black` still fails with ``` Oh no! 💥 💔 💥 2 files would be reformatted, 2482 files would be left unchanged. Exited with code exit status 1 ``` <|||||>Just heads up: with (much) more testing it appears that the batch versus no-batch numerical errors mentioned above do indeed propegate such that choosing `low_memory` results in divergence from normal contrastive search for long outputs. It seems this divergence is not picked up by the existing test suite because these outputs are limited in length, and because the models tested are relatively small. For an example of this divergence, for Guanaco 33b loaded in `torch.bfloat16` with an input prompt `Write a full Shakespearean sonnet about the virtues of apples` with `low_memory=False` gives ``` When like a ripe and ruddy cherub's face, The apple doth in greenness shine so bright, Its luscious juice and crispness doth embrace, A feast for taste, both sweet and mild of might. 
Yet more than this, it doth our health impart, With vitamins rich, and fiber to sustain, Our bodies nourished, and our spirits cheered, In vigor and in joy, we're once again regained. Oh, let us praise the apple, in each part, For its delights so pure, so fresh and fair, A gift from Nature's bounty, without art, A treasure true, that ne'er shall disappear. ``` but `low_memory=True` gives ``` When like a ripe and ruddy cherub's face, The apple doth in greenness shine so bright, Its luscious juice and crispness doth embrace, A feast for taste, both sweet and mild of might. Yet more than this, it doth our health impart, Its vitals rich, a cure for many ills, In olden days, 'twas deemed a symbol art, Of love and beauty, a fruit most fulfills. O then, let us rejoice in its bounty great, And praise the Lord who doth such gifts bestow, Whose handiwork doth manifest such treat, A blessed fruit, the apple, let us bow. ``` for the same penalty_alpha and top_k. I would not really classify this as an issue (as the low memory output is likely more numerically accurate than standard contrastive search) but we might want to include this in the documentation for to avoid confusion. <|||||>@gante OK looks like the CI is green, running `make quality` after `make fixup` was able to reformat the code properly.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts quick TL;DR -- This PR adds a new flag that, when turned on, reduces the memory requirements of contrastive search. There are small numerical, which is expected -- the order of operations with floating point may produce minor variations, as usual in this type of changes<|||||>@gante Happy to do so, I appreciate all your help with this PR! Would it be better to keep the low memory flag optional and then cast to a bool, or to enforce the low memory flag as a bool? I can see benefits of both options. @amyeroberts Any time, thanks for the detailed review! 1. Perhaps a better name would be `low_cache_memory` or `single_pkv` as we are here avoiding the generation of past key value caches for all `top_k` tokens. Is that on the right track? 2. Agreed that the code can be refactored, I will work on integrating your suggestions and will try to simplify the logic. <|||||>@amyeroberts I'd rather keep the generic name (`low_memory`) with an unequivocally unset state by default (`None`) unless you strongly oppose :) Here's my reasoning: - Naming: `.generate()` already has a very big number of configuration parameters, to the point of being one of the most challenging problems to manage. It's very hard to add new options without flags, making the flag discoverability process even harder. I'll gladly take any chance to consolidate flag names, even if it comes at the cost of possible extra code logic in the future 🤗 If some new `.generate()`-level memory-reduction technique comes out, and if it is mutually exclusive with the existing techniques, then we can bump the flag complexity by allowing it to be a string. - Default: In the recent transition from `model.config` to `model.generate_config`, all non-`None` defaults were a massive pain -- we can't distinguish a default value from an intentionally set value that matches the default. A `None` default protects us from that :) _____________________________ Regarding complex logic, it won't be a problem as soon as I get my hands on refactoring generate 💪 <|||||>@gante OK, understood :) * For `low_memory` I agree for the generation config that generic is better. 
For the `contrastive_search` method, I'd still prefer the kwarg to be something clearer e.g. `sequential`. WDYT? * Completely agree - I'd prefer `None` as default! * Complex logic - We're sweeping things under the rug a bit but OK if a refactor is happening soon. Only thing I'd mention is that the code at the moment makes reviewing hard and longer. The sooner it's tidied up, the quicker new features can (safely) be added! <|||||>> For low_memory I agree for the generation config that generic is better. For the contrastive_search method, I'd still prefer the kwarg to be something clearer e.g. sequential. WDYT? Sounds good 👍 <|||||>Sounds great to me too! @gante would you like me to go ahead and change the kwarg in contrastive_search and assign it the value of `low_memory` in the generation config?<|||||>@blbadger yes 👍 <|||||>@gante @amyeroberts just renamed the flag, looks like we are ready to go!<|||||>Awesome, merging the PR 🔥 Thank you for the cool contribution @blbadger! <|||||>My pleasure, and thanks so much for all your help @gante @amyeroberts!
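A variant of the measurement script earlier in this thread that resets the peak-memory counter between runs, so it does not rely on a freshly started process (a sketch only; it assumes a `transformers` build that already includes this PR's `low_memory` flag):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")
input_ids = tokenizer.encode("This is a new prompt", return_tensors="pt").to("cuda")

def peak_memory(low_memory: bool):
    # clear the allocator cache and peak counter so each run is measured independently
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    output = model.generate(
        input_ids, top_k=4, penalty_alpha=0.8, low_memory=low_memory, max_new_tokens=600
    )
    return output, torch.cuda.max_memory_allocated()

low_output, low_mem = peak_memory(True)
high_output, high_mem = peak_memory(False)
print(torch.all(low_output == high_output), low_mem, high_mem)
```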
transformers
24,119
open
Different results when using `__call__` and `encode` and `encode_plus` of (fast/slow) bert tokenizers
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the differences: [hf_github_issue.zip](https://github.com/huggingface/transformers/files/11691030/hf_github_issue.zip) contains all the input data used to reproduce the issue. Please download and unzip. 1. I got my customized token2id mapping and stored the three files in one folder let's say "hf_github_issue". 1. `vocab.txt`: my own vocabulary 2. `special_tokens_map.json` 3. `tokenizer_config.json` 2. `raw_transformed_uncased.csv`: each row in this csv is a **pre-tokenized** sequence (i.e., each cell is a word that does not need further tokenization) ``` import pandas as pd from datasets import Dataset raw_transformed_df = pd.read_csv("hf_github_issue/raw_transformed.csv") training_dataset = Dataset.from_pandas(raw_transformed_df, preserve_index=False) ``` 3. Load as a fast BertTokenizer: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("hf_github_issue", use_fast=True) # method 1 print(tokenizer.encode(list(training_dataset[0].values()), is_split_into_words=True)) # method 2 print(tokenizer.encode_plus(list(training_dataset[0].values()), is_split_into_words=True)) # method 3 print(tokenizer(list(training_dataset[0].values()), is_split_into_words=True)) ``` 4. Load as a slow BertTokenizer ``` tokenizer = AutoTokenizer.from_pretrained("hf_github_issue", use_fast=False) # method 4 print(tokenizer.encode(list(training_dataset[0].values()), is_split_into_words=True)) # method 5 print(tokenizer.encode_plus(list(training_dataset[0].values()), is_split_into_words=True)) # method 6 print(tokenizer(list(training_dataset[0].values()), is_split_into_words=True)) # method 7 print(tokenizer.encode(list(training_dataset[0].values()))) ``` ### Expected behavior I doubt this is not a bug but more like my misunderstanding from either creating the custom tokenizer or choosing the right API(s). Any help is appreciated. I thought all seven methods here will give the same output. 
However, only `method 7` gives my expected output: ``` [3, 8, 16, 29, 32, 48, 55, 61, 77, 84, 90, 103, 111, 115, 131, 139, 145, 161, 167, 174, 187, 194, 201, 215, 225, 234, 244, 255, 263, 276, 285, 288, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 307, 317, 327, 338, 343, 354, 359, 369, 370, 371, 374, 384, 391, 404, 412, 418, 429, 430, 431, 440, 448, 453, 461, 470, 479, 493, 4] ``` Method 1-6 gives the same (unexpected) output: ``` {'input_ids': [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` In particular, I would like to get the `__call__` function work as expected so that I may use the example [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py)
06-08-2023 17:08:01
06-08-2023 17:08:01
It seems that the misunderstanding of `is_split_into_words` from #8217 is the main issue here. But the fix in #24121 should also be helpful to align the expected outputs of slow tokenizer when input is a list of list of pretokenized strings.<|||||>cc @ArthurZucker :D<|||||>Pr reviewed 😉 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
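For reference, a minimal sketch (using the stock `bert-base-uncased` vocabulary rather than the custom one above) of how `is_split_into_words` distinguishes a single pre-tokenized sequence from a batch of pre-tokenized sequences:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# A flat list of strings is ONE pre-tokenized sequence...
single = tok(["I", "love", "you"], is_split_into_words=True)
print(single["input_ids"])  # one encoded sequence

# ...while a list of lists is a batch of pre-tokenized sequences.
batch = tok([["I", "love", "you"], ["so", "much"]], is_split_into_words=True)
print(batch["input_ids"])  # two encoded sequences
```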
transformers
24,118
closed
Remove decoder_input_ids from RAG dummy inputs
cc @ydshieh The old RAG dummy inputs didn't have `decoder_input_ids` but the new ones do - this seems like the most likely cause of the memory blowup, because RAG probably does a lot of weird retrieval stuff.
06-08-2023 15:48:14
06-08-2023 15:48:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>Unfortunately, the problematic situation (GPU usage) I described on the Slack channel is still the same. It still takes 5-6 GB of extra memory at some point, compared to the commit before `814de8fa`.<|||||>Closing this because the dummies turned out not to be the problem after all!
transformers
24,117
closed
RuntimeError: CUDA driver error: invalid argument
### System Info - `transformers` version: 4.29.1 - Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.13 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained('gpt2') model = AutoModelForCausalLM.from_pretrained('gpt2', device_map = 'auto') tokenizer.padding_side = "left" tokenizer.truncation_side = "left" if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token tokenizer.pad_token_id = tokenizer.eos_token_id device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") if torch.cuda.is_available(): model = model.to(device=device) prompt = "What is the capital of India?" input_ids = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True, add_special_tokens=False).input_ids.to( dtype=torch.long, device=device ) max_new_tokens = 10 model.eval() with torch.no_grad(): generated_ids = model.generate( input_ids, max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id, ) preds = [ tokenizer.decode( g, skip_special_tokens=True, clean_up_tokenization_spaces=True ).strip() for g in generated_ids ] ``` Running the above script produces the following error ``` Using pad_token, but it is not set yet. ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /home/scripts/temp.py:28 in <module> │ │ │ │ 25 │ │ 26 model.eval() │ │ 27 with torch.no_grad(): │ │ ❱ 28 │ generated_ids = model.generate( │ │ 29 │ │ input_ids, │ │ 30 │ │ max_new_tokens=max_new_tokens, │ │ 31 │ │ pad_token_id=tokenizer.eos_token_id, │ │ │ │ /home/rudra/miniconda3/envs/FM/lib/python3.9/site-packages/torch/utils/_c │ │ ontextlib.py:115 in decorate_context │ │ │ │ 112 │ @functools.wraps(func) │ │ 113 │ def decorate_context(*args, **kwargs): │ │ 114 │ │ with ctx_factory(): │ │ ❱ 115 │ │ │ return func(*args, **kwargs) │ │ 116 │ │ │ 117 │ return decorate_context │ │ 118 │ │ │ │ /home/rudra//miniconda3/envs/FM/lib/python3.9/site-packages/transformers/g │ │ eneration/utils.py:1515 in generate │ │ │ │ 1512 │ │ │ │ ) │ │ 1513 │ │ │ │ │ 1514 │ │ │ # 11. 
run greedy search │ │ ❱ 1515 │ │ │ return self.greedy_search( │ │ 1516 │ │ │ │ input_ids, │ │ 1517 │ │ │ │ logits_processor=logits_processor, │ │ 1518 │ │ │ │ stopping_criteria=stopping_criteria, │ │ │ │ /home/rudra//miniconda3/envs/FM/lib/python3.9/site-packages/transformers/g │ │ eneration/utils.py:2385 in greedy_search │ │ │ │ 2382 │ │ │ # if eos_token was found in one sentence, set sentence to finished │ │ 2383 │ │ │ if eos_token_id_tensor is not None: │ │ 2384 │ │ │ │ unfinished_sequences = unfinished_sequences.mul( │ │ ❱ 2385 │ │ │ │ │ next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_te │ │ 2386 │ │ │ │ ) │ │ 2387 │ │ │ │ │ │ 2388 │ │ │ │ # stop when each sentence is finished │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: CUDA driver error: invalid argument ``` ### Expected behavior No error and some text generated by the model. Works perfectly on CPU.
06-08-2023 15:43:47
06-08-2023 15:43:47
I can reproduce this with the much simpler: ``` import transformers gpt2_generator = transformers.pipeline('text-generation', model='gpt2', device=1) sentences = gpt2_generator("To be honest, neural networks", do_sample=True, top_k=50, temperature=0.6, max_length=128, num_return_sequences=3) for sentence in sentences: print(sentence["generated_text"]) print("="*50) ``` This is example code from the transformers docs and should "just work". It feels like an environment issue, but the error is super indistinct. `CUDA_LAUNCH_BLOCKING=1` does not produce a clearer error. I've tried in a fresh environment, with different versions of transformers, torch, and cuda to no avail. <|||||>Hey @blucz @murthyrudra 👋 I am unable to reproduce the issue on my end, which seems to indicate this is an environment issue 🤔 This is my current env: ``` - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.10.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (gpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,116
closed
fix overflow when training mDeberta in fp16
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/microsoft/DeBERTa/issues/77 (issue about transformers opened in Microsoft repo) - This issue was originally raised in the https://github.com/microsoft/DeBERTa repo which had to do with mDeberta not being able to be trained using fp16. A fix for this was implemented in the Microsoft repo by @BigBird01 but did not yet make it to HuggingFace. I was interested in training mDeberta models on small hardware (e.g. a 3070, T4) so I updated the HF implementation with the changes from the Microsoft repo. I tried to only bring over the minimal changes needed to get the fp16 training to work. - I checked that existing tests passed and also used this code to successfully train an mDeberta model in fp16 on Squad2 that can be found [here](https://huggingface.co/sjrhuschlee/mdeberta-v3-base-squad2) which is not currently possible with the main branch of transformers. I'm unsure if there is a good way to add an additional test to make sure mDeberta-V3 training works in fp16 in the CI. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Hey, based on the recommendations from the PR template (and git blame) I decided to tag @ArthurZucker and @sgugger in case you may be interested.
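For context on why fp16 training is fragile here, a tiny, generic illustration of the half-precision range (this is not the repository code; the actual change lives in `modeling_deberta_v2.py`):

```python
import torch

# float16 overflows just above 65504, so large attention scores become inf and then nan
print(torch.finfo(torch.float16).max)              # 65504.0
print(torch.tensor(70000.0, dtype=torch.float16))  # tensor(inf, dtype=torch.float16)

# the usual mitigation is to keep the sensitive intermediate math in float32
# and cast back down afterwards (illustrative values only)
scores_fp16 = (torch.randn(2, 4) * 200).to(torch.float16)
scale = torch.sqrt(torch.tensor(64.0))             # e.g. sqrt(head_dim), kept in float32
scaled = (scores_fp16.float() / scale).to(torch.float16)
```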
06-08-2023 15:40:54
06-08-2023 15:40:54
cc @younesbelkada @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I used this code block to check results. This was run on: - Ubuntu 20.04.4 LTS - NVIDIA 3070 - CUDA Version: 11.7 ```python import torch from transformers import AutoTokenizer, AutoModelForQuestionAnswering from transformers.pipelines import QuestionAnsweringPipeline tokenizer = AutoTokenizer.from_pretrained("sjrhuschlee/mdeberta-v3-base-squad2") model = AutoModelForQuestionAnswering.from_pretrained( "sjrhuschlee/mdeberta-v3-base-squad2", # torch_dtype=torch.float16, # torch_dtype=torch.bfloat16, # load_in_8bit=True, ) pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # device=... was removed for 8bit ``` **Running on Main Branch** Running the above code using `torch.float16` on the main branch gives me no answer ```python # with torch.float16 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # [] ``` Running with `torch.bfloat16` and `torch.float32` gives me the expected answer ```python # with torch.bfloat16 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # {'score': 0.98369300365448, 'start': 33, 'end': 41, 'answer': ' Berlin.'} # with torch.float32 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # {'score': 0.9850791096687317, 'start': 33, 'end': 41, 'answer': ' Berlin.'} ``` Also running in `8bit` works ```python # with load_in_8bit=True pipe = QuestionAnsweringPipeline(model, tokenizer) # {'score': 0.9868391752243042, 'start': 33, 'end': 41, 'answer': ' Berlin.'} ``` **Running on the PR** The change in this PR also enables mDeberta models to run at inference in `torch.float16` which wasn't possible before. And it doesn't look to affect any of the other dtypes. ```python # with torch.float16 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # {'score': 0.9848804473876953, 'start': 33, 'end': 41, 'answer': ' Berlin.'} # with torch.bfloat16 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # {'score': 0.9841369986534119, 'start': 33, 'end': 41, 'answer': ' Berlin.'} # with torch.float32 pipe = QuestionAnsweringPipeline(model, tokenizer, device=torch.device("cuda:0")) # {'score': 0.9850791096687317, 'start': 33, 'end': 41, 'answer': ' Berlin.'} # with load_in_8bit=True pipe = QuestionAnsweringPipeline(model, tokenizer) # {'score': 0.9870386719703674, 'start': 33, 'end': 41, 'answer': ' Berlin.'} ```<|||||>I also noticed that the TF implementation in DebertaV2 has the same line https://github.com/huggingface/transformers/blob/2e2088f24b60d8817c74c32a0ac6bb1c5d39544d/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L678-L679 I'm not too familiar with TF though so I'm not sure if this change should be made there as well. <|||||>@sjrl To the best of my knowledge, we don't support training in fp16 in TF, so less of a risk here. I'd be pro updating in TF, so that the implementations are aligned and it's potentially safer. cc @Rocketknight1 for his thoughts. <|||||>Yes, we support mixed-precision float16/bfloat16 training in TensorFlow, but in general we still expect a 'master' copy of the weights to remain in float32. We're planning some exploration to see if we can get Keras to accept full (b)float16 training, but it might require some refactoring!<|||||>@Rocketknight1 should I go ahead and update the TF implementation as well then? <|||||>@sjrl Yes please! 
Better numerical stability will be nice to have once we've enabled full float16 training<|||||>@sjrl - Are there any other changes to add? Otherwise I think we're good to merge :) <|||||>@amyeroberts You're welcome, and that's it for the changes!
transformers
24,115
closed
Experiment with static past key/value buffer
This PR is just to see if this could reside in transformers or not. ## Motivation We suspect [the concatenations of the key/value buffer](https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/src/transformers/models/gpt2/modeling_gpt2.py#L321-L322) at each generation step to be expensive. The idea is to modify the buffer in place after a unique allocation. FasterTransformer [does preallocate the kv cache](https://github.com/NVIDIA/FasterTransformer/blob/c6e8f60ec40da218804a60e6aa986903e7fa8594/src/fastertransformer/models/decoding/Decoding.cc#L83C9-L84) (among others). Preallocating may also help `torch.compile` according to @Chillee, although I don't quite get why yet (there are still dynamic shapes in the model itself, so why care about model I/O static shapes?). For reference: https://huggingface.slack.com/archives/C055NT312LW/p1683038064467109 & https://huggingface.slack.com/archives/C055NT312LW/p1685570357532069 ## Current (ugly) workflow Very temporary ```python cache_size = max_new_tokens past_key_values = tuple([ tuple([torch.empty( batch_size, model.config.n_head, cache_size, model.config.n_embd // model.config.n_head, # head dimension dtype=dtype, device=device ) for _ in range(2)]) for _ in range(model.config.n_layer)]) model.enable_static_kv_cache() res = model.generate( **inputs, num_beams=1, min_new_tokens=max_new_tokens, max_new_tokens=max_new_tokens, past_key_values=past_key_values ) ``` ## Results Some optimizations may still be missing - right now in small model / small batch size setting this is not interesting. Results are with PyTorch 2.0.1 eager. Script: https://github.com/fxmarty/transformers-preallocate-kv-cache/blob/main/run.py Raw results: https://docs.google.com/spreadsheets/d/15P1o9vDcXOSeLAwLUWiqXQatHxtQYBa_xcyLZanOjQQ/edit?usp=sharing ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/19db7654-819d-4939-8306-e0b3e5691af5) ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/0dd50f85-a3a0-4571-aa80-2a748b9c9942) ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/3a1e6efb-2358-40f4-bf8e-d09261296fd5) ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/adc4ec1c-529c-44b6-8e77-105b788e222a) ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/e41bc4ca-1d3b-4625-9ec6-283b1442420b) ![image](https://github.com/fxmarty/transformers-preallocate-kv-cache/assets/9808326/48294201-9798-4083-9776-3ef4dbd6ca12) ## Misc The current implementation of `valid_past_index` being `Optional[int]` is quite debatable and may hamper readability. I'm also not sure whether adding methods in `PreTrainedModel` that are specific to decoders is OK? Missing test right now is to generate a sequence length shorter than the KV buffer. Some todos: - [ ] `past_key_values` buffer should be initialized in the background, instead of requiring the user to initialize it himself and requiring to pass `generate(**inputs, past_key_values=past_key_values)` (as currently). - [ ] Current implementation is likely to break with `accelerate` with naive pipeline parallelism, as the buffer is currently initialized on a single device - [ ] Preallocated kv cache still does not help with small models / batch size. - [ ] Support an iterative buffer, e.g. that auto-extends each 512 tokens, instead of initializing a buffer of size `max_new_tokens`. This may help reduce memory usage (and speed? probably not).
- [ ] Implement tests - [ ] Should this be in optimum or transformers? - [ ] Support all (is it possible?) decoding strategies, instead of currently only `greedy_search` - [ ] Have it work with cross-attention - [ ] Preallocate `attention_mask` as well - [ ] Test on CPU as well
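To make the "modify the buffer in place" idea above concrete, a toy, self-contained illustration (shapes follow GPT-2's `(batch, num_heads, seq, head_dim)` layout; `valid_past_index` mirrors the naming discussed in this PR, but this is not the actual modeling code):

```python
import torch

batch_size, num_heads, cache_size, head_dim = 1, 12, 128, 64
kv_buffer = torch.zeros(batch_size, num_heads, cache_size, head_dim)

valid_past_index = 10                                          # positions already filled
new_states = torch.randn(batch_size, num_heads, 1, head_dim)   # key (or value) of the new token

# write into the preallocated buffer instead of torch.cat-ing a new tensor each step
kv_buffer[:, :, valid_past_index : valid_past_index + 1, :] = new_states
```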
06-08-2023 15:20:21
06-08-2023 15:20:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24115). All of your documentation changes will be reflected on that endpoint.<|||||>In order to efficiently work with the PyTorch team here and to figure out exactly what needs to be done for a super fast generate method, I'd suggest to open a new benchmark repo that includes this PR for say three important LLM models: - GPT2 - Llama - starcoder The repo "maybe just called `transformers-generate-benchmark` should have: - copied the modeling files of GPT2, Llama, Starcoder - Use the SDPA attention for all models (just like `betterTransformers` does - you could copy it from here: https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/attention.py) - A super small generate function that we write ourselves (just greedy without any logit processor) Then in this repo we run all the benchmarking and also make it easy for the PyTorch team to reproduce the benchmarking numbers.<|||||>Sounds good - I'll continue on this PR for easy diff.
transformers
24,114
closed
doc issue from docs/source/en/model_doc/open-llama.mdx
In overview, 2nd paragraph: The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PLAM. And the model is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks. Shared input-output embedding should be from PaLM?
06-08-2023 14:45:08
06-08-2023 14:45:08
@xingener Good spot - would you like to open a PR to fix it? <|||||>> @xingener Good spot - would you like to open a PR to fix it? Sure.
transformers
24,113
closed
[`GPT2`] Add correct keys on `_keys_to_ignore_on_load_unexpected` on all child classes of `GPT2PreTrainedModel`
# What does this PR do? as per title forgot to add them in https://github.com/huggingface/transformers/pull/23256 Currently this snippet: ```python from transformers import GPT2Model model = GPT2Model.from_pretrained("gpt2") ``` Gives a big warning: ```bash Some weights of the model checkpoint at gpt2 were not used when initializing GPT2Model: ['h.10.attn.bias', 'h.5.attn.bias', 'h.7.attn.bias', 'h.0.attn.bias', 'h.11.attn.bias', 'h.8.attn.bias', 'h.1.attn.bias', 'h.9.attn.bias', 'h.2.attn.bias', 'h.4.attn.bias', 'h.6.attn.bias', 'h.3.attn.bias'] - This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` This PR fixes it by adding the correct regex expressions on `_keys_to_ignore_on_load_unexpected` for all child classes that inherit from `GPT2PreTrainedModel` cc @sgugger @patrickvonplaten
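For illustration, the kind of regex involved (the exact attribute values are defined in `modeling_gpt2.py` on each subclass; the pattern below is only meant to show that it matches the key names from the warning):

```python
import re

# illustrative pattern matching the causal-mask buffers listed in the warning above
pattern = r"h\.\d+\.attn\.bias"
print(bool(re.search(pattern, "h.10.attn.bias")))        # True  -> ignored on load
print(bool(re.search(pattern, "transformer.wte.weight")))  # False -> still reported if unexpected
```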
06-08-2023 14:04:16
06-08-2023 14:04:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24113). All of your documentation changes will be reflected on that endpoint.
transformers
24,112
closed
LogitsProcessor - are there any examples of how to use it?
### Feature request I could not find any more examples than this one, and this one does not work. It reports some errors when used with T5. https://colab.research.google.com/drive/1ezT24sogpVyr2HJLOvXHzjv61JZJ1gMT?usp=sharing#scrollTo=0MJJZEylVO-x Are there any more examples of usage? I need to lower the scores of chosen tokens by a specified value, but any example will do. ### Motivation We need some usage examples of every feature of transformers, otherwise these features are not very useful for 90 percent of people. ### Your contribution I found this example: https://colab.research.google.com/drive/1ezT24sogpVyr2HJLOvXHzjv61JZJ1gMT?usp=sharing#scrollTo=0MJJZEylVO-x
06-08-2023 14:00:12
06-08-2023 14:00:12
cc @gante <|||||>I have found one more example for T5. Only I do not know how would I add multiple inputs, not just one, when i increase the list seq1 to more elements it does not work as sizes do not match. https://stackoverflow.com/questions/72180737/beam-search-and-generate-are-not-consistent seq1 = [ "summarize: beamsearch and generate does not give the same result"] encoding = tokenizer( seq1, padding="longest", max_length=128, truncation=True, return_tensors="pt", ) encoder_input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda") num_beams = 2 input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) input_ids = input_ids * model.config.decoder_start_token_id model_kwargs = { "encoder_outputs": model.get_encoder()( encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ) } beam_scorer = BeamSearchScorer( batch_size=1, do_early_stopping=True, num_beams=num_beams, device=model.device, ) outputs = model.beam_search(input_ids, beam_scorer, logits_processor=None, early_stopping=True, no_repeat_ngram_size=4, max_length=64, **model_kwargs, output_scores=True, return_dict_in_generate=True) # beam_search result": <|||||>@Oxi84 If there are errors in the examples, could you share the errors with a full stack trace and information about the environment being run? Keep in mind that examples are not meant to be exhaustive and there may be cases they don't cover. <|||||>Here is an example on colab - https://colab.research.google.com/drive/1TR6PWwKK4SuD7RluN_f82lOZQi_SOJnf#scrollTo=Uj4YddOGq2ee This is a basic example, where logits_processor=None from https://stackoverflow.com/questions/72180737/beam-search-and-generate-are-not-consistent. When seq1 = ["paraphrase: Beamsearch and generate give the same result."] , it works fine, but when seq1 = ["paraphrase: Beamsearch and generate give the same result.","paraphrase: Beamsearch and generate give the same result."] I do not know how to make it work. Error is: RuntimeError: The size of tensor a (28) must match the size of tensor b (14) at non-singleton dimension 3. 
import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers import LogitsProcessorList, MinLengthLogitsProcessor, BeamSearchScorer,MaxLengthCriteria, StoppingCriteriaList from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base") model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base") model.resize_token_embeddings(len(tokenizer)) model.to("cuda") seq1 = ["paraphrase: Beamsearch and generate give the same result."] encoding = tokenizer( seq1, padding="longest", max_length=128, truncation=True, return_tensors="pt", ) encoder_input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda") num_beams = 2 input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) input_ids = input_ids * model.config.decoder_start_token_id model_kwargs = { "encoder_outputs": model.get_encoder()( encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ) } #print("input_ids",input_ids) #print("model_kwargs",model_kwargs) beam_scorer = BeamSearchScorer( batch_size=1, do_early_stopping=True, num_beams=num_beams, device=model.device, ) outputs = model.beam_search(input_ids, beam_scorer, logits_processor=None, early_stopping=True, max_length=64, **model_kwargs, output_scores=True, return_dict_in_generate=True) # beam_search result": out = tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True) print("out",out) #generate results: out = model.generate(encoder_input_ids, max_length=64, early_stopping=True, num_beams=2, do_sample=False, num_return_sequences=1) out1 = tokenizer.batch_decode(out, skip_special_tokens=True) print("out1",out1) <|||||>Hey @Oxi84 👋 You are absolutely right, our docs and examples for generation are quite poor atm. It is my highest priority for the next 1-2 months -- stay tuned 💪 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,111
closed
Generate: PT's `top_p` enforces `min_tokens_to_keep` when it is `1`
# What does this PR do? Fixes #23688 Contrary to our description in the docstring, PT's `top_p` was not enforcing `min_tokens_to_keep` when it was 1 (the default). TF and FLAX were fine. This PR corrects it, and adds a check on `min_tokens_to_keep` (must be a non-negative integer)
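As a rough illustration of the intended behaviour (a sketch, not the actual implementation in this PR), a top-p filter that always keeps at least `min_tokens_to_keep` tokens could look like this:

```python
import torch

def top_p_filter(scores: torch.Tensor, top_p: float = 0.7, min_tokens_to_keep: int = 1) -> torch.Tensor:
    # Sketch only: sort scores ascending, drop the low-probability tail whose
    # cumulative mass falls outside top_p, but never drop the
    # `min_tokens_to_keep` highest-probability tokens.
    sorted_logits, sorted_indices = torch.sort(scores, descending=False)
    cumulative_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1)
    sorted_indices_to_remove = cumulative_probs <= (1 - top_p)
    # The last columns hold the highest scores: always keep them.
    sorted_indices_to_remove[..., -min_tokens_to_keep:] = False
    indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
    return scores.masked_fill(indices_to_remove, float("-inf"))

filtered = top_p_filter(torch.randn(1, 10), top_p=0.7, min_tokens_to_keep=1)
```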
06-08-2023 13:35:09
06-08-2023 13:35:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>> top_p was not enforcing min_tokens_to_keep when it was 1 From the diff - I don't see how this is resolved. The checks ensure the value of `min_tokens_to_keeps` but doesn't seem to be conditional on `top_p`. Am I missing something? <|||||>@gante I hit an issue related to this in the prior version of transformers, glad to see that it's fixed thanks! However why don't we enforce `min_tokens_to_keep >= 1`. 0 makes no sense right?<|||||>@njhill true, the initial check should be against `>=1`, patching it
transformers
24,110
closed
Update the pin on Accelerate
# What does this PR do? Move to the second patch as the minimum required version. cc @muellerzr
06-08-2023 13:34:41
06-08-2023 13:34:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,109
closed
Add Musicgen
# What does this PR do? Adds the musicgen model by fairseq to transformers This model is made of three components: 1. T5Encoder (which import as `AutoModelForTextEncoding`) 2. MusicgenDecoder (which we copy as much as possible from [`modeling_bart.py`](https://github.com/huggingface/transformers/blob/1e9da2b0a6ef964c2cf72dd715dbee991a3f49fa/src/transformers/models/bart/modeling_bart.py#L142)) 3. Encodec (which we import as `AutoModel`)
06-08-2023 13:29:34
06-08-2023 13:29:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Would be great to hear your thoughts on the design here @patrickvonplaten (adding the tests otherwise now) TODO: - [x] convert m/l checkpoints - [x] handle padded tokens from encodec (in delay pattern mask, then again when we decode) - [x] fast tests - [x] integration tests - [x] add method for unconditional generation (no need to use processor to get input ids) - [x] finish docs / docstrings<|||||>This is ready for review (fyi @ArthurZucker / @patrickvonplaten) - kindly requesting review from @sgugger!
transformers
24,108
closed
error bug on saving distributed optim state when using data parallel
The indexing typo causes an incorrect result when saving the distributed optimizer state with data parallelism enabled.
06-08-2023 13:16:47
06-08-2023 13:16:47
cc @pacman100 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,107
closed
reset accelerate env variables after each test
# What does this PR do? ## Context: For the failing tests in `test_trainer_ext`, the reason is given below:

1. These tests are run via the following command:
```
python -m pytest -v --make-reports=multi-gpu_tests_torch_cuda_extensions_gpu tests/deepspeed tests/extended
```
2. As the DeepSpeed tests are run first, even though the AcceleratorState is reset during teardown, the env variable set by Accelerate, ACCELERATE_USE_DEEPSPEED, isn't deleted (if the test isn't a script run as a subprocess). As a result, the Accelerator initialization in the extended tests creates a DeepSpeedPlugin, leading to them failing with a HFDeepSpeedPlugin config mismatch error.
3. Simple reproducer:
```
cd transformers
export CUDA_VISIBLE_DEVICES="0,1"
export RUN_SLOW="yes"
pytest -sv tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_ds_config_mismatch tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq
```

This PR fixes it by deleting all the env variables having `ACCELERATE` in them during test `tearDown`.
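A minimal sketch of that kind of cleanup (names are illustrative, not the exact code in the PR):

```python
import os
import unittest

class TrainerIntegrationTestSketch(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        # Drop any env var that Accelerate may have set (e.g. ACCELERATE_USE_DEEPSPEED)
        # so that it does not leak into the next test class.
        for key in list(os.environ.keys()):
            if "ACCELERATE" in key:
                del os.environ[key]
```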
06-08-2023 12:44:52
06-08-2023 12:44:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,106
closed
Avoid `GPT-2` daily CI job OOM (in TF tests)
# What does this PR do? Clear (as much as possible) the GPU memory allocated by torch, so the TF tests (GPT-2) get more room and @Rocketknight1's life gets easier 😆. Some (TF) tests hit OOM after #23234. Note that the changes in this PR are in the torch test files, not the TF test files! This is similar to #16881, which mentions more details. Running manually, all GPT-2 tests pass now.
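A sketch of the kind of cleanup this refers to in the torch test files (illustrative only; the helper name is made up):

```python
import gc
import torch

def cleanup_torch_gpu_memory():
    # Release Python references first, then ask PyTorch to return cached blocks
    # to the driver so other frameworks (here: TensorFlow) can allocate them.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```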
06-08-2023 11:47:33
06-08-2023 11:47:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,105
closed
[Whisper] Make tests faster
# What does this PR do? Reduces the input seq length of the Whisper tests from 1500 -> 60 frames. This in turn should speed up the tests quite considerably.
06-08-2023 10:44:09
06-08-2023 10:44:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24105). All of your documentation changes will be reflected on that endpoint.<|||||>Sorry, I am asking not because I see a test being slow, but just I saw some more Whisper test failures on daily CI, which is `tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_cpu_offload`. But yes, in general, it's best to use low number. I will take a look.<|||||>Note that the Whisper tests have already been flagged as being slow (#23736) so this should help combat this issue!<|||||>It's not because it's slow test that we use large value without really valid reason :-). Always better to make them use low values is the goal, unless it's absolute necessary. I still have questions on why we don't need to pass `input_shape` in the corresponding flax test file.<|||||>OK, in flax test file, I see ``` self.all_model_classes = ( make_partial_class(model_class, input_shape=self.init_shape) for model_class in self.all_model_classes ) ``` probably it's the reason.<|||||>Yep agreed - the seq len was unnecessarily high here :) You're spot on regarding the init shape: we have to change this based on the sequence length since Flax Whisper initialises the positional embeddings based on the context window, so if we change the seq len (= context window) we need to init the weights with the new shape
transformers
24,104
closed
Error when overriding generation config: GenerationConfig() got multiple values for keyword argument 'num_beams'
### System Info - `transformers` version: 4.30.0.dev0 (commit: 4aa13224a5bca560147a29c06b2e0597137caf3e) - Platform: Linux-5.15.0-1013-oracle-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes (launching with `accelerate`) ### Who can help? @gante @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Calling `GenerationConfig.from_pretrained` with a model that already defines `num_beams` in its configuration, and attempting to override the `num_beams` parameter (and presumably any other parameter), results in a runtime exception `got multiple values for keyword argument 'num_beams'` ```python generation_config: GenerationConfig = GenerationConfig.from_pretrained( "My-private-model", num_beams=num_beams) ``` Results in : ``` File "/app/scripts/fine_tune/./fine_tune_and_evaluate.py", line 1481, in <module> main() File "/app/scripts/fine_tune/./fine_tune_and_evaluate.py", line 1267, in main generation_config: GenerationConfig = GenerationConfig.from_pretrained( File "/app/ai_categorize_env/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 541, in from_pretrained return cls.from_dict(config_dict, **kwargs) File "/app/ai_categorize_env/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 574, in from_dict config = cls(**config_dict, **kwargs) TypeError: transformers.generation.configuration_utils.GenerationConfig() got multiple values for keyword argument 'num_beams' ``` This appears to be because of this code: https://github.com/huggingface/transformers/blob/ba695c1efd55091e394eb59c90fb33ac3f9f0d41/src/transformers/generation/configuration_utils.py#L572-L576 That is calling `cls(**config_dict, **kwargs)`, which might pass the same keyword values in twice if the `config_dict` has the property that `kwargs` does, right? I don't see a step where we remove the properties from `config_dict` that are mentioned in `kwargs`, although there is a comment right above that says: `# remove all the arguments that are in the config_dict` Wouldn't the code need to do something more like this? ``` config_dict_copy = config_dict.copy() config_dict_copy.update(kwargs) config = cls(**config_dict_copy) ``` My generation_config.json from my model is this: ```json { "decoder_start_token_id": 0, "eos_token_id": 1, "length_penalty": 0, "max_length": 32, "num_beams": 2, "num_return_sequences": 2, "output_scores": true, "pad_token_id": 0, "return_dict_in_generate": true, "transformers_version": "4.30.0.dev0" } ``` ### Expected behavior This should not throw an exception: ```python generation_config: GenerationConfig = GenerationConfig.from_pretrained( "My-model", num_beams=num_beams) ```
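Until the fix lands, one possible workaround is to avoid passing the overlapping kwarg altogether and set the attribute after loading (a sketch; `"My-private-model"` is the placeholder repo name from the report above):

```python
from transformers import GenerationConfig

# Load without overrides, then set the attribute afterwards so `num_beams`
# is never passed twice to GenerationConfig().
generation_config = GenerationConfig.from_pretrained("My-private-model")
generation_config.num_beams = 4  # desired override
```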
06-08-2023 08:48:41
06-08-2023 08:48:41
Hey @Taytay 👋 Thank you for raising this issue! This is indeed a bug, I'll open a PR ASAP
transformers
24,103
closed
[`Trainer`] Correct behavior of `_load_best_model` for PEFT models
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/24096 This PR fixes the bugs related with PEFT models and `load_best_model_at_end`. It also refactors a bit the current logic to extend it generally to all LoRA models, not only 8-bit base models + LoRA. <details><summary>Repro script</summary> ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig from transformers import TrainingArguments dataset = load_dataset("imdb", split="train") peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) args = TrainingArguments( max_steps=1, save_steps=1, eval_steps=1, evaluation_strategy="steps", per_device_train_batch_size=1, resume_from_checkpoint=True, output_dir="test_trainer", load_best_model_at_end=True, ) trainer = SFTTrainer( "EleutherAI/gpt-neo-125m", train_dataset=dataset, eval_dataset=dataset, dataset_text_field="text", peft_config=peft_config, max_seq_length=128, args=args, ) trainer.train() ``` </details> cc @sgugger @pacman100
06-08-2023 07:40:30
06-08-2023 07:40:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,102
closed
Memory leak when using GIT for image captioning in reference
### System Info transformers version: 4.28.1 Platform: Linux-5.8.0+-x86_64-with-Ubuntu-20.04 Python version: 3.7.11 PyTorch version (GPU?): 1.10.0+cu113 (True) Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I use the following script several times, the memory rises and a memory leak happens.

```python
processor = AutoProcessor.from_pretrained("microsoft/git-large-textcaps")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-textcaps").to(device)

def test(image):
    with torch.no_grad():
        pixel_values_org = processor(images=image_cv2, return_tensors="pt").pixel_values
        pixel_values1 = pixel_values_org.to(device).detach()
        generated_ids = model.generate(pixel_values=pixel_values1, max_length=20)
        generated_ids1 = generated_ids.to(device).detach()
        generated_caption = processor.batch_decode(generated_ids1, skip_special_tokens=True)[0]
    return generated_caption
```

### Expected behavior The memory is nearly stable
06-08-2023 07:25:56
06-08-2023 07:25:56
Hi @XingyuZhu-Pamela, thanks for raising this issue! A few questions from my side so that I can best try and help: * In the above snippet, what values does `device` have (I'm assuming `"cuda"`)? * Are there any patterns to when the memory leak occurs e.g. certain images, running inside certain scripts, running after a long time? How often do you observe this - is it every time or more sporadic? * Would you be able to share an example image being used so that I can run the script? * Is there any chance that you're using `decord` as part of the pipeline? There was a known issue where [decord would cause crashes when moving the models to GPU](https://github.com/huggingface/transformers/issues/21085). As a side note - soon python 3.7 will no longer be officially supported by the transformers library. I'd suggest upgrading the python version to make sure your code remains compatible with the library. <|||||>> Thank you! Here are my answers for your questions: - The value of device is cuda, but when I try to use cpu, the memory leak problem still exists. - This problem happens no matter what image I use, and the memory increase is small for each call and continues to grow after multiple calls. - Here is an example image URL: http://p6.music.126.net/obj/w5zDmcODw6PDjj7DiMOi/4497423202/f55d/1ece/b350/7d32fea2646ce85fba6756f84761a702.jpg I'm sorry that there is a slight problem with the code provided above, which has been fixed as follows:

```python
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda"
processor = AutoProcessor.from_pretrained("microsoft/git-large-textcaps")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-textcaps").to(device)

def test(pic_url):
    image_cv2 = Image.open(requests.get(pic_url, stream=True).raw).convert('RGB')
    with torch.no_grad():
        pixel_values_org = processor(images=image_cv2, return_tensors="pt").pixel_values
        pixel_values1 = pixel_values_org.to(device).detach()
        generated_ids = model.generate(pixel_values=pixel_values1, max_length=20)
        generated_ids1 = generated_ids.to(device).detach()
        generated_caption = processor.batch_decode(generated_ids1, skip_special_tokens=True)[0]
    return generated_caption
```

- I don't use decord in my code <|||||>Along with @amyeroberts 's comments, I would suggest: - First, please format the code snippet in a better way - you can enclose the code inside a fenced block like in the following screenshot <img width="500" alt="Screenshot 2023-06-27 111112" src="https://github.com/huggingface/transformers/assets/2521628/baadbd25-f2b0-4d80-9d7a-68fa413de429"> - with proper indentation (currently it's difficult to read the code snippet) - provide a for loop that runs the model and prints some cpu/gpu memory usage. That would help us to provide help 🙏 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
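For reference, a sketch of the kind of monitoring loop requested earlier in this thread, assuming the `test` helper defined above (which takes an image URL) and using `psutil` for CPU memory; the URL is a placeholder:

```python
import psutil
import torch

# Illustrative loop only: call the captioning helper repeatedly and print
# CPU RSS / GPU memory so any growth can be tracked over iterations.
process = psutil.Process()
for i in range(100):
    caption = test("http://example.com/some_image.jpg")  # `test` as defined in the snippet above
    rss_mb = process.memory_info().rss / 1024**2
    gpu_mb = torch.cuda.memory_allocated() / 1024**2 if torch.cuda.is_available() else 0.0
    print(f"iter {i}: cpu_rss={rss_mb:.1f} MB, gpu_alloc={gpu_mb:.1f} MB")
```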
transformers
24,101
closed
Change ProgressCallback to use dynamic_ncols=True
# What does this PR do? Fixes #24100 ## Who can review? This is about the trainer module, so I will tag @sgugger. Thank you
06-08-2023 06:37:26
06-08-2023 06:37:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, I merged from `upstream/main` and then ran `make style`! I hope this will be fine.<|||||>Mm, the diff now shows 43 files. Could you limit your changes to the file you are actually modifying?<|||||>@sgugger Sorry for the late reply, and it looks fine now!<|||||>Thanks! Failures are unrelated and due to a downtime on the Hub.
transformers
24,100
closed
[Trainer] Why not use `tqdm`'s `dynamic_ncols=True` option?
### Feature request # Problem The tqdm progress bar gets ugly when the width of the terminal is shrunk! ![image](https://github.com/huggingface/transformers/assets/4879345/b60f232f-41a5-40de-b759-8bb2710d3b5f) The progress bar starts a new line on every update, which looks very ugly... # Solution Simply add the `dynamic_ncols=True` option to `tqdm`. It is located in `trainer_callbacks.ProgressCallback`. ![image](https://github.com/huggingface/transformers/assets/4879345/6741eb00-7430-48db-acc8-4c3a0eb00217) You can check that the progress bar is now dynamically resized when the terminal size is updated. ### Motivation When I connect a `tmux` session with different terminal widths, the `tqdm` printing gets ugly. ### Your contribution Please check the PR #24101
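For reference, the change essentially amounts to something like the following sketch; the real `ProgressCallback` tracks more state than this:

```python
from tqdm.auto import tqdm

# dynamic_ncols=True lets tqdm re-measure the terminal width on every refresh,
# so resizing the tmux pane no longer breaks the bar onto new lines.
progress_bar = tqdm(total=1000, dynamic_ncols=True)
for _ in range(1000):
    progress_bar.update(1)
progress_bar.close()
```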
06-08-2023 06:37:04
06-08-2023 06:37:04
transformers
24,099
open
generation error when same token in "forced_eos_token_id" and "supress_token" parameter.
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. HuggingFace's generation method was being used in the Open-Assistant library, which uses HuggingFace's Model class. https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/model_training/utils/ppo_utils.py#L264-L267 2. The generate method throws an error: RuntimeError: probability tensor contains either `int`, `nan` or element < 0 ![error1](https://github.com/huggingface/transformers/assets/85441953/56771083-4800-4138-a9b1-ad45c0505d61) 3. I found that if the same token is in "forced_eos_token_id" and "suppress_tokens", the error occurs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

gen_kwargs = {'max_new_tokens': 10, 'top_k': 0, 'top_p': 0.7, 'do_sample': True, 'temperature': 1.0}
gen_kwargs["forced_eos_token_id"] = tokenizer.eos_token_id
gen_kwargs["suppress_tokens"] = [tokenizer.eos_token_id]
print(gen_kwargs)

question = """
Where is Gangnam?
"""
batch = tokenizer.encode(f"<|prompter|>{question}<|assistant|>", return_tensors="pt")
out = model.generate(
    input_ids=batch.to(model.device),
    **gen_kwargs
)
```
06-08-2023 05:12:41
06-08-2023 05:12:41
Hey @idisuu 👋 This is a tricky one that we haven't considered. I'm leaning towards not allowing it -- if you want to suppress the eos token until a certain point, you can also set `min_length`. Suppressing and forcing at the same time is ambiguous 😅 We have very little argument verification atm. Over the next month, I'll be working on validation of `generate` arguments. I'll make sure this one goes in! (and keep the issue open until then) Thank you for raising this issue!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,098
closed
Trainer throws error when using torchrun and small dataset
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes using torchrun - Using distributed or parallel set-up in script?: Yes, using torchrun ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## Command torchrun --nproc_per_node=2 --master_port=8080 train/train_old_2.py \ --model_name_or_path /workspace/models/llama7b \ --data_path /workspace/instruct-generalize/data/test.json \ --bf16 True \ --output_dir /workspace/fine-tune-result \ --num_train_epochs 3 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True ## Stacktrace excerpt Note: these are actually two stack traces that are printed over each other because there are two threads. File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 33, in _use_grad ret = func(self, *args, **kwargs) exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 171, in step adamw( RuntimeError: The size of tensor a (131078144) must match the size of tensor b (262156288) at non-singleton dimension 0 File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 321, in adamw func( File "/usr/local/lib/python3.10/dist-packages/torch/optim/adamw.py", line 389, in _single_tensor_adamw exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) RuntimeError: The size of tensor a (131078144) must match the size of tensor b (262156288) at non-singleton dimension ## Dataset that throws error: ` [{"instruction":"Do the addition.", "input": "Hi. What is zero plus zero?","output": "zero"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus one?","output": "one"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus two?","output": "two"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus three?","output": "three"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus four?","output": "four"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus five?","output": "five"}, {"instruction":"Do the addition.","input": "Hi. What is zero plus six?","output": "six"}] ` ## Dataset that doesn't throw error: Just copy and paste the code above about 10 times to increase the size of the dataset. 
## Training code ``` import copy import logging from dataclasses import dataclass, field from typing import Dict, Optional, Sequence import torch import transformers import utils from torch.utils.data import Dataset from transformers import Trainer IGNORE_INDEX = -100 DEFAULT_PAD_TOKEN = "[PAD]" DEFAULT_EOS_TOKEN = "</s>" DEFAULT_BOS_TOKEN = "<s>" DEFAULT_UNK_TOKEN = "<unk>" PROMPT_DICT = { "prompt_input": ( "Below is an instruction that describes a task, paired with an input that provides further context. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" ), "prompt_no_input": ( "Below is an instruction that describes a task. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Response:" ), } @dataclass class ModelArguments: model_name_or_path: Optional[str] = field(default="facebook/opt-125m") @dataclass class DataArguments: data_path: str = field(default=None, metadata={"help": "Path to the training data."}) @dataclass class TrainingArguments(transformers.TrainingArguments): cache_dir: Optional[str] = field(default=None) optim: str = field(default="adamw_torch") model_max_length: int = field( default=512, metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."}, ) def smart_tokenizer_and_embedding_resize( special_tokens_dict: Dict, tokenizer: transformers.PreTrainedTokenizer, model: transformers.PreTrainedModel, ): """Resize tokenizer and embedding. Note: This is the unoptimized version that may make your embedding size not be divisible by 64. """ num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict) model.resize_token_embeddings(len(tokenizer)) if num_new_tokens > 0: input_embeddings = model.get_input_embeddings().weight.data output_embeddings = model.get_output_embeddings().weight.data input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True) output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True) input_embeddings[-num_new_tokens:] = input_embeddings_avg output_embeddings[-num_new_tokens:] = output_embeddings_avg def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict: """Tokenize a list of strings.""" tokenized_list = [ tokenizer( text, return_tensors="pt", padding="longest", max_length=tokenizer.model_max_length, truncation=True, ) for text in strings ] input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list] input_ids_lens = labels_lens = [ tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list ] return dict( input_ids=input_ids, labels=labels, input_ids_lens=input_ids_lens, labels_lens=labels_lens, ) def preprocess( sources: Sequence[str], targets: Sequence[str], tokenizer: transformers.PreTrainedTokenizer, ) -> Dict: """Preprocess the data by tokenizing.""" examples = [s + t for s, t in zip(sources, targets)] examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)] input_ids = examples_tokenized["input_ids"] labels = copy.deepcopy(input_ids) for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]): label[:source_len] = IGNORE_INDEX return dict(input_ids=input_ids, labels=labels) class SupervisedDataset(Dataset): """Dataset for supervised fine-tuning.""" def __init__(self, data_path: str, tokenizer: transformers.PreTrainedTokenizer): super(SupervisedDataset, 
self).__init__() logging.warning("Loading data...") list_data_dict = utils.jload(data_path) logging.warning("Formatting inputs...") sources = [f"{example['input']}{tokenizer.eos_token}" for example in list_data_dict["examples"]] targets = [f"{example['target']}{tokenizer.eos_token}" for example in list_data_dict["examples"]] logging.warning("Tokenizing inputs... This may take some time...") data_dict = preprocess(sources, targets, tokenizer) self.input_ids = data_dict["input_ids"] self.labels = data_dict["labels"] def __len__(self): return len(self.input_ids) def __getitem__(self, i) -> Dict[str, torch.Tensor]: return dict(input_ids=self.input_ids[i], labels=self.labels[i]) @dataclass class DataCollatorForSupervisedDataset(object): """Collate examples for supervised fine-tuning.""" tokenizer: transformers.PreTrainedTokenizer def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]: input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels")) input_ids = torch.nn.utils.rnn.pad_sequence( input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id ) labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX) return dict( input_ids=input_ids, labels=labels, attention_mask=input_ids.ne(self.tokenizer.pad_token_id), ) def make_supervised_data_module(tokenizer: transformers.PreTrainedTokenizer, task_name) -> Dict: """Make dataset and collator for supervised fine-tuning.""" train_dataset = SupervisedDataset(f"/workspace/instruct-generalize/data/{task_name}.json", tokenizer) data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer) return dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator) @dataclass class DataCollatorBCAllTargets(object): tokenizer: transformers.PreTrainedTokenizer def __call__(self, instances: Sequence[Dict]): # Get inputs and labels as strings input_strings = [] label_strings = [] for item in instances: if "target" not in item: raise Exception(f"The bc_all_targets training strategy requires every example to have a 'target' key. 
But the following example does not:\n{item}") input_strings.append(item["input"]) if isinstance(item["target"], list): for target in item["target"]: label_strings.append(target) else: label_strings.append(item["target"]) # Combine them and tokenize to produce tokenized inputs combined = [input_strings[i] + label_strings[i] for i in range(len(input_strings))] encoded_combined = self.tokenizer.batch_encode_plus(combined, padding="longest", return_tensors="pt") # append eos tokens ones = torch.ones((encoded_combined["input_ids"].shape[0], 1), dtype=int) encoded_combined["input_ids"] = torch.cat((encoded_combined["input_ids"], self.tokenizer.eos_token_id*ones), dim=1) encoded_combined["attention_mask"] = torch.cat((encoded_combined["attention_mask"], ones), dim=1) encoded_labels = self.tokenizer.batch_encode_plus(label_strings, padding="longest", return_tensors="pt") label_lengths = [sum(mask) + 1 for mask in encoded_labels["attention_mask"]] labels = copy.deepcopy(encoded_combined)["input_ids"] for label, label_len in zip(labels, label_lengths): label[:-label_len] = IGNORE_INDEX print(encoded_combined["input_ids"].shape) print(labels.shape) print(encoded_combined["attention_mask"].shape) return dict( input_ids = encoded_combined["input_ids"], labels=labels, attention_mask=encoded_combined["attention_mask"] ) from classes import Model, Task def train(): parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() training_args.remove_unused_columns = False yoyo = Model(model_args.model_name_or_path) model = yoyo.model tokenizer = yoyo.tokenizer special_tokens_dict = dict() if tokenizer.pad_token is None: special_tokens_dict["pad_token"] = DEFAULT_PAD_TOKEN if tokenizer.eos_token is None: special_tokens_dict["eos_token"] = DEFAULT_EOS_TOKEN if tokenizer.bos_token is None: special_tokens_dict["bos_token"] = DEFAULT_BOS_TOKEN if tokenizer.unk_token is None: special_tokens_dict["unk_token"] = DEFAULT_UNK_TOKEN #tokenizer.add_special_tokens(special_tokens_dict) smart_tokenizer_and_embedding_resize( special_tokens_dict=special_tokens_dict, tokenizer=tokenizer, model=model, ) data_module = make_supervised_data_module(tokenizer=tokenizer, task_name=data_args.data_path) trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module) trainer.train() trainer.save_state() trainer.save_model(output_dir=training_args.output_dir) if __name__ == "__main__": train() ``` ### Expected behavior Dataset size should not affect whether the code throws an error.
06-08-2023 05:03:15
06-08-2023 05:03:15
> Dataset size should not affect whether the code throws an error. I disagree here. The dataset is too small so you will have an error anyway, but it could be clearer I agree. This might be linked to FSDP so tagging @pacman100 <|||||>For anyone who happens to be fine-tuning on tiny datasets: you can fix this issue by reducing `gradient_accumulation_steps`. I think num_examples has to be greater than `gradient_accumulation_steps*num_GPUs*per_device_train_batch_size` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
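To make the rule of thumb above concrete for the reported setup, a quick back-of-the-envelope check (illustrative only, not something the Trainer prints):

```python
num_examples = 7                      # the tiny addition dataset from the report
per_device_train_batch_size = 1
num_gpus = 2
gradient_accumulation_steps = 8

effective_batch = per_device_train_batch_size * num_gpus * gradient_accumulation_steps
print(effective_batch)                 # 16
print(num_examples > effective_batch)  # False -> too few examples for one full optimizer step
```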
transformers
24,097
closed
add trust_remote_code option to CLI download cmd
# What does this PR do? Adds an option to allow trusting remote code when downloading via the CLI command. Addresses #24063 Fixes # (issue) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). #24063 - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #24063 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Couldn't find much documentation about `transformers-cli download` - [X] Did you write any new necessary tests? Added two tests: one for a simple download pointing to `tmp` that checks whether the folders `blobs`, `snapshots`, and `refs` are present, and one for `--trust-remote-code`. Duplicated the testing model https://huggingface.co/hf-internal-testing/test_dynamic_model_with_tokenizer, adding a tokenizer so it works as expected ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-08-2023 04:57:19
06-08-2023 04:57:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger added the decorator; waiting to see if the tests here are passing. On my local env all 4 tests pass successfully 🤔 <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24097). All of your documentation changes will be reflected on that endpoint.
transformers
24,096
closed
Exception when saving weights from QLORA due to UnboundLocalError
### System Info - `transformers` version: 4.30.0.dev0 - Platform: Linux-6.1.0-9-amd64-x86_64-with-glibc2.36 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger you reviewed the PR so it looks like your eyes were on this most recently. Relevant commit: https://github.com/huggingface/transformers/commit/357f281ba24af8d49afd84c7628329d99868f411 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the following notebook: https://colab.research.google.com/github/utensil/llm-playground/blob/main/notebooks/axolotl/colab/axolotl_falcon_1b_qlora_gsm8k.ipynb. I get the following error: ``` Traceback (most recent call last): File "/home/e/et/ethanhs/axolotl/axolotl/scripts/finetune.py", line 295, in <module> fire.Fire(train) File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 141, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire component, remaining_args = _CallAndUpdateTrace( File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "/home/e/et/ethanhs/axolotl/axolotl/scripts/finetune.py", line 282, in train trainer.train(resume_from_checkpoint=resume_from_checkpoint) File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1661, in train return inner_training_loop( File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2070, in _inner_training_loop self._load_best_model() File "/home/e/et/ethanhs/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2258, in _load_best_model self._issue_warnings_after_load(load_result) UnboundLocalError: local variable 'load_result' referenced before assignment ``` ### Expected behavior I expect for transformers not to raise an exception.
06-07-2023 23:48:28
06-07-2023 23:48:28
cc @younesbelkada <|||||>Hi @ethanhs Thanks for reporting #24103 should solve the issue<|||||>Thanks for the quick response and quick fix!
transformers
24,370
closed
Incorrect information on BlipImageProcessor documentation
**Is your feature request related to a problem? Please describe.** There is slightly incorrect information on the [BlipImageProcessor documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip#transformers.BlipImageProcessor). **Describe the solution you'd like** The documentation currently says that the `image_mean` and `image_std` arguments default to `IMAGENET_STANDARD_MEAN` and `IMAGENET_STANDARD_STD` respectively. This seems to be incorrect looking at the source code however, in the init of `BlipImageProcessor` found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/image_processing_blip.py#L43) the values default to `OPENAI_CLIP_MEAN` and `OPENAI_CLIP_STD` respectively as defined [here](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/constants.py). Means are pretty similar but the standard deviations are quite different, so may be worth updating the documentation here :)
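A quick way to verify which defaults are actually picked up (a small sketch; the constant names come from the `transformers.utils.constants` module linked above):

```python
from transformers import BlipImageProcessor
from transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD

processor = BlipImageProcessor()  # no overrides -> library defaults
print(processor.image_mean, OPENAI_CLIP_MEAN)  # expected to match
print(processor.image_std, OPENAI_CLIP_STD)    # expected to match
```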
06-07-2023 23:04:19
06-07-2023 23:04:19
Transferring to transformers<|||||>Hi @LukeBailey181, thanks for pointing out! Would you like to open a PR to update the documentation? This way you would get the git contribution. <|||||>Wonderful yes I will create the PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
24,095
closed
fix get_keys_to_not_convert function
# What does this PR do? Fixes the behavior of the `get_keys_to_not_convert` function for the following cases: - If the `lm_head` is tied, it is not visible through `named_parameters()`, so the last visible module was added instead -> now using `named_children()` instead (see the sketch below). - Fix the `tied_params` variable, which we should not crop (example of what was happening: `[['lm_head.weight', 'model.decoder.embed_tokens.weight']] -> ['model.decoder.embed_tokens.weight']`).
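The first point can be illustrated with a small check; `facebook/opt-125m` is used here only as an assumed example of a model whose `lm_head` is tied to the input embeddings:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # lm_head tied to embed_tokens

# Tied weights are reported only once, under the embedding's name:
param_names = [name for name, _ in model.named_parameters()]
print(any("lm_head" in name for name in param_names))  # False

# The module itself is still visible when iterating over children:
child_names = [name for name, _ in model.named_children()]
print(child_names)  # ['model', 'lm_head']
```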
06-07-2023 22:28:49
06-07-2023 22:28:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,094
closed
fix blip2config int8 error to serialize json
# What does this PR do? Running the following with transformers=4.30.0.dev0

```python
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip2-opt-2.7b", quantization_config=bnb_config, device_map='auto')
print(model.config)
```

will give the following error:

```
TypeError: Object of type BitsAndBytesConfig is not JSON serializable
```

Solution: convert `BitsAndBytesConfig` to JSON in `Blip2Config`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @NielsRogge
06-07-2023 21:40:16
06-07-2023 21:40:16
cc @younesbelkada <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24094). All of your documentation changes will be reflected on that endpoint.<|||||>I ran `pip install -e .` on transformer main branch 8b169142f8b5735f25ad81313ee382350161d993 Then ran the code snippet and gave the error: `TypeError: Object of type BitsAndBytesConfig is not JSON serializable` Adding my commit will fix this print screen attached ![install0](https://github.com/huggingface/transformers/assets/9553458/1a95e026-fbfb-4eac-8454-bcdb35b86888) ![error0](https://github.com/huggingface/transformers/assets/9553458/f77f56ce-341e-4fff-a2c4-fb263539c718) <|||||>Hi @Andrechang Again thanks very much for flagging the issue, I realized that you have flagged an issue that is quite important to fix, therefore I made https://github.com/huggingface/transformers/pull/24137 and added you as a co-author, will merge that PR and I will close this one. Feel free to re-open it if you think that the issue is not resolved. Again thanks a lot!<|||||>Thank you for checking and for fix
transformers
24,093
closed
wrapped up efficient-memory contrastive search
null
06-07-2023 21:39:36
06-07-2023 21:39:36
transformers
24,092
closed
[BlenderBotSmall] Update doc example
# What does this PR do? Blenderbot small seems to be using `__start__` and `__end__`; updated the doc to reflect that. Super hard to find the original source, but based on the tokenizer this is what is used.
06-07-2023 20:27:58
06-07-2023 20:27:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,091
closed
⚠️ Time to say goodbye to py37
# What does this PR do? [Sorry for the spams, GitHub had some issues] Same as #24075, but that PR got freezed after I force pushed (after rebase), and my changes to address the comments were not able to appear) (amy have already approved #24075) ---- Byebye! EOL of python 3.7 is `2023/06/27`. https://endoflife.date/python
06-07-2023 18:59:45
06-07-2023 18:59:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,090
open
Deepspeed hang when tuning redpajama-3b
### System Info transformers-cli says: ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ds_report says: ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.0.dev0 - Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> root@etc-gpu-12:/workspace# ds_report [2023-06-07 16:33:38,748] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-dev package with apt [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] cpu_adagrad ............ [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0 [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] utils .................. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch'] torch version .................... 2.0.1+cu117 deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed'] deepspeed info ................... 0.9.4+f2f5f21b, f2f5f21b, master torch cuda version ............... 11.7 torch hip version ................ None nvcc version ..................... 11.6 deepspeed wheel compiled w. ...... 
torch 1.12, cuda 11.6 ``` I am running this using a docker image from this dockerfile: ``` # https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-04.html#rel_22-04 FROM nvcr.io/nvidia/pytorch:22.04-py3 # https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-01.html # This one OOM's on the tune-broken case # FROM nvcr.io/nvidia/pytorch:23.01-py3 RUN git clone https://github.com/huggingface/transformers.git RUN pip install transformers/. RUN pip install git+https://github.com/huggingface/accelerate.git # RUN git clone https://github.com/huggingface/accelerate.git # RUN pip install accelerate/. RUN pip install git+https://github.com/microsoft/DeepSpeed.git # RUN git clone https://github.com/microsoft/DeepSpeed.git # RUN pip install deepspeed/. RUN pip install git+https://github.com/huggingface/peft.git RUN pip install datasets evaluate loralib --upgrade --quiet RUN pip install bitsandbytes rouge-score tensorboard py7zr einops py-spy RUN pip install jupyter RUN pip uninstall -y apex RUN pip uninstall -y apex # This is so we can run the translation test RUN pip install -r transformers/examples/pytorch/translation/requirements.txt ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Summary: - I kick off the training script using deepspeed and NO configuration and it fails. I've also tried with `ds_config_zero3.json` from the test directory and it fails too. My script "tune.py": ``` #! /usr/bin/env python3 import transformers from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import DataCollatorForLanguageModeling from transformers import AutoModelForCausalLM, TrainingArguments, Trainer from datasets import load_dataset from accelerate import Accelerator MIN_TRANSFORMERS_VERSION = '4.25.1' assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' accelerator = Accelerator() # ============================================================================== # DDP: Usually we use NCCL, so set that. 
# Maybe need to use: NCCL_P2P_DISABLE=1 training_args = TrainingArguments( output_dir="redpajama-tuning-test", #evaluation_strategy="epoch", learning_rate=2e-5, weight_decay=0.01, per_device_train_batch_size=4, per_device_eval_batch_size=4, #log_level="debug", report_to="none", ddp_backend="nccl", ddp_timeout=60, push_to_hub=False ) # ============================================================================= tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model.train() model = model.half() model = model.cuda() # ============================================================================= tokenizer.model_max_length=512 tokenizer.pad_token = tokenizer.eos_token eli5 = load_dataset("eli5", split="train_asks[:5000]") eli5 = eli5.train_test_split(test_size=0.2) eli5 = eli5.flatten() def preprocess_function(examples): return tokenizer([" ".join(x) for x in examples["answers.text"]]) with training_args.main_process_first(desc="tokenizing"): tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5["train"].column_names ) block_size = 512 def group_texts(examples): concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) if total_length >= block_size: total_length = (total_length // block_size) * block_size result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result with training_args.main_process_first(desc="grouping"): lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) # ================================================================================= data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset["train"], eval_dataset=lm_dataset["test"], tokenizer=tokenizer, data_collator=data_collator ) trainer.train() ``` I run it with: ``` deepspeed tune.py ``` When it deadlock (pretty reproducibly, sometimes it completes) I use py-spy to get the stack traces. 
``` root@etc-gpu-12:/workspace# py-spy dump --pid 4282 Process 4282: /opt/conda/bin/python3.8 -u tune-broken.py --local_rank=0 Python v3.8.13 (/opt/conda/bin/python3.8) Thread 4282 (active+gil): "MainThread" store_flos (transformers/trainer.py:2938) _inner_training_loop (transformers/trainer.py:2059) train (transformers/trainer.py:1643) <module> (tune-broken.py:89) Thread 4754 (idle): "Thread-4" wait (threading.py:306) wait (threading.py:558) run (tqdm/_monitor.py:60) _bootstrap_inner (threading.py:932) _bootstrap (threading.py:890) Thread 4964 (idle) Thread 4965 (idle) root@etc-gpu-12:/workspace# py-spy dump --pid 4283 Process 4283: /opt/conda/bin/python3.8 -u tune-broken.py --local_rank=1 Python v3.8.13 (/opt/conda/bin/python3.8) Thread 4283 (active): "MainThread" forward (transformers/models/gpt_neox/modeling_gpt_neox.py:278) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:149) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:331) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:564) _call_impl (torch/nn/modules/module.py:1501) forward (transformers/models/gpt_neox/modeling_gpt_neox.py:673) _call_impl (torch/nn/modules/module.py:1501) _run_ddp_forward (torch/nn/parallel/distributed.py:1110) forward (torch/nn/parallel/distributed.py:1156) _call_impl (torch/nn/modules/module.py:1501) compute_loss (transformers/trainer.py:2763) training_step (transformers/trainer.py:2738) _inner_training_loop (transformers/trainer.py:1928) train (transformers/trainer.py:1643) <module> (tune-broken.py:89) Thread 4824 (idle): "Thread-4" wait (threading.py:306) wait (threading.py:558) run (tqdm/_monitor.py:60) _bootstrap_inner (threading.py:932) _bootstrap (threading.py:890) Thread 4966 (idle) Thread 4967 (idle) ``` ### Expected behavior That it shouldn't deadlock.
06-07-2023 17:45:28
06-07-2023 17:45:28
cc @stas00 <|||||>Hello, ``` accelerator = Accelerator() ``` This shouldn't be there as Trainer creates it internally now. Without any deepspeed config, I'm confused why you are trying to use deepspeed launcher via `deepspeed tune.py`? <|||||>Please refer here https://huggingface.co/docs/transformers/main_classes/deepspeed for properly using deepspeed as well as this PR #23236 in case you want to use accelerate launcher for the same.<|||||>I've been trying to follow the docs in the link above. That's why I added the issue last week about the tuning not working as described in the page. This week I am trying to get my own code working since I know that deepspeed by itself works. Even with using the config from the test area "tests/deepspeed/ds_config_zero3.json" I get the same deadlock. If one doesn't use the config, what are all the defaults btw? I found it hard to get a clear understanding for comparison of the all the defaults. So I guess one of my assumptions is that even if I don't take advantage of all the features and get better acceleration, deepspeed should just "work" without deadlocking? If I do the same command "deepspeed tune.py" on one gpu it completes fine. Is that not true? My plan is to get a working baseline then tweak from there. <|||||>Okay, after reading much further down beyond the working with multiple gpu examples I have found the "Shared Configuration" section with statements like: "be very careful that your the [Trainer](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.Trainer) arguments and DeepSpeed configurations agree. For example, are you using the same learning rate, or batch size, or gradient accumulation settings? if these mismatch the training may fail in very difficult to detect ways. You have been warned." I will dig through the configuration files and try to debug these issues. Any guidance on how to detect this mismatches? Is there any way to print/compare all the configs? Any help is appreciated.<|||||>Okay, now that I look at the example more closely (run_translation.py) I see that it does all the handling of args, etc. Is there a tutorial on writing a deepspeed enabled script? Anyway, I am now int the process of passing in the deepspeed myself to my trainer using the argument properly, I still have a hang but elsewhere. It still hangs. The stack trace is attached. The deepspeed config is attached. The training_args are attached. As far as I can tell these values should work together, but I have some trouble reconciling things like "linear" and "WarmupLR". Thoughts on debugging approaches/options? [ds_config_zero3.txt](https://github.com/huggingface/transformers/files/11710676/ds_config_zero3.txt) [stack_trace.txt](https://github.com/huggingface/transformers/files/11710677/stack_trace.txt) [training_args.txt](https://github.com/huggingface/transformers/files/11710678/training_args.txt) <|||||>Any thoughts on this?<|||||>Code: Note that I have removed few lines as they were incorrect/unnecessary ```diff import transformers from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import DataCollatorForLanguageModeling from transformers import AutoModelForCausalLM, TrainingArguments, Trainer from datasets import load_dataset - from accelerate import Accelerator MIN_TRANSFORMERS_VERSION = '4.25.1' assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' 
- accelerator = Accelerator() # ============================================================================== # DDP: Usually we use NCCL, so set that. # Maybe need to use: NCCL_P2P_DISABLE=1 training_args = TrainingArguments( output_dir="redpajama-tuning-test", #evaluation_strategy="epoch", learning_rate=2e-5, weight_decay=0.01, per_device_train_batch_size=4, per_device_eval_batch_size=4, #log_level="debug", report_to="none", ddp_backend="nccl", ddp_timeout=60, push_to_hub=False, + deepspeed="ds_config_zero3.json" ) # ============================================================================= tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model.train() - model = model.half() - model = model.cuda() # ============================================================================= tokenizer.model_max_length=512 tokenizer.pad_token = tokenizer.eos_token eli5 = load_dataset("eli5", split="train_asks[:5000]") eli5 = eli5.train_test_split(test_size=0.2) eli5 = eli5.flatten() def preprocess_function(examples): return tokenizer([" ".join(x) for x in examples["answers.text"]]) with training_args.main_process_first(desc="tokenizing"): tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5["train"].column_names ) block_size = 512 def group_texts(examples): concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) if total_length >= block_size: total_length = (total_length // block_size) * block_size result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result with training_args.main_process_first(desc="grouping"): lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) # ================================================================================= data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset["train"], eval_dataset=lm_dataset["test"], tokenizer=tokenizer, data_collator=data_collator ) trainer.train() ``` ds config: ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` command: ``` CUDA_VISIBLE_DEVICES=2,3 deepspeed issue_24090.py ``` Output logs: ``` [2023-06-22 10:30:06,594] 
[INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2023-06-22 10:30:08,119] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only. Detected CUDA_VISIBLE_DEVICES=2,3: setting --include=localhost:2,3 [2023-06-22 10:30:08,159] [INFO] [runner.py:555:main] cmd = /home/sourab/miniconda3/envs/ml/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None issue_24090.py [2023-06-22 10:30:09,458] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2023-06-22 10:30:10,887] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [2, 3]} [2023-06-22 10:30:10,887] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0 [2023-06-22 10:30:10,887] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]}) [2023-06-22 10:30:10,887] [INFO] [launch.py:163:main] dist_world_size=2 [2023-06-22 10:30:10,887] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=2,3 [2023-06-22 10:30:13,111] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2023-06-22 10:30:13,133] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ bin /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0'), PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env. warn(msg) CUDA SETUP: CUDA runtime path found: /home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0 CUDA SETUP: Highest compute capability among GPUs detected: 8.0 CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so... ===================================BUG REPORT=================================== Welcome to bitsandbytes. 
For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ bin /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0'), PosixPath('/home/sourab/miniconda3/envs/ml/lib/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env. warn(msg) CUDA SETUP: CUDA runtime path found: /home/sourab/miniconda3/envs/ml/lib/libcudart.so.11.0 CUDA SETUP: Highest compute capability among GPUs detected: 8.0 CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so... [2023-06-22 10:30:14,921] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-06-22 10:30:14,921] [INFO] [comm.py:594:init_distributed] cdb=None [2023-06-22 10:30:14,921] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2023-06-22 10:30:14,932] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-06-22 10:30:14,932] [INFO] [comm.py:594:init_distributed] cdb=None [2023-06-22 10:30:23,033] [INFO] [partition_parameters.py:453:__exit__] finished initializing model with 2.78B parameters Found cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa) Found cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa) Map (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (661 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (1066 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (557 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (560 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 0%| | 0/1000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (990 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (814 > 512). 
Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (723 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 25%|███████████████ | 250/1000 [00:00<00:00, 1105.17 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (661 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (550 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (632 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (3324 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 25%|██████████████▊ | 1000/4000 [00:00<00:02, 1384.05 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (631 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (556 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (845 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (866 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (816 > 512). Running this sequence through the model will result in indexing errors Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/cpu_adam/build.ninja... Building extension module cpu_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module cpu_adam... Time to load cpu_adam op: 2.314084529876709 seconds Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... Emitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/utils/build.ninja... Building extension module utils... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module utils... Time to load utils op: 0.05933046340942383 seconds Parameter Offload: Total persistent parameters: 1070080 in 258 params Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/cpu_adam/build.ninja... Building extension module cpu_adam... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module cpu_adam... Time to load cpu_adam op: 2.3131399154663086 seconds Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... Emitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118/utils/build.ninja... Building extension module utils... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module utils... Time to load utils op: 0.05986285209655762 seconds Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.0002925395965576172 seconds Using /raid/sourab/.cache/huggingface/torch_extensions/py311_cu118 as PyTorch extensions root... No modifications detected for re-loaded extension module utils, skipping build step... Loading extension module utils... Time to load utils op: 0.00027251243591308594 seconds 0%| | 0/837 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:1209: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.) total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)]) /home/sourab/miniconda3/envs/ml/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:1209: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at /opt/conda/conda-bld/pytorch_1687280020902/work/torch/csrc/tensor/python_tensor.cpp:83.) total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)]) 1%|█▏ | 12/837 [01:05<1:14:36, 5.43s/it] ``` versions: ``` - `Accelerate` version: 0.21.0.dev0 -> install from main branch - `Transformers` version: 4.31.0.dev0 -> install from main branch - DeepSpeed version: 0.9.4 - Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31 - Python version: 3.11.3 - Numpy version: 1.24.3 - PyTorch version (GPU?): 2.1.0.dev20230620 (True) - PyTorch XPU available: False - System RAM: 503.55 GB - GPU type: NVIDIA A100-SXM4-80GB ``` Summary: The code runs and no deadlocks or hangs. However, the warnings related to the data prep during tokenization seem concerning (not relevant to DeepSpeed/trainer) <|||||>Thanks for looking in this! I made the changes and tried it again. If I take out the model.half() and model.cuda() it won't fit on my 2x 40G A100's. I tried turning on fp16 via deepspeed by setting it to true and setting `fp16=true` in my TrainingConfigs. It starts up, but still hangs at the very end of the pass. 
``` 100%|██| 417/417 [15:45<00:00, 2.21s/it] ``` When I use pyspy again it is still hung up on "to" lines and down in gpt_neo. Can you run it with fp16 turned on? Thanks again! EDIT: Missing "hangs" at the end of the run.<|||||>I am trying to come up with a similar version using run_clm and I can't get it to fit into memory on my gpu at all. When I turn on fp16 and model.cuda() (even without deepspeed) I can't get it to run right. I am taking this code from a notebook (which runs fine on an a100) and trying to adapt it to run on the huggingface and deepspeed infrastructe. This seems surprising hard. The script I have above doesn't do that much at all. Why is it so hard to turn into run_clm? Is there a tutorial for this sort of thing?<|||||>Hello, what is the issue you are facing and please provide a minimal example for deep dive. The above script works with deepSpeed as I have mentioned with all the steps in detail in previous message<|||||>also could you try running with gradient checkpointing enabled so that you could fit very long sequences while training CLM models with large models. Add `--gradient_checkpointing` to the command you run<|||||>> Hello, what is the issue you are facing and please provide a minimal example for deep dive. The above script works with deepSpeed as I have mentioned with all the steps in detail in previous message As I mentioned before I only have 40G A100's whereas you have 80's so removing the model.half() makes it OOM for me. I tried turning on fp16 at trainer arg and the deepspeed settings but it still hangs there. What happens when you turn on fp16? For me I have a very clear reproducible case with the fp16 parts turned on. I've attached updated versions (take off the .txt) of the ds config and the script to run with "deepspeed tune-broken.py" that should work. [tune-broken.py.txt](https://github.com/huggingface/transformers/files/11904199/tune-broken.py.txt) [ds_config_zero3.json.txt](https://github.com/huggingface/transformers/files/11904206/ds_config_zero3.json.txt) > also could you try running with gradient checkpointing enabled so that you could fit very long sequences while training CLM models with large models. Add `--gradient_checkpointing` to the command you run I'll also try the gradient checkpointing as I am down to sequence len of 400. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, have you had a chance to reproduce with fp16?<|||||>Hello, changed the code to use `fp16` by adding the below line: ```diff training_args = TrainingArguments( output_dir="redpajama-tuning-test", #evaluation_strategy="epoch", learning_rate=2e-5, weight_decay=0.01, per_device_train_batch_size=4, per_device_eval_batch_size=4, #log_level="debug", report_to="none", ddp_backend="nccl", ddp_timeout=60, push_to_hub=False, + deepspeed="ds_config_zero3.json", + fp16=True, ) ``` Output: ``` [2023-07-24 19:54:43,521] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect) The following values were not passed to `accelerate launch` and had defaults used instead: More than one GPU was found, enabling multi-GPU training. If this was unintended please pass in `--num_processes=1`. 
`--num_machines` was set to a value of `1` `--mixed_precision` was set to a value of `'no'` `--dynamo_backend` was set to a value of `'no'` To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`. [2023-07-24 19:54:48,144] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2023-07-24 19:54:48,151] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2023-07-24 19:54:49,459] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-07-24 19:54:49,459] [INFO] [comm.py:616:init_distributed] cdb=None [2023-07-24 19:54:49,459] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented [2023-07-24 19:54:49,459] [INFO] [comm.py:616:init_distributed] cdb=None [2023-07-24 19:54:49,459] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2023-07-24 19:54:59,209] [INFO] [partition_parameters.py:326:__exit__] finished initializing model with 2.78B parameters Downloading builder script: 100%|█████████████████████████████████████████████████████████| 18.2k/18.2k [00:00<00:00, 65.8MB/s] Downloading metadata: 100%|███████████████████████████████████████████████████████████████| 6.36k/6.36k [00:00<00:00, 38.6MB/s] Downloading readme: 100%|█████████████████████████████████████████████████████████████████| 15.8k/15.8k [00:00<00:00, 69.3MB/s] Found cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa) Found cached dataset eli5 (/raid/sourab/.cache/huggingface/datasets/eli5/LFQA_reddit/1.0.0/17574e5502a10f41bbd17beba83e22475b499fa62caa1384a3d093fc856fe6fa) Map (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (803 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (743 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (923 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (1463 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 0%| | 0/1000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (770 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (608 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (810 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (516 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 0%| | 0/4000 [00:00<?, ? examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (544 > 512). 
Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (833 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (596 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 25%|██████████████▊ | 1000/4000 [00:00<00:02, 1432.28 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (743 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 25%|███████████████ | 1000/4000 [00:01<00:03, 925.60 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (616 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (551 > 512). Running this sequence through the model will result in indexing errors Token indices sequence length is longer than the specified maximum sequence length for this model (774 > 512). Running this sequence through the model will result in indexing errors Map (num_proc=4): 25%|███████████████ | 250/1000 [00:00<00:00, 1145.15 examples/s]Token indices sequence length is longer than the specified maximum sequence length for this model (617 > 512). Running this sequence through the model will result in indexing errors [2023-07-24 19:55:10,594] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs [2023-07-24 19:55:10,604] [WARNING] [cpu_adam.py:84:__init__] FP16 params for CPUAdam may not work on AMD CPUs Using /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root... Using /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118 as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /raid/sourab/.cache/huggingface/torch_extensions/py310_cu118/cpu_adam/build.ninja... Building extension module cpu_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module cpu_adam... Time to load cpu_adam op: 2.2673165798187256 seconds Loading extension module cpu_adam... Time to load cpu_adam op: 2.3539650440216064 seconds Parameter Offload: Total persistent parameters: 1070080 in 258 params 0%| | 0/840 [00:00<?, ?it/s]You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 1%|▋ | 6/840 [01:27<3:25:40, 14.80s/it] ``` Also, the memory usage is `21489MiB` (21GB vram) on both the GPUs<|||||>Can you post the final results please?
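To summarize the working setup from this thread in one place — a sketch only: the dataset/tokenization code is the one shown earlier in the thread, `ds_config_zero3.json` is the ZeRO-3 config posted above, and the script is launched with `deepspeed tune.py`.

```python
# Distilled from this thread: no manual Accelerator(), no model.half()/model.cuda();
# mixed precision and device placement come from TrainingArguments + the DeepSpeed config.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

training_args = TrainingArguments(
    output_dir="redpajama-tuning-test",
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    weight_decay=0.01,
    fp16=True,                          # pairs with "fp16": {"enabled": "auto"} in the DS config
    gradient_checkpointing=True,        # suggested above to fit longer sequences on 40GB cards
    deepspeed="ds_config_zero3.json",   # ZeRO-3 config with "auto" values
    report_to="none",
)

# trainer = Trainer(model=model, args=training_args, train_dataset=lm_dataset["train"],
#                   tokenizer=tokenizer, data_collator=data_collator)
# trainer.train()
```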
transformers
24,089
closed
Up pinned accelerate version
# What does this PR do? Increases the pinned accelerate version, and also lets the `is_accelerate_available` check require a specific minimum version, since we now care about more than just whether `PartialState` is available. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger reopened so tests can pass and we can merge 😅
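Not the actual diff — just a rough sketch of what a version-aware availability check can look like; the helper name comes from the PR description, but the body and the minimum version below are illustrative assumptions.

```python
# Rough sketch of a version-aware availability check; the implementation and
# the minimum version ("0.20.0") are illustrative, not the real transformers code.
import importlib.metadata
import importlib.util

from packaging import version


def is_accelerate_available(min_version: str = "0.20.0") -> bool:
    if importlib.util.find_spec("accelerate") is None:
        return False
    try:
        installed = importlib.metadata.version("accelerate")
    except importlib.metadata.PackageNotFoundError:
        return False
    return version.parse(installed) >= version.parse(min_version)


print(is_accelerate_available())          # plain availability
print(is_accelerate_available("0.20.1"))  # availability of at least a given version
```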
06-07-2023 17:33:57
06-07-2023 17:33:57
Let's wait for the patch and pin 0.20.1<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,088
closed
Do not prepare lr scheduler as it already has the right number of steps
# What does this PR do? Right now the LR scheduler is prepared by Accelerate, but it already has a number of steps that accounts for the number of processes. This results in the LR scheduler being stepped num_processes times too fast. This PR thus removes the lr_scheduler from the prepared objects. Should fix #23986
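To picture what that means, here is a minimal standalone sketch (illustrative only, not the Trainer internals): the scheduler is sized for the total number of update steps, so it is kept out of `accelerator.prepare()` and stepped manually once per optimizer update.

```python
# Illustrative sketch only — not the Trainer's actual code. The scheduler is
# already sized for the total number of update steps, so it is NOT passed to
# accelerator.prepare(); otherwise Accelerate would step it once per process
# and it would finish num_processes times too early.
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(32, 4), torch.randn(32, 2))
dataloader = DataLoader(dataset, batch_size=8)

max_steps = len(dataloader)  # total number of optimizer updates
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: max(0.0, 1 - step / max_steps)
)

# Only the model, optimizer and dataloader are prepared; the scheduler is left alone.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()  # exactly one step per optimizer update
    optimizer.zero_grad()
```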
06-07-2023 17:32:51
06-07-2023 17:32:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,087
closed
[Not to merge before 2023/06/28] Time to say goodbye to py37
# What does this PR do? Same as #24075, but that PR got frozen after I force-pushed (after a rebase), and my changes addressing the comments no longer appeared. ---- Byebye! EOL of Python 3.7 is `2023/06/27`. https://endoflife.date/python
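Presumably the mechanical core of such a change is just raising the floor declared in `setup.py` — shown below purely as an illustration, not the actual diff in this PR:

```python
# Illustration only — not the real setup.py of transformers.
from setuptools import setup

setup(
    name="example-package",
    version="0.0.1",
    python_requires=">=3.8.0",  # Python 3.7 reaches end of life on 2023/06/27
)
```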
06-07-2023 17:21:23
06-07-2023 17:21:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24087). All of your documentation changes will be reflected on that endpoint.
transformers
24,086
closed
Add bark
This PR aims at integrating Bark, a TTS model, into `transformers`. `Bark` was designed and trained by the [Suno-AI team](https://github.com/suno-ai/bark) and is made of 4 main components: - A `semantic model` (also named `text model`), i.e. a causal autoregressive transformer (GPT2-like), which takes tokenized text as input. - A `coarse acoustics model` (also named `coarse model`), also a causal autoregressive transformer, which takes the output of the previous model as input. It regresses the first two audio codebooks needed by `encodec`. - A `fine acoustics model` (`fine model`), this time a non-causal autoencoder transformer, which iteratively predicts the last 6 codebooks based on the sum of the previous codebook embeddings. - Having predicted the 8 codebook channels of `encodec`, Bark uses `encodec` to generate the output audio array. Note that each of the first 3 modules can take optional conditional speaker embeddings aimed at conditioning the output audio on specific preset voices.
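A rough, pseudocode-level sketch of that cascade — every name below is a placeholder chosen for readability, not the final `transformers` API:

```python
# Pseudocode-level sketch of the cascade described above; all names are placeholders.
def bark_text_to_audio(tokenizer, semantic_model, coarse_model, fine_model, encodec, text, voice=None):
    input_ids = tokenizer(text)                                 # text -> token ids
    semantic_ids = semantic_model.generate(input_ids, voice)    # causal, GPT2-like
    coarse_codes = coarse_model.generate(semantic_ids, voice)   # predicts EnCodec codebooks 1-2
    fine_codes = fine_model.generate(coarse_codes, voice)       # non-causal, fills codebooks 3-8
    return encodec.decode(fine_codes)                           # 8 codebooks -> output audio array
```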
06-07-2023 16:39:33
06-07-2023 16:39:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sanchit-gandhi <|||||>PR supersedes #23375<|||||>Hi @sanchit-gandhi , I think it's finally time for the final review! You might want to check the refactoring of `generate_text_semantic`, `generate_coarse`, `generate_fine`, but otherwise, sounds good!<|||||>Hi @amyeroberts, the PR is ready for review ! I'd be delighted to get your feedback on this when you have a chance. Let me know if I can help with anything!<|||||>@ylacombe Great! Could you resolve the conflicts? Once that's done I'll review 👍 <|||||>Hi @amyeroberts, as demanded, I resolved the merge conflicts! I've also updated the `speaker_embeddings` processing in `BarkProcessor`. Could you take a look when you have time ? Thanks! <|||||>Before I start a full review of this model, could you explain why the model was written in this structure - with no `forward` method and no task specific model e.g. `BarkForTextToSpeech`? <|||||>> Before I start a full review of this model, could you explain why the model was written in this structure - with no `forward` method and no task specific model e.g. `BarkForTextToSpeech`? Of course! You can't really `forward` through `BarkModel` because it uses the `generate` methods of its sub-models, each one with its one `GenerationConfig`. To be a little bit more specific, `BarkModel` is a nested model composed of 4 sub-models. The first 3 submodels follow a classic `transformer` architecture - hence the existence of a `forward` method for these submodels. However, when used by `BarkModel` in `generate_speech`, they are used in a non-traditional way (sliding windows, addition of input vectors alongside dimensions) and directly with their `generate` methods, model by model. To be more in line with `transformers` paradigm, we decided to keep the classical architecture for the 3 sub-models (the fourth, [`Encodec`](https://huggingface.co/docs/transformers/main/model_doc/encodec), being already implemented) and to provide a `generate_speech` method for the final model, with nested configs and nested generation configs. Using `forward` would have meant a much messier and probably slightly more confusing path, since it's not actually a matter of `forwarding` a list of tokens to generate new audio tokens one by one, but of generating the audio all at once! @sanchit-gandhi, I might have missed some arguments here, feel free to contribute! @amyeroberts, let me know if this answers your question!<|||||>Regarding the last part of your question, the `BarkModel` architecture can't really be used for anything other than this specific task, so it's a kind of single-purpose model. We considered adding task-specific sub-models, since the two first sub-models are GPT2-like auto-regressive models, but we decided not to move forward, for multiple reasons imo: 1. The first two sub-models, `BarkSemanticModel` and `BarkCoarseModel` could have had task-specific sub-classes, but I think this complicates both the code and users' understanding of the model architecture. What's more, although their architecture is general, it's only used here with an `lm_head`, and I can't think of any other use for them. 2. The third sub-model, `BarkFineModel`, needs [multiple embeddings layers and lm_heads](https://github.com/ylacombe/transformers/blob/33081dc7e3650ff07f2c766c3aa69bc7b6c82351/src/transformers/models/bark/modeling_bark.py#L968C2-L985), one per codebook channels of `Encodec`. So it's a non-regular type of task. 
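To make point 2 concrete, here is a toy illustration of what one embedding and one LM head per codebook means; the dimensions below are made up and this is not the actual `BarkFineModel` implementation.

```python
# Toy illustration of per-codebook embeddings and LM heads; dimensions are
# made up and this is not the actual BarkFineModel code.
import torch
import torch.nn as nn

n_codebooks, vocab_size, hidden = 8, 1024, 64

embeddings = nn.ModuleList(nn.Embedding(vocab_size, hidden) for _ in range(n_codebooks))
lm_heads = nn.ModuleList(nn.Linear(hidden, vocab_size, bias=False) for _ in range(n_codebooks))

codes = torch.randint(0, vocab_size, (2, n_codebooks, 50))  # (batch, codebooks, time)

# The fine stage sums the embeddings of the codebooks predicted so far...
hidden_states = torch.stack(
    [emb(codes[:, i]) for i, emb in enumerate(embeddings)], dim=1
).sum(dim=1)                                                # (batch, time, hidden)

# ...and each remaining codebook has its own prediction head.
logits_codebook_3 = lm_heads[2](hidden_states)              # (batch, time, vocab_size)
```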
<|||||>In reality, the Bark TTS model is just three auto-regressive models concatenated together. To generate with the Bark TTS model, you first have to generate **all** the ids for the first model. You then forward **all** of these ids to the coarse model to generate a new set of ids. You subsequently forward **all** of these generated ids to the third model to get your audio code outputs. So we can't just define one `forward` call and then auto-regressively generate with it (this would only get you one set of ids, not the three stages that you need). -> for this reason it doesn't really make sense to have a `forward` call for the `BarkModel`, since it's just a placeholder to hold the three submodes and pipe the generated outputs from one model into the next Regarding why this the model isn't called `BarkForTextToSpeech`, it's the same argument as VITS: https://github.com/huggingface/transformers/pull/24085#discussion_r1252222434 Happy to rename to `BarkForTextToSpeech` if you feel that this gives a more unified API between models, but Bark can **only** do text-to-speech, so this part of the name is somewhat redundant<|||||>@sanchit-gandhi @ylacombe Thanks for the detailed explanations! <|||||>Hi @amyeroberts, you're welcome! Have you had time to look into the PR? I'd be happy to answer any questions you might have about my code, as it's a rather atypical model!<|||||>Hi @amyeroberts , Thanks for the comprehensive review! I've answered most of your comments, but there are still a few I've asked questions/clarifications about! <|||||>Hi @amyeroberts and @sgugger! Many thanks for the additional review (and thanks @sanchit-gandhi for your insights)! I've addressed most of your comments, especially those requiring more consistency with transformers regarding naming the `generate_xxx`. I still have a few comments to resolve, I'll wait for your returns on that! <|||||>Hi @amyeroberts, there was a time-out when executing the python snippet of the `generate` docstrings. I took advantage of this to [add the ability to specify sub-model specific parameters](https://github.com/huggingface/transformers/pull/24086/commits/97cdc38e66b600d3bc1d82c56099acf3cdc6a0f8) in `BarkModel.generate`. To give a concrete example, you can specify now how many `max_new_tokens` you want for the `semantic` part of the model: ```audio_array = model.generate(**inputs, semantic_max_new_tokens=100)``` Now that it is done, there are still a few comments to resolve, so I look forward to hearing from you!<|||||>Hey @amyeroberts, I've addressed your last remarks! Does that work with you? Many thanks!<|||||>@ylacombe LGTM! I think we're good to merge 👍
transformers
24,085
open
add VITS model
# What does this PR do? Adds the VITS model for text-to-speech, in particular to support the MMS-TTS checkpoints (which use the same model architecture but a different tokenizer). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
06-07-2023 15:56:06
06-07-2023 15:56:06
Notes about the tokenizer: 1. This is not the VITS tokenizer but the one for MMS-TTS. 2. The vocab doesn't have padding (or unknown) tokens in it, but uses token_id 0 for this. That breaks on the HF tokenizers because it will split the input text on the padding token, so if I set `pad_token_id = 0` then the letters that token_id 0 corresponds to will disappear from the text. 3. To fix this issue, I'm adding `<pad>` and `<unk>` to the vocab, but then in the model we set such token_ids to 0 before feeding the input into the first layer. It's a bit hacky. Ideas for a nicer solution are appreciated. 4. The tokenizer also inserts an additional token_id 0 in between every token. No idea why but that's how it works. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24085). All of your documentation changes will be reflected on that endpoint.<|||||>This is ready for a first review yet. Two checkpoints are currently available: * https://huggingface.co/Matthijs/mms-tts-eng * https://huggingface.co/Matthijs/mms-tts-nld Small usage example: ``` from transformers import VitsMmsTokenizer, VitsModel import torch tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-eng") model = VitsModel.from_pretrained("Matthijs/mms-tts-eng") inputs = tokenizer(text="Hello, my dog is cute", return_tensors="pt") outputs = model(inputs["input_ids"]) speech = outputs.audio ``` The current model is the MMS-TTS version, not the original VITS version. The conversion scripts can handle both, but for original VITS support the tokenizer is still missing. Still needs to be done: * tests * tokenizer for actual VITS @Vaibhavs10 For this review, in particular could you verify the names of the layers in the flow layers etc make sense? Thanks! <|||||>Some of the MMS-TTS checkpoints require the use of the tool `uromanize` from https://github.com/isi-nlp/uroman to convert the input script into the Latin alphabet. Since this is a separate Perl script, it is not included in Transformers and the user will have to run `uromanize.pl` themselves before using the tokenizer.<|||||>> I'm not too sure why I'm asked for a review here as all comments from @sanchit-gandhi are being ignored. No they aren't?! I've integrated most of his suggestions and replied with counterarguments otherwise. <|||||>Tokenizer can now handle both the original VITS models (which require phonemization) and the MMS-TTS models.<|||||>Hey @sgugger / @amyeroberts - this one is ready for a review! We've got one open discussion around variable namings: https://github.com/huggingface/transformers/pull/24085#discussion_r1243884355 But otherwise the comments have been resolved and the code cleaned-up. Please address any comments / suggestions to myself, as I'll be taking over this PR for the rest of the integration<|||||>Would be really great to get your review here @amyeroberts! We're aiming to have this model feature as part of the next Unit of the audio transformers course 🤗 https://github.com/huggingface/audio-transformers-course/pull/61<|||||>This is ready for a second look @amyeroberts
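A toy, standalone illustration of what points 3 and 4 of the tokenizer notes describe — the ids and shapes are made up, and this is not the actual VITS/MMS-TTS code:

```python
# Standalone toy of the two tokenizer quirks described above; ids and sizes
# are made up and this is not the actual VITS/MMS-TTS implementation.
import torch

pad_token_id = 38  # "<pad>" (and "<unk>") appended to the vocab on the HF side
token_ids = torch.tensor([[5, 12, 7, pad_token_id, pad_token_id]])

# Point 3: before the first layer, the extra ids are mapped back to 0,
# which is what the original checkpoints use for padding.
model_input = token_ids.clone()
model_input[model_input >= pad_token_id] = 0

# Point 4: the tokenizer interleaves an extra token_id 0 between every token.
interleaved = torch.zeros(1, 2 * token_ids.shape[1] + 1, dtype=torch.long)
interleaved[:, 1::2] = token_ids
print(interleaved)  # tensor([[ 0,  5,  0, 12,  0,  7,  0, 38,  0, 38,  0]])
```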
transformers
24,084
closed
Update delete_doc_comment_trigger.yml
Fix the base workflow name; follow-up to https://github.com/huggingface/transformers/pull/24079
06-07-2023 15:48:08
06-07-2023 15:48:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24084). All of your documentation changes will be reflected on that endpoint.
transformers
24,083
closed
Up pinned accelerate version
# What does this PR do? Increases the pinned accelerate version, and also lets the `is_accelerate_available` check require a specific minimum version, since we now care about more than just whether `PartialState` is available. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-07-2023 15:34:42
06-07-2023 15:34:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,082
closed
testing doc build actions
testing #24079
06-07-2023 15:34:29
06-07-2023 15:34:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>it worked!