| column | dtype | values / lengths |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k (nullable) |
| created_at | stringlengths | 19 |
| closed_at | stringlengths | 19 |
| comments | stringlengths | 0 to 293k |
transformers
22,979
closed
transition scores can be negative infinity
### System Info Running transformers 4.28.1 in google colab: Collecting environment information... PyTorch version: 2.0.0+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.5 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.25.2 Libc version: glibc-2.31 Python version: 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.10.147+-x86_64-with-glibc2.31 Is CUDA available: False CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU @ 2.20GHz Stepping: 0 CPU MHz: 2199.998 BogoMIPS: 4399.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB L1i cache: 32 KiB L2 cache: 256 KiB L3 cache: 55 MiB NUMA node0 CPU(s): 0,1 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Versions of relevant libraries: [pip3] numpy==1.22.4 [pip3] torch==2.0.0+cu118 [pip3] torchaudio==2.0.1+cu118 [pip3] torchdata==0.6.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.15.1 [pip3] torchvision==0.15.1+cu118 [pip3] triton==2.0.0 [conda] Could not collect ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction The code example is this: ``` from transformers import GPT2Tokenizer, AutoModelForCausalLM import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") tokenizer.pad_token_id = tokenizer.eos_token_id inputs = tokenizer(5*["Today is"], return_tensors="pt") torch.manual_seed(10) for i in range(100): outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True, output_scores=True, do_sample=True, temperature=0.9, top_k=40, pad_token_id=tokenizer.eos_token_id) transition_scores = model.compute_transition_scores( outputs.sequences, outputs.scores, normalize_logits=False ) if torch.isinf(transition_scores).any().item(): print(i) break ``` Colab link: https://colab.research.google.com/drive/12KIOKGfZtoChC1ohTlesUWEL6AZT_luo?usp=sharing ### Expected behavior I expect `transition_scores` to be finite, however on my end I see `torch.isinf(transition_scores) == True` for i = 0. I traced the issue and it is actually not with `compute_transition_scores` but originates in the original scores returned in `outputs.scores`. The issue specifically happens when we use a non-greedy sampling approach (i.e., do_sample=True). I looked deeper and I think the issue is caused by torch.multinomial selecting tokens with probability 0 (when it shouldn't), but I'm not sure: https://github.com/pytorch/pytorch/issues/48841
04-25-2023 06:59:55
04-25-2023 06:59:55
cc @gante <|||||>I have found that this problem also occurs when running on GPU, but torch.multinomial behaves as expected on GPU (erroneously sampling elements with prob 0 only happens on CPU with float data type). So I'm not sure why we are seeing -inf scores here. <|||||>Hey @myazdani ๐Ÿ‘‹ Thank you for raising the issue! Uhmmm this is a very annoying PyTorch problem. In practice, we have two options: 1. Wait for PT to fix the issue; 2. Add a workaround ourselves, e.g. sample N tokens at each step and select the first non-`-inf` token. Any workaround will add an execution time overhead, which is also undesirable. Given that the number of problematic tokens is very high (~0.158%[CPU]/~0.000%[GPU] of the tokens in a test run ๐Ÿ‘€ ), I'll add a workaround ASAP! ________________________________ script used to get the error ratio: ```py from transformers import GPT2Tokenizer, AutoModelForCausalLM import torch from tqdm import tqdm tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2") model = AutoModelForCausalLM.from_pretrained("distilgpt2").to("cuda") tokenizer.pad_token_id = tokenizer.eos_token_id # batch size == 1, larger batch sizes have nuances that are not relevant here inputs = tokenizer(["Today is"], return_tensors="pt").to("cuda") torch.manual_seed(10) invalid_tokens = 0 for i in tqdm(range(10000)): outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True, output_scores=True, do_sample=True, temperature=0.9, top_k=40, pad_token_id=tokenizer.eos_token_id) transition_scores = model.compute_transition_scores( outputs.sequences, outputs.scores, normalize_logits=False ) invalid_tokens += torch.isinf(transition_scores).sum().item() print(f"invalid token ratio: {(invalid_tokens / (10000 * 15))*100:.4f}%", ) ```<|||||>We got in touch with the PT team, which should give a hand on the problem :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It looks like the PyTorch multinomial issue has been fixed: https://github.com/pytorch/pytorch/pull/101720 I verified with the nightly build that the torch.multinomial works as expected. However, I am still getting -inf values in the transition scores `compute_transition_scores`, cc: @gante @sgugger <|||||>Hey @myazdani -- is the script to reproduce the issue with PT nightly still the same as the one at the top?<|||||>Yes @gante I'm running: ``` from transformers import GPT2Tokenizer, AutoModelForCausalLM import torch from tqdm import tqdm tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") tokenizer.pad_token_id = tokenizer.eos_token_id inputs = tokenizer(5*["Today is"], return_tensors="pt") torch.manual_seed(10) for i in range(100): outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True, output_scores=True, do_sample=True, temperature=0.9, top_k=40, pad_token_id=tokenizer.eos_token_id) transition_scores = model.compute_transition_scores( outputs.sequences, outputs.scores, normalize_logits=False ) if torch.isinf(transition_scores).any().item(): print(i) break ``` Loop breaks at i=22. 
Below is my env: ``` - huggingface_hub version: 0.15.1 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Running in iPython ?: Yes - iPython shell: Shell - Running in notebook ?: Yes - Running in Google Colab ?: Yes - Token path ?: /root/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: 2.7.12 - Tensorflow: 2.12.0 - Torch: 2.1.0.dev20230612+cu118 - Jinja2: 3.1.2 - Graphviz: 0.20.1 - Pydot: 1.4.2 - Pillow: 8.4.0 - hf_transfer: N/A - gradio: N/A - numpy: 1.25.0rc1 - ENDPOINT: https://huggingface.co/ - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False {'huggingface_hub version': '0.15.1', 'Platform': 'Linux-5.15.107+-x86_64-with-glibc2.31', 'Python version': '3.10.12', 'Running in iPython ?': 'Yes', 'iPython shell': 'Shell', 'Running in notebook ?': 'Yes', 'Running in Google Colab ?': 'Yes', 'Token path ?': PosixPath('/root/.cache/huggingface/token'), 'Has saved token ?': False, 'Configured git credential helpers': '', 'FastAI': '2.7.12', 'Tensorflow': '2.12.0', 'Torch': '2.1.0.dev20230612+cu118', 'Jinja2': '3.1.2', 'Graphviz': '0.20.1', 'Pydot': '1.4.2', 'Pillow': '8.4.0', 'hf_transfer': 'N/A', 'gradio': 'N/A', 'numpy': '1.25.0rc1', 'ENDPOINT': 'https://huggingface.co/', 'HUGGINGFACE_HUB_CACHE': '/root/.cache/huggingface/hub', 'HUGGINGFACE_ASSETS_CACHE': '/root/.cache/huggingface/assets', 'HF_TOKEN_PATH': '/root/.cache/huggingface/token', 'HF_HUB_OFFLINE': False, 'HF_HUB_DISABLE_TELEMETRY': False, 'HF_HUB_DISABLE_PROGRESS_BARS': None, 'HF_HUB_DISABLE_SYMLINKS_WARNING': False, 'HF_HUB_DISABLE_EXPERIMENTAL_WARNING': False, 'HF_HUB_DISABLE_IMPLICIT_TOKEN': False, 'HF_HUB_ENABLE_HF_TRANSFER': False} ``` <|||||>@myazdani In this case everything is fine :D If you print the transition scores for the row with `-inf` you see ``` [-128.0551, -120.1372, -91.5507, -70.1001, -88.0797, -100.0098, -82.7555, -34.6997, -inf, -inf, -inf, -inf, -inf, -inf, -inf], ``` and, if you print the sequence, you see ``` [ 8888, 318, 257, 6507, 640, 329, 262, 1499, 526, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256] ``` `50256` is the EOS token, so the `-inf` here only exists due to padding :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
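For readers following along, here is a minimal sketch of the padding check described in the reply above. It reuses the gpt2 setup from the reproduction script and assumes the remaining `-inf` entries come only from EOS padding, so those positions are masked out before testing:

```py
import torch
from transformers import AutoModelForCausalLM, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token_id = tokenizer.eos_token_id

inputs = tokenizer(5 * ["Today is"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=15, return_dict_in_generate=True,
                         output_scores=True, do_sample=True, temperature=0.9,
                         top_k=40, pad_token_id=tokenizer.eos_token_id)
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=False
)

# The transition scores align with the newly generated tokens (everything after the prompt).
generated_tokens = outputs.sequences[:, inputs["input_ids"].shape[1]:]
# Treat EOS positions as padding; note this also masks a genuinely sampled EOS token.
not_padding = generated_tokens != tokenizer.eos_token_id
print(torch.isinf(transition_scores[not_padding]).any())  # should be False on a PyTorch build with the multinomial fix
```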
transformers
22,978
closed
Deadlock in Image Processor of ViT by using OpenMP and Kserve
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.2.5 - Python version: 3.8.13 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: Yes (using OpenMP) - Kserve version: 0.9.0 - g++|gcc (Debian 10.2.1-6) 10.2.1 20210110 - cv2: 4.5.5 - numpy: 1.21.6 ### Who can help? - vision models: @amyeroberts - tokenizer: @ArthurZucker - PyTorch: @sgu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction We face the deadlock at AutoImageProcessor (forward pass) with the following settings: 1. If we use OMP_NUM_THREADS > 1 (to activate intra-op parallelism in pytorch) and Kserve workers > 1 The code works in the following setting: 1. If OMP_NUM_THREADS == 1, then irrespective of the kserve workers the script works, but the inference time increases (by ~2x). CODE SNIPPET: https://gist.github.com/harshyadav17/149f1c990c17111d8340fcf2e89a5b88 **In the above code, the deadlock is happening at line 67.** ### Expected behavior Successful model prediction with OpenMP variables for optimised inference. If we continue with non-deadlock (with OMP_NUM_THREADS = 1) setting then there is an increase in the inference time by 2x. We have setup OMP_NUM_THREADS to decrease the latency. The best latency can be checked with optimum number of OMP_NUM_THREADS (set according to the machine, ideally it is num_physical_cores).
04-25-2023 06:21:04
04-25-2023 06:21:04
@amyeroberts can you please help on this ? <|||||>By searching "OMP_NUM_THREADS" "deadlock" on Google, it seems it's a general issue when `OMP_NUM_THREADS > 1`. Unfortunately, I am afraid there is no doable fix on `transformers` side. <|||||>Furthermore, this issue also involves the usage of `KServe`: it fits better on [HF Forum](https://discuss.huggingface.co/) to see if any user has the same issue and if some workaround is found, or maybe better, on `OpenMP` or `KServe` pages/forums.<|||||>Hi @ydshieh thanks for the prompt response. I face an interesting correlation with this issue. Whenever we initialise the gcp vision client in the predict method, the pipeline works perfectly. This is how we can replicate the same. `from google.cloud import vision` `os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "<path_to_service_account_key>"` `_ = vision.ImageAnnotatorClient()` In order to get the inference, following is the code (to be run from another kernel): `import requests` `service_response = requests.post('http://localhost:8095/v1/models/test1234:predict', json={})` I hope we can make something out of it and contribute to the open source community. This issue happens with most of the Image Processors of Hugging Face. Thanks!<|||||>@harshyadav17 Is it possible for you to remove the parts that involve `Kserve`, and just keep `OMP_NUM_THREADS > 1` to see if the issue is still there? If we can reproduce in this case, it might be much easier to dive into. And it's also nice to see if `Kserve workers = 1` will give the issue or not.<|||||>@ydshieh with kserve workers = 1, the script runs perfectly. We won't be able replicate the issue if Kserve is removed from the script. Moreover, if we add that gcp vision client, the script works with every setting possible. So can we please have a look at why this is happening. GCP client is completely unrelated over here but the Huggingface processor doesn't show us a deadlock. This can help us in implementing the solution to HF processors.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
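As a stopgap for anyone hitting the same hang, below is a minimal sketch of the single-thread workaround mentioned above; the actual serving setup (KServe workers, model server) is assumed and not reproduced here:

```py
import os
# Must be set before torch/OpenMP are initialised; trades preprocessing speed for stability.
os.environ.setdefault("OMP_NUM_THREADS", "1")

import torch
from transformers import AutoImageProcessor

torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))  # intra-op parallelism
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```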
transformers
22,977
closed
updated with docker setup
added docker files for setting up summary runs as docker images
04-25-2023 02:48:39
04-25-2023 02:48:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,976
closed
Add dummy_inputs for pytorch_version of vision_models
# What does this PR do? ``` from transformers import ViTForImageClassification from transformers.utils.fx import symbolic_trace model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") traced = symbolic_trace(model) ``` ``` Traceback (most recent call last): File "bug_check.py", line 5, in <module> traced = symbolic_trace(model) File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1214, in symbolic_trace concrete_args = get_concrete_args(model, input_names) File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1167, in get_concrete_args raise ValueError( ValueError: The model does not have input(s) named: input_ids, expected a subset of the following: pixel_values, head_mask, labels, output_attentions, output_hidden_states, interpolate_pos_encoding, return_dict ``` When using transformers.utils.fx.symbolic_trace, the pytorch version of vision models throws an error. This is because the default setting of dummy_inputs is "input_ids". It doesn't matter in TEXT MODELS, but this problem occurs because VISION MODELS requires "pixel_values" as a base. Added dummy_inputs to several PyTorch version models by referring to the dummy_inputs of the Tensorflow version model. This change fixes the convnext, convnextv2, resnet, segformer, vit, and vit_hybrid models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
04-25-2023 02:36:13
04-25-2023 02:36:13
Hi @kolonist-minjun, thanks for opening this PR! The `dummy_inputs` is a legacy property of the pretrained models and not one we're actively supporting. To use `symbolic_trace`, you can directly pass in the input names: ```py from transformers import ViTForImageClassification from transformers.utils.fx import symbolic_trace model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") traced = symbolic_trace(model, input_names=['pixel_values']) ```<|||||>Hi @amyeroberts, thanks for the comment! The TF version models have dummy_inputs, so I thought it would be good to have them in the PyTorch version models for unification.<|||||>@kolonist-minjun Yes, it's a bit confusing considering some PyTorch models also have `dummy_inputs` implemented - hopefully once fully deprecated and removed it'll be clearer. We have `dummy_inputs` for the TF models, because Keras models have to be built in order to load pretrained weights. <|||||>@amyeroberts Thank you for your comment. I will close this PR!
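If it helps, a small follow-up to the snippet above (an assumption about typical usage, not part of the original exchange): the object returned by `symbolic_trace` is a `torch.fx.GraphModule`, so its graph and generated code can be inspected directly.

```py
from transformers import ViTForImageClassification
from transformers.utils.fx import symbolic_trace

model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
traced = symbolic_trace(model, input_names=["pixel_values"])

print(traced.graph)  # symbolic representation of the forward pass
print(traced.code)   # Python code regenerated from the traced graph
```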
transformers
22,975
closed
Publish instance types best suited to finetune/inference of a popular model
### Feature request It would be very helpful to see a chart of instance types on the large public clouds (AWS, GCP, Oracle) most suitable for popular public LLMs like the Google Flan-T5 family. This generalizes a page from @philschmid, whose notebooks indicate how he finetuned certain models using certain instances. You could publish the instance types on the model card, broken down by inference and finetuning. Would this be possible? ### Motivation To help folks like me not spin our wheels trying to locate the most suitable vCPU-GPU combinations. It's a jungle on AWS for sure. ### Your contribution Happy to help however I can.
04-24-2023 23:56:37
04-24-2023 23:56:37
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,974
closed
Error when running MegaForCausalLM example code in Docs
### System Info Most recent version of Transformers from GitHub, on Google Colab ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is the example code from the documentation for MegaForCausalLM (https://huggingface.co/docs/transformers/main/model_doc/mega): ```python from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig import torch tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext") config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext") config.is_decoder = True config.bidirectional = False model = MegaForCausalLM.from_pretrained("mnaylor/mega-base-wikitext", config=config) inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) prediction_logits = outputs.logits ``` After installing Transformers from source, when I run the above code snippet on Colab, I get this error: RuntimeError: Error(s) in loading state_dict for MegaForCausalLM: size mismatch for mega.layers.0.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.0.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.0.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.0.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]). size mismatch for mega.layers.1.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.1.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.1.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.1.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]). size mismatch for mega.layers.2.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.2.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.2.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.2.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]). 
size mismatch for mega.layers.3.mega_layer.ema_gate.damping_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.3.mega_layer.ema_gate.decay_factor: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.3.mega_layer.ema_gate.ema_expansion_matrix: copying a param with shape torch.Size([256, 16, 1]) from checkpoint, the shape in current model is torch.Size([128, 16, 1]). size mismatch for mega.layers.3.mega_layer.ema_gate.kernel_projection_matrix: copying a param with shape torch.Size([256, 16]) from checkpoint, the shape in current model is torch.Size([128, 16]). You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method. ### Expected behavior The pretrained model would load all weights without error
04-24-2023 22:45:17
04-24-2023 22:45:17
Hey! Thanks for reporting! This is because the default configuration argument of `bidirectional` is `True`. When setting it to False you reduce the size of the ema matrix. If you still want to use it, ` ignore_mismatched_sizes=True` will help you initialize the model. <|||||>Thank you for your response. When I set ignore_mismatched_sizes=True the code works. However, the example code in the docs is still incorrect.<|||||>@Tylersuard Yep, you're right! Would you like to open a PR to update the docs to get the git contribution for spotting? <|||||>@amyeroberts Absolutely!<|||||>Ok! I just made the PR here. https://github.com/huggingface/transformers/pull/23382
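For reference, a sketch of the working example as discussed above (the only assumed change relative to the docs snippet is the extra `ignore_mismatched_sizes=True` argument, which re-initialises the mismatched EMA weights):

```py
from transformers import AutoConfig, AutoTokenizer, MegaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext", config=config, ignore_mismatched_sizes=True
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```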
transformers
22,973
closed
Add Mask R-CNN
# What does this PR do? This PR adds the classic Mask R-CNN framework for object detection and instance segmentation. To do/to be discussed: - [ ] where to place utilities like NMS, loss computation, samplers - [ ] whether to create dummies for torchvision-backed models - [ ] how to add support for the object detection pipeline - either add `**kwargs` to each `post_process_object_detection` method, or add specific logic for Mask R-CNN inside `object_detection_pipeline.py`
04-24-2023 20:11:56
04-24-2023 20:11:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22973). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge As @sgugger mentions, the PR is still in WIP state. Happy to review once transformers ready :) <|||||>I've updated all docstrings and variable names, PR is ready for another review<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @NielsRogge, what is the status of this feature? Thanks in advance
transformers
22,972
closed
[i18n-PL] Translating docs to Polish
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the Polish-speaking community ๐ŸŒ (currently 0 out of 267 complete) Who would want to translate? Please follow the ๐Ÿค— [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers ๐Ÿค—). * Please translate in a gender-neutral way. * Add your translations to the folder called `PL` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `PL/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * ๐Ÿ™‹ If you'd like others to help you with the translation, you can also post in the ๐Ÿค— [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go ๐Ÿ”ฅ -->
04-24-2023 19:48:36
04-24-2023 19:48:36
transformers
22,971
closed
RuntimeError: CUDA error: device-side assert triggered
### System Info - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @sgugger I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) with 2 clients. In the first client, I am using the same dataset as that of the tutorial (IMDB) but for the 2nd client I am using [this](https://huggingface.co/datasets/sst2) dataset. While running the `server.py`, `client_1.py`, and `client_2.py` I am facing the following error while running `client_2.py`. I have also attached all the files below. Error: ``` Traceback (most recent call last): File "client_2.py", line 140, in <module> main() File "client_2.py", line 136, in main fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient()) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client start_client( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client client_message, sleep_duration, keep_going = handle( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 70, in handle return _evaluate(client, server_msg.evaluate_ins), 0, True File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 182, in _evaluate evaluate_res = client.evaluate(evaluate_ins) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 276, in _evaluate results = self.numpy_client.evaluate(parameters, ins.config) # type: ignore File "client_2.py", line 131, in evaluate loss, accuracy = test(net, testloader) File "client_2.py", line 97, in test loss += outputs.loss.item() RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 
``` `client_1.py` file: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from datasets import load_dataset, load_metric from transformers import AutoTokenizer, DataCollatorWithPadding from transformers import AutoModelForSequenceClassification from transformers import AdamW # import os # os.environ['CUDA_VISIBLE_DEVICES'] = '0,1' warnings.filterwarnings("ignore", category=UserWarning) # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") DEVICE = "cuda:0" CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("imdb") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True) # random 100 samples # population = random.sample(range(len(raw_datasets["train"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) # tokenized_datasets["train"] = tokenized_datasets["train"].select(population) # tokenized_datasets["test"] = tokenized_datasets["test"].select(population) tokenized_datasets = tokenized_datasets.remove_columns("text") tokenized_datasets = tokenized_datasets.rename_column("label", "labels") data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-5) net.train() for i in range(epochs): print("Epoch: ", i+1) j = 1 for batch in trainloader: print("####################### The batch number is: ", j) batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() j += 1 def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy def main(): net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) trainloader, testloader = load_data() # Flower client class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, config): self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) print({"loss": float(loss), "accuracy": float(accuracy)}) return float(loss), len(testloader), {"loss": 
float(loss), "accuracy": float(accuracy)} # Start client fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient()) if __name__ == "__main__": main() ``` `client_2.py` file: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from datasets import load_dataset, load_metric from transformers import AutoTokenizer, DataCollatorWithPadding from transformers import AutoModelForSequenceClassification from transformers import AdamW #from transformers import tokenized_datasets # import os # os.environ['CUDA_VISIBLE_DEVICES'] = '2,3' warnings.filterwarnings("ignore", category=UserWarning) # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") DEVICE = "cuda:1" CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("sst2") # raw_datasets = load_dataset("yhavinga/imdb_dutch") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["validation"] # del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["sentence"], truncation=True) # random 100 samples # population = random.sample(range(len(raw_datasets["train"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) # tokenized_datasets["train"] = tokenized_datasets["train"].select(population) # tokenized_datasets["test"] = tokenized_datasets["test"].select(population) tokenized_datasets = tokenized_datasets.rename_column("label", "labels") tokenized_datasets = tokenized_datasets.remove_columns(["idx", "sentence"]) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-4) net.train() for i in range(epochs): print("Epoch: ", i+1) j = 1 # print("####################### The length of the trainloader is: ", len(trainloader)) for batch in trainloader: print("####################### The batch number is: ", j) batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() j += 1 def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy def main(): net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) trainloader, testloader = load_data() # Flower client class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, 
config): self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) print({"loss": float(loss), "accuracy": float(accuracy)}) return float(loss), len(testloader), {"loss": float(loss), "accuracy": float(accuracy)} # Start client fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient()) if __name__ == "__main__": main() ``` `server.py` file: ``` import flwr as fl import torch from collections import OrderedDict from logging import WARNING from flwr.common import ( ndarrays_to_parameters, parameters_to_ndarrays, ) from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer from flwr.server.strategy.aggregate import aggregate from flwr.common.logger import log from transformers import pipeline CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint DEVICE = torch.device("cpu") # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class Strategy(fl.server.strategy.FedAvg): def aggregate_fit( self, server_round, results, failures, ): if not results: return None, {} # Do not aggregate if there are failures and failures are not accepted if not self.accept_failures and failures: return None, {} # Convert results weights_results = [ (parameters_to_ndarrays(fit_res.parameters), fit_res.num_examples) for _, fit_res in results ] self.aggr_weights = aggregate(weights_results) parameters_aggregated = ndarrays_to_parameters(self.aggr_weights) # Aggregate custom metrics if aggregation fn was provided metrics_aggregated = {} if self.fit_metrics_aggregation_fn: fit_metrics = [(res.num_examples, res.metrics) for _, res in results] metrics_aggregated = self.fit_metrics_aggregation_fn(fit_metrics) elif server_round == 1: # Only log this warning once log(WARNING, "No fit_metrics_aggregation_fn provided") return parameters_aggregated, metrics_aggregated if __name__ == "__main__": def weighted_average(metrics): accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics] losses = [num_examples * m["loss"] for num_examples, m in metrics] examples = [num_examples for num_examples, _ in metrics] accuracy = sum(accuracies) / sum(examples) loss = sum(losses) / sum(examples) print("Accuracy: ", accuracy) print("Loss: ", loss) return {"accuracy": accuracy, "loss": loss} net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) # Define strategy strategy = Strategy( min_fit_clients=1, min_evaluate_clients=1, min_available_clients=1, fraction_fit=1.0, fraction_evaluate=1.0, evaluate_metrics_aggregation_fn=weighted_average, ) # Start server fl.server.start_server( server_address="localhost:5040", config=fl.server.ServerConfig(num_rounds=2), strategy=strategy, ) params_dict = zip(net.state_dict().keys(), strategy.aggr_weights) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict) classifier = pipeline("sentiment-analysis", model=net, tokenizer=AutoTokenizer.from_pretrained(CHECKPOINT)) positive_1 = "That was amazing!!!" negative_1 = "I feel so sad about this..." positive_2 = "I liked it!!" negative_2 = "I hated it!!" 
print(positive_1, classifier(positive_1)) print(negative_1, classifier(negative_1)) print(positive_2, classifier(positive_2)) print(negative_2, classifier(negative_2)) # Dutch inference dutch_pos_1 = "ik vond de film leuk" dutch_neg_1 = "Ik haatte de film" print(dutch_pos_1, classifier(dutch_pos_1)) print(dutch_neg_1, classifier(dutch_neg_1)) torch.save(net.state_dict(), "/home/saurav/quickstart_huggingface/server_model.pt") ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the above provided files in 3 separate terminal windows. ### Expected behavior The training should happen without causing an error.
04-24-2023 18:16:31
04-24-2023 18:16:31
This usually means there is bad indexing somewhere in your code. You should really bring up the issue with the persons who wrote the tutorials as it looks like there is a bug in their code. You can try to run the code on the CPU to see where the error stems from, or post us a minimal reproducer that doesn't use third-party libraries.<|||||>But `client_1.py` works well and there isn't much difference in both of their code, only the dataset is different.<|||||>I ran it on a CPU rather than a GPU and have got some more information about the error: ``` Traceback (most recent call last): File "client_2.py", line 140, in <module> main() File "client_2.py", line 136, in main fl.client.start_numpy_client(server_address="localhost:5040", client=IMDBClient()) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client start_client( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client client_message, sleep_duration, keep_going = handle( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 70, in handle return _evaluate(client, server_msg.evaluate_ins), 0, True File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 182, in _evaluate evaluate_res = client.evaluate(evaluate_ins) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 276, in _evaluate results = self.numpy_client.evaluate(parameters, ins.config) # type: ignore File "client_2.py", line 131, in evaluate loss, accuracy = test(net, testloader) File "client_2.py", line 95, in test outputs = net(**batch) File "/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/saurav/.local/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 763, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File "/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/saurav/.local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1174, in forward return F.cross_entropy(input, target, weight=self.weight, File "/home/saurav/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 3029, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) IndexError: Target -1 is out of bounds. ```<|||||>This probably means some of your labels are `-1` which is not a valid label. If you were attempting to put a fake label to pad a batch, -100 is the value for this in PyTorch.<|||||>I am using this dataset - https://huggingface.co/datasets/sst2 and I noticed that the test set has -1 value. How do I remove it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
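One possible way to deal with the unlabeled examples, sketched below under the assumption that the sst2 `test` split is entirely unlabeled (label == -1): evaluate on the labeled `validation` split instead, or filter the offending rows out before tokenization.

```py
from datasets import load_dataset

raw_datasets = load_dataset("sst2")
# Option 1: evaluate on the labeled validation split instead of the unlabeled test split.
raw_datasets["test"] = raw_datasets["validation"]
# Option 2: drop any rows whose label is -1 before tokenization.
raw_datasets = raw_datasets.filter(lambda example: example["label"] != -1)
```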
transformers
22,970
closed
TF port of the Segment Anything Model (SAM)
This is a first draft of the SAM port - will update this PR as I port tests and make sure everything is working okay. It's also a first proof-of-concept for full GPT-4 auto-translation from PyTorch: The entire `modeling_tf_sam.py` file was converted from PyTorch by GPT-4 with the exception of the imports at the top, because I haven't written a prompt for those yet. Update: I checked over all of the code and fixed the issues in the GPT port. Equivalence tests all look good! This is almost ready to merge, but there are a few small issues left: - [x] Get saved model creation working and re-enable tests (problem with the serving signature) - [x] Check for duplication in the processor files - I can probably refactor and simplify things a bit - [x] Refactor convolutions - `channels_first` doesn't actually work on CPU in TF
04-24-2023 17:37:10
04-24-2023 17:37:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>This is now almost ready to go and the code should be ready for review! Remaining issues: - I added a `tol` parameter to the TF-PT equivalence test - 1e-5 is too low for SAM (errors are more like 1e-4, but I used 5e-4 in the test to avoid flakiness). This will require a couple of minor tweaks in other models that are calling that test. - Cleanup/refactor in the processor, there's probably some code duplication that I can remove.<|||||>Thanks for the review - about half of the comments relate to the processor code, which is definitely in need of a refactor, yes. Working on that now!<|||||>@amyeroberts @sgugger I refactored all the changes to the common tests, and just overrode `check_pt_tf_outputs` to change the `tol` in the tests instead - this is much cleaner and resolves most of the issues there. I also refactored the processor, removing the duplicated files and merging methods where appropriate. I think all comments have now been addressed!<|||||>@gante I think all comments are now addressed, and I added `training` wherever it touched a layer that had training-specific behaviour (which is literally one dropout call) All comments from @amyeroberts and @sgugger should be addressed too - are you okay with going ahead and merging now once tests pass?<|||||>I think comments are addressed now - are we okay to merge?<|||||>I'm treating silence as agreement, merging!
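A minimal sketch of the `channels_first` workaround referred to in the checklist above (this is an assumption about the intended refactor, not the code actually merged in the PR): run the convolution in `channels_last` and transpose around it.

```py
import tensorflow as tf

def conv2d_nchw(inputs_nchw, conv_layer):
    # NCHW -> NHWC, convolve in channels_last (works on CPU), then transpose back.
    x = tf.transpose(inputs_nchw, perm=[0, 2, 3, 1])
    x = conv_layer(x)
    return tf.transpose(x, perm=[0, 3, 1, 2])

conv = tf.keras.layers.Conv2D(filters=8, kernel_size=3, padding="same")
out = conv2d_nchw(tf.random.normal((1, 3, 64, 64)), conv)  # shape (1, 8, 64, 64)
```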
transformers
22,969
closed
Many places are type-annotated as 1-tuple when should be arbitrary length tuple
### System Info n/a ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've seen this in a couple of disparate places so I guess the problem is endemic. Two examples I can point to are: - https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_outputs.py#L46 the docstring says: > _"Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer)"_ but it is annotated as: `hidden_states: Optional[Tuple[torch.FloatTensor]]` - https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L283 annotated as returning `-> Tuple[torch.Tensor]` but the tuple has a varying number of elements: ```python outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) if self.is_decoder: outputs = outputs + (past_key_value,) return outputs ``` In both cases there are many examples throughout those files. ### Expected behavior Unlike `list` and `dict` etc, typed tuples have a specific size. https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html#useful-built-in-types the annotation `Tuple[torch.Tensor]` means a tuple with a _single element_ of that type. for a tuple of varying size it should be annotated `Tuple[torch.Tensor, ...]`
04-24-2023 17:12:50
04-24-2023 17:12:50
The type annotations should only be read as a doc helper. They are not exact, and will never be checked by a type checker as Python is not a statically typed language anyway. When we have to decide between complexity of the type annotation and ease of the user/readability, we always pick the latter.<|||||>I mentioned it because it confused me, since they currently describe something different from what is returned.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
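For concreteness, a small illustration of the variable-length form proposed in the issue (typing-only, no runtime effect):

```py
from typing import Optional, Tuple

import torch

# A tuple with exactly one tensor:
one_tensor: Tuple[torch.FloatTensor] = (torch.zeros(2),)

# A tuple with any number of tensors, which is what the outputs actually contain:
hidden_states: Optional[Tuple[torch.FloatTensor, ...]] = (torch.zeros(2), torch.zeros(3))
```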
transformers
22,968
closed
Change the probability of generation of certain tokens
### Feature request OpenAI has a feature that allows you to provide a list of token ids, together with how much you want to increase or decrease the probability of generating these tokens. For example, if we have id = 112 with logit = 23.72, we make logit = 23.72 + custom_value, where custom_value can be positive or negative. This would be a very useful feature to have; I read the generation documentation and I am pretty sure it is not implemented yet. ### Motivation It is much more flexible than, for example, the option to just remove a list of words from generation entirely. It is also useful for increasing diversity, so that input tokens differ from output tokens, even though this can be partially done with encoder_repetition_penalty if I am not wrong. But here you can choose exact words, and sometimes you need to lower or increase the generation probability of just certain words. ### Your contribution It should not be too difficult to implement (even though there are always some problems): in the function that removes some words from generation entirely, instead of assigning a logit of -inf, you replace the current logit with the_current_logit + the_custom_value, where the custom value is the value the user specifies.
04-24-2023 16:09:11
04-24-2023 16:09:11
cc @gante<|||||>Hi, @Oxi84! I agree with you on the increased flexibility and the possible implementation path. I did something like that for the feature request [here](https://github.com/huggingface/transformers/issues/22168#issue-1624460056). Feel free to react to @gante's [comment](https://github.com/huggingface/transformers/issues/22168#issuecomment-1477998997) if you find it useful<|||||>Hi @Oxi84 ๐Ÿ‘‹ @iiglesias-asapp said it all, see the links shared :) (by the looks of it, it seems the feature will have enough supporters soon!)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
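For anyone wanting this behaviour before it lands in the library, a rough sketch with a custom logits processor is shown below; the class name and the `token_bias` mapping are illustrative, not an existing `transformers` API.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList

class TokenBiasLogitsProcessor(LogitsProcessor):
    """Adds a fixed positive or negative offset to the logits of chosen token ids."""

    def __init__(self, token_bias: dict):
        self.token_bias = token_bias  # {token_id: additive_bias}

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for token_id, bias in self.token_bias.items():
            scores[:, token_id] += bias
        return scores

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=10,
    logits_processor=LogitsProcessorList([TokenBiasLogitsProcessor({112: 2.0, 318: -5.0})]),
)
```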
transformers
22,967
closed
Fix `DeepSpeed` CI job link in Past CI
# What does this PR do? Fix missing `DeepSpeed` CI job link in Past CI. See comment in the change.
04-24-2023 15:37:54
04-24-2023 15:37:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,966
closed
fix ValueError message in LlamaAttention
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22941 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-24-2023 15:20:04
04-24-2023 15:20:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,965
closed
๐ŸŒ [i18n-KO] Fixed `tasks/masked_language_modeling.mdx`
# What does this PR do? Fixes #22838 ![image](https://user-images.githubusercontent.com/33839093/234049816-7c69151b-e2d1-4018-9a3c-bf9778723f68.png) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
04-24-2023 15:17:16
04-24-2023 15:17:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM. If you may, please: - **amend your commit message to be more descriptive than just `fixed`.** (e.g. `fix: docs: missing newline before code block`) - (Optional) provide relevant screenshots of your fixes<|||||>> LGTM. If you may, please: > > * **amend your commit message to be more descriptive than just `fixed`.** > (e.g. `fix: docs: missing newline before code block`) > * (Optional) provide relevant screenshots of your fixes Thanks for your review! I amended the last commit message and added a screenshot in my first comment!
transformers
22,964
closed
[WIP] Testing safetensors==0.3.1rc1
# What does this PR do? Testing <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-24-2023 15:10:22
04-24-2023 15:10:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>Done.
transformers
22,963
closed
Install `accelerate@main` in PyTorch Past CI jobs.
# What does this PR do? Install `accelerate@main` in PyTorch Past CI jobs. ### Context In #22393, we added back `deepspeed` in the Past CI docker image. Later in #22859, we decided to use `accelerate@main`, but I forgot to apply the same change in the Past CI docker file, as I mistakenly thought Past CI doesn't use `accelerate`. - However, `[deepspeed-testing]` (installed in the Past CI docker) includes `accelerate`, and we want it to be `accelerate@main`. - We can't include `accelerate` in the docker image, as for the TF Past CI it will break something - there was a remark: `accelerate requires torch, and this causes import issues for TF-only testing`
04-24-2023 15:09:25
04-24-2023 15:09:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,962
closed
Failed to convert 65B llama to hf weights
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Tried to execute this command to convert the 65B llama weights to hf version. ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /directory_contains_a_65B_weights_folder/ --model_size 65B --output_dir /target_directory/65B/ ``` I got a RuntimeError during the execution. The weights have been successfully loaded but failed during saving. I found a similar error message in [here](https://discuss.huggingface.co/t/torch-save-with-hugging-face-models-fails/25034), but there's no answer for that. I have checked my disk, and it should have enough space to save the model (223 GB available). ``` Fetching all parameters from the checkpoint at /scratch/users/xxxxx/65B. Loading the checkpoint in a Llama model. Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 81/81 [03:52<00:00, 2.88s/it] Saving in the Transformers format. Traceback (most recent call last): File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 441, in save _save(obj, opened_zipfile, pickle_module, pickle_protocol) File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 668, in _save zip_file.write_record(name, storage.data_ptr(), num_bytes) RuntimeError: [enforce fail at inline_container.cc:471] . PytorchStreamWriter failed writing file data/59: file write failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 279, in <module> main() File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 267, in main write_model( File "/users/xxxxx/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 230, in write_model model.save_pretrained(model_path) File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1755, in save_pretrained save_function(shard, os.path.join(save_directory, shard_file)) File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 442, in save return File "/users/xxxxx/anaconda3/envs/llama/lib/python3.9/site-packages/torch/serialization.py", line 291, in __exit__ self.file_like.write_end_of_file() RuntimeError: [enforce fail at inline_container.cc:337] . unexpected pos 8497872128 vs 8497872024 ``` ### Expected behavior I had no issue converting the 7B and 13B models with the same process.
04-24-2023 14:19:09
04-24-2023 14:19:09
The error comes directly from `torch.save`, so we can't really help on our side. I have never seen it either :-/<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,961
closed
Contrastive Search does not work at all for Llama 7B
### System Info
latest transformers and everything, RTX 3090
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
@yxuansu I tried it with Llama 7B and the results are very bad; it does not generate anything. I also tried with T5 and the results are very bad, but it is probably not intended to work with T5.

import transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)

input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt').to("cuda")

beam_output = model.generate(
    input_ids,
    penalty_alpha=0.6,
    top_k=4,
    max_length=100,
)

print(tokenizer.decode(beam_output[0], skip_special_tokens=True))

This generates just a dot: I enjoy walking with my cute dog.
### Expected behavior
It should actually generate text.
04-24-2023 13:07:04
04-24-2023 13:07:04
Actually it does work very well for text generation. It just does not generate anything with this prompt for some reason.<|||||>Thanks for investigating! <|||||>As always thanks for hard work on implementing such cool thing as this one. I will share how much it helped, seem like it increases accuracy, but I will need to check much more examples to tell.
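For reference, a minimal contrastive-search sketch; `gpt2` is used here purely to keep the example small (the call pattern is the same for the Llama checkpoint discussed above), and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The future of open-source language models", return_tensors="pt").input_ids
# penalty_alpha > 0 together with top_k > 1 is what switches generate() into contrastive search.
output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```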
transformers
22,960
closed
Fix TF example in quicktour
The quicktour example for `prepare_tf_dataset` was passing the `DatasetDict` of all the dataset splits, instead of a single dataset, which threw an error. This PR fixes it!
04-24-2023 12:54:15
04-24-2023 12:54:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Done!
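A minimal sketch of the corrected pattern (dataset and checkpoint names are illustrative): pass a single split to `prepare_tf_dataset`, not the whole `DatasetDict`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

dataset = load_dataset("rotten_tomatoes")  # a DatasetDict with train/validation/test splits
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# Pass one split, not the whole DatasetDict.
tf_train = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
```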
transformers
22,959
closed
[Llama Tokenizer] Fast llama template
# What does this PR do? Addresses #22794 and #22877
04-24-2023 12:11:47
04-24-2023 12:11:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>The test `tests/models/llama/test_tokenization_llama.py::LlamaIntegrationTest::test_conversion` is failing since April 27th, which is likely due to this PR. See issue page #23400
transformers
22,958
closed
Prepare tests for hfh 0.14
Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. The plan is to merge this PR right after the new release is made. This will not impact `transformers`'s end users. However, PR contributors will have to rebase their branch once this one is merged. See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack). cc @sgugger @ydshieh
04-24-2023 12:09:28
04-24-2023 12:09:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,957
closed
[CLAP] Doc nits
# What does this PR do? The documentation had a few problems (e.g. "Constrastive Laungaue" -> "Contrastive Language").
04-24-2023 11:39:58
04-24-2023 11:39:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,956
closed
๐ŸŒ [i18n-KO] Translated `fast_tokenizers.mdx` to Korean
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.mdx` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹น --> # What does this PR do? Translated the `fast_tokenizers.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- ๋ฉ”์ธ ์ด์Šˆ์— ๊ธฐ๋ก์ด ๋‚จ์•„์š”! ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ๋ฆฌํฌ๋ฅผ ์‚ฌ์šฉํ•ด ์—ฐ์Šตํ•˜์‹ค๋•Œ๋Š” ์ œ๊ฑฐํ•ด์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- ์ œ์ถœ ์ „ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋กœ, ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ๋งŒ์˜ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋„ <details>๋กœ ๊ฐ์‹ธ์„œ ๋งŒ๋“ค์–ด๋‘๋ฉด ๋” ์ข‹์„ ๊ฒƒ ๊ฐ™์•„์š”. --> ## Who can review? <!-- ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @wonhyeongseo , @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
04-24-2023 11:31:43
04-24-2023 11:31:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
transformers
22,955
closed
Generate: Add exception path for Donut
# What does this PR do? The multimodal generalization added in #22748 introduced a regression for Donut -- Donut never expects a BOS token, having a task-specific token in its place. This PR adds an exception code path to handle it. All related slow tests are now passing. cc @NielsRogge
04-24-2023 11:13:03
04-24-2023 11:13:03
_The documentation is not available anymore as the PR was closed or merged._
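For context, a rough sketch of how Donut is driven by a task prompt in place of a BOS token (checkpoint name and question are illustrative; the blank image is a stand-in for a real document scan):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.new("RGB", (1280, 960), "white")  # placeholder image, just for illustration
# The task prompt occupies the position where a BOS token would normally go.
task_prompt = "<s_docvqa><s_question>What is the total?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```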
transformers
22,954
closed
Add gradient checkpointing to Whisper Flax
It uses `flax.linen.remat` and follows on PRs #13657 and #17994. # What does this PR do? Adds gradient_checkpointing to Flax Whisper models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sanchit-gandhi @peregilk
04-24-2023 10:43:50
04-24-2023 10:43:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review, @sanchit-gandhi! Should be all good now ๐Ÿ˜ƒ.<|||||>Amazing @versae! Requesting final review before we can get this merged ๐Ÿค—<|||||>Thank you! I learnt a lot ๐Ÿค“
transformers
22,953
closed
Decorate `test_codegen_sample_max_time` as flaky
# What does this PR do? Decorate `test_codegen_sample_max_time` as flaky: it fails 0-5 times per month.
04-24-2023 08:49:23
04-24-2023 08:49:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>I agree! Probably we can use something like `@cached_property`: I have seen this but never used it myself so far.<|||||>OK. But can I keep `cached_property` as in the current version (it at least avoids loading the checkpoint even though it is downloaded)? I see there are a few modeling test files doing this.<|||||>If you really want to!
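For illustration, a self-contained sketch of wrapping a timing-sensitive test, assuming the `is_flaky` decorator from `transformers.testing_utils` and its `max_attempts` argument; the test body here is a stand-in, not the real CodeGen test:

```python
import random
import unittest

from transformers.testing_utils import is_flaky


class ExampleFlakyTest(unittest.TestCase):
    # The decorator reruns the test a few times before reporting a failure,
    # which is how occasionally failing timing checks are usually handled.
    @is_flaky(max_attempts=5)
    def test_sometimes_fails(self):
        self.assertLess(random.random(), 0.9)


if __name__ == "__main__":
    unittest.main()
```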
transformers
22,952
closed
DFFT
#18004
04-24-2023 08:40:14
04-24-2023 08:40:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22952). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Working<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Working <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,951
closed
Fine-tune T5 model for Causal Language Modeling (CLM)
Dear all, I am new to NLP and have some questions that may sound strange; I will try to explain them clearly. My goal is to fine-tune the t5-base model on a specific corpus with a causal language modeling objective. I found this [document](https://huggingface.co/docs/transformers/main/en/tasks/language_modeling#causal-language-modeling), which uses `AutoModelForCausalLM`, but that class does not cover the T5 family of models. So my questions are: 1. How should I fine-tune a T5 model for the CLM objective? In my understanding, CLM is a process of predicting `token_2` from `token_1`, `token_3` from `token_1, token_2`, and so on until the end of the input sequence, so I am confused about how to implement this process myself. 2. I tried to split one of my training examples into something like this (ti == token_i, 1 == eos_token), shown as input_ids -> labels: - `[t1, 1, 1, 1, 1, 1, ...]` -> `[t1, t2, 1, 1, 1, 1, ...]` - `[t1, t2, 1, 1, 1, 1, ...]` -> `[t1, t2, t3, 1, 1, 1, ...]` - `[t1, t2, t3, 1, 1, 1, ...]` -> `[t1, t2, t3, t4, 1, 1, ...]` - `[t1, t2, t3, t4, 1, 1, ...]` -> `[t1, t2, t3, t4, t5, 1, ...]` The first problem is obvious: the expanded dataset is too large and requires more time to fine-tune. The second problem is that this seems strange, and I don't know if it fulfills the CLM objective. This is the only idea I could come up with to solve this problem. Does it work? Thanks!!
04-24-2023 07:58:51
04-24-2023 07:58:51
Hi, @nanbeitk thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. <|||||>> Hi, @nanbeitk thanks for raising an issue! > > This is a question best placed in our [forums](https://discuss.huggingface.co/), as we try to reserve the github issues for feature requests and bug reports. Thanks for your remind and i will post it to forums soon.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
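For what it's worth, a minimal sketch of the usual way T5 is trained on text pairs (as a seq2seq model rather than a causal LM); the model shifts the labels internally, so no prefix-by-prefix dataset expansion is needed. The checkpoint and sentences are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
labels = tokenizer(text_target="A fox jumps over a dog.", return_tensors="pt").input_ids

# T5 builds the decoder inputs by shifting `labels` to the right internally, so every
# target token is predicted from all previous target tokens in a single forward pass.
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```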
transformers
22,950
open
GPTNeoX Flax support
@sanchit-gandhi
04-23-2023 16:56:07
04-23-2023 16:56:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22950). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @OhadRubin - sorry for the late reply here! How are you getting on with this PR? I see that a lot of the modelling code has already been implemented - happy to do a first pass of this code if you want a preliminary review? We can also look to adding a test file and also make sure all the imports are properly defined (see https://huggingface.co/docs/transformers/add_new_model#stepbystep-recipe-to-add-a-model-to-transformers)<|||||>Offer for a review still stands if you'd like me to take a look @OhadRubin!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving this one open to the community to complete! Feel free to take up the PR if you come across this and are interested in a Flax model addition. @OhadRubin has made a nice start on porting the model, you can use the Flax GPT Neo code as reference for the fast attention mechanism we use in Transformers Flax: https://github.com/huggingface/transformers/blob/7d150d68ff6eaecc75b446aa06160b6bc8466e38/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L108<|||||>So I suggested to change `__call__` method of FlaxGPTNeoXAttention to below ```python def __call__( self, hidden_states, attention_mask, position_ids, deterministic: bool = True, init_cache: bool = False, output_attentions: bool = False, ): # Compute QKV # Attention heads [batch, seq_len, hidden_size] # --> [batch, seq_len, (num_heads * 3 * head_size)] qkv = self.query_key_value(hidden_states) batch, seq_len, _ = qkv.shape # [batch, seq_len, (num_heads * 3 * head_size)] # --> [batch, seq_len, num_heads, 3, head_size] qkv = qkv.reshape([batch, seq_len,self.num_attention_heads,3,self.head_size]) # [batch, seq_len, num_heads, 3, head_size] # --> [3,batch, seq_len, num_heads, head_size] qkv = jnp.moveaxis(qkv, source=-2, destination=0) # [3, batch, seq_len, num_heads, head_size] # --> [3,batch, num_heads, seq_len, head_size] qkv = jnp.swapaxes(qkv, 3, 2) # [3,batch, num_heads, seq_len, head_size] # --> 3 [batch, num_heads, seq_len, head_size] query, key, value = qkv query_rot = query[..., : self.rotary_ndims] query_pass = query[..., self.rotary_ndims :] key_rot = key[..., : self.rotary_ndims] key_pass = key[..., self.rotary_ndims :] cos, sin = self.rotary_emb(value, seq_len=seq_len) query, key = apply_rotary_pos_embNP(query_rot, key_rot, cos, sin, position_ids) query = jnp.concatenate((query, query_pass), axis=-1) key = jnp.concatenate((key, key_pass), axis=-1) # revert swap query, key, value = jnp.swapaxes(query, 1, 2), jnp.swapaxes(key, 1, 2), jnp.swapaxes(value, 1, 2) query_length, key_length = query.shape[1], key.shape[1] if self.has_variable("cache", "cached_key"): mask_shift = self.variables["cache"]["cache_index"] max_decoder_length = self.variables["cache"]["cached_key"].shape[1] causal_mask = lax.dynamic_slice( self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length) ) else: causal_mask = self.causal_mask[:, :, :query_length, :key_length] batch_size = hidden_states.shape[0] causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:]) attention_mask = 
jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape) attention_mask = combine_masks(attention_mask, causal_mask) # During fast autoregressive decoding, we feed one position at a time, # and cache the keys and values step by step. if self.has_variable("cache", "cached_key") or init_cache: key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask) # transform boolean mask into float mask attention_bias = lax.select( attention_mask > 0, jnp.full(attention_mask.shape, 0.0).astype(self.dtype), jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype), ) attn_weights = dot_product_attention_weights( query, #jnp.moveaxis(query, source=-3, destination=-2), key, #jnp.moveaxis(key, source=-3, destination=-2), bias=attention_bias, dropout_rng=None, # dropout_rate=self.config.attn_pdrop, deterministic=deterministic, dtype=jnp.promote_types(self.dtype, jnp.float32), precision=None, ) attn_output = jnp.einsum("bhqk,bkhd->bqhd", attn_weights, value) attn_output = self._merge_heads(attn_output) attn_output = self.dense(attn_output) outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) return outputs ```<|||||>This code doesn't differ much from FlaxGPTNeoSelfAttention. Which part is the fast attention mechanism? @sanchit-gandhi <|||||>The logic for constructing a static k/v cache and computing the attention weights efficiently is quite nicely summarised in the Flax GPT Neo attention layer: https://github.com/huggingface/transformers/blob/7d150d68ff6eaecc75b446aa06160b6bc8466e38/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L108 We should strive to match this implementation as closely as possible (rather than optimising it again ourselves). It's largely inspired by the Flax attention implementation from T5x: https://github.com/google-research/t5x/blob/eb08ffbdec78e231aab1c747720ffb076f83bf18/t5x/examples/scalable_t5/layers.py#L196 This logic can be quite different from PyTorch attention layers, but is much better suited to the static nature of Flax and leverages the Flax dot product attention call. It's great if the current code is by-and-large the same as the reference Flax GPT Neo code, that's a big green tick as far as I'm concerned!
transformers
22,949
closed
Generate: assisted generation with sample (take 2)
# What does this PR do? I was writing the blog post about assisted generation and realized that there is a much better way to solve the `do_sample=True` case ๐Ÿ‘€ Apologies for the repeated review request, but I believe this is a significant upgrade. In a nutshell, the existing `temperature` argument provides a natural control mechanism for assisted generation with `do_sample=True`, when controlling how flat the distribution is at the sampling step. Lower temperature = high probability tokens become more likely to be sampled = more predictable = more likely to match the candidate tokens from the assistant model = assisted generation works faster. Compared to the [other PR](https://github.com/huggingface/transformers/pull/22862), which was closed, this approach has the following pros and cons: - Pros: - No new argument; - Behaves exactly like `.sample`, which users already understand well. No new heuristics; - As fast as the other method for similar randomness levels (see numbers below). - Cons: - Internally, more than one sampling step will occur per output token. If we set a seed, the output will be different than `.sample`'s for the same seed. Not a deal breaker per se, but it means subtle bugs may be tough to catch. ## Performance numbers I've run the [benchmark](https://github.com/gante/huggingface-demos/tree/main/experiments/faster_generation) I've been running for assisted generation, but now for several `temperature` values. The values below are for `facebook/opt-6.7b` as the main model, `facebook/opt-125m` as the assistant model, running `.generate` starting from inputs taken from the C4 test set (i.e. quite random, the dataset I tested where assisted generation struggles the most), on a RTX3090. Note that most LLMs nowadays use temperature between 0.7 and 0.9. TL;DR -- it's slower than greedy assisted generation, as it is expected, but it will still secure solid speedups e.g. with INT8. <img width="418" alt="Screenshot 2023-04-23 at 14 58 10" src="https://user-images.githubusercontent.com/12240844/233844046-953999b7-3f9b-4f97-bab6-b4e4f50943ef.png">
04-23-2023 14:05:03
04-23-2023 14:05:03
_The documentation is not available anymore as the PR was closed or merged._
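A rough sketch of how assisted generation with sampling is invoked once this lands (checkpoints and prompt are illustrative; `assistant_model` is the argument that enables assistance):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
assistant = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("The history of speculative decoding", return_tensors="pt")
# Lower temperature makes the sampled tokens more predictable, so more of the
# assistant's candidate tokens are accepted and generation finishes faster.
outputs = model.generate(
    **inputs, assistant_model=assistant, do_sample=True, temperature=0.7, max_new_tokens=40
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```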
transformers
22,948
closed
MaskFormerSwin shows as unsupported on the index page
Hello, is there any reason why the MaskFormerSwin shows as unsupported on the `index` page? https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx?plain=1#L364
```
| MaskFormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| MaskFormerSwin | ❌ | ❌ | ❌ | ❌ | ❌ |
| mBART | ✅ | ✅ | ✅ | ✅ | ✅ |
```
I think it is implemented in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/models/maskformer/modeling_maskformer_swin.py). I also found this PR https://github.com/huggingface/transformers/pull/20344 which seemed like it added the model. The model is also missing from the "Supported models" subsection, but I didn't find its paper, so is that part of the reason?
04-23-2023 09:47:13
04-23-2023 09:47:13
@joaocmd Huh, that's odd. Thanks for reporting. Following the shared link, on `main` I see that MaskFormer is shown as a supported model. Perhaps you caught it in a weird moment before a patch was applied? <img width="1025" alt="image" src="https://user-images.githubusercontent.com/22614925/233962005-0e4714bc-3b3f-4548-a964-c0fcfe774b2a.png"> <|||||>^sorry, I just realised that the link went to maskformer but it's `MaskFormerSwin` you're referring to. <|||||>MaskFormersSwin is listed as a "private model" [here](https://github.com/huggingface/transformers/blob/3d3204c025b6b5de013e07dd364208e28b4d9589/utils/check_repo.py#L50). I suspect this is because MaskFormerSwin was added in order to be used as a backbone. @NielsRogge - is this correct? <|||||>Yes ideally it shouldn't be in that public list, it's just there to be used for MaskFormer for backwards compatibility purposes. Users can just use our regular Swin in case they want to use it as backbone for MaskFormer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @amyeroberts and @NielsRogge, should anything be changed so that the model doesn't appear on that public list or should we close this issue?<|||||>@joaocmd If you want to open a PR to fix, I'd be very happy to review :) I don't think it's critical to address however.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,947
closed
[Fix Bugs] Fix keys in `_load_pretrained_model`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the bug when `_load_pretrained_model`. `f'{prefix}.key'` is wrong because the variable `key` is not used is this branch case. And this bug will lead to load some models failed like BLOOM-176B. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-23-2023 07:09:16
04-23-2023 07:09:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger
transformers
22,946
closed
One Question about BlipForConditionalGeneration
### System Info I create a Blip model by: `BlipModel = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")` I want to get the hidden state of the model output. I got this problem: I just used the `BlipModel(pixel_values = ipt)` for inference part. I create a dummy input by `ipt = torch.randn((1,3,384,384))`. When the input batch size is 1, everything works fine. However, when i tried to change the input's batch size to other number, like 2, `ipt = torch.randn((2,3,384,384))`. Then i got this kind of error: **ValueError: Expected input batch_size (2) to match target batch_size (1).** ### Who can help? @younesbelkada, @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. create a model by `BlipModel = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")` 2. create a dummy input by `ipt = torch.randn((2,3,384,384))` 3. get the output of the model by `BlipModel(ipt)` It will get the error. ### Expected behavior I want to get the hidden state of the model output. I give you the correct output when i set the batch as 1. When I set the batch to other number, it will get error. ``` BlipForConditionalGenerationModelOutput(loss=tensor(nan, grad_fn=<AddBackward0>), decoder_logits=tensor([[[-2.4062, -2.4062, -2.4062, ..., -2.4061, -2.4062, -2.4062]]], grad_fn=<ViewBackward0>), image_embeds=tensor([[[-0.7771, -0.0999, 0.0320, ..., -0.6212, 0.8770, -0.1978], [-0.9081, -0.1407, 0.1390, ..., -0.4231, 0.5914, -0.1464], [-1.0026, 0.0212, 0.4119, ..., -0.5520, 0.5102, -0.1100], ..., [-1.2060, -0.0290, 0.0165, ..., -0.5280, 0.3483, -0.0130], [-1.0668, 0.4398, 0.3717, ..., -0.7589, 0.0796, 0.1294], [-1.0077, -0.2549, -0.1857, ..., -0.5054, 0.6910, -0.2062]]], grad_fn=<NativeLayerNormBackward0>), last_hidden_state=tensor([[[-0.7771, -0.0999, 0.0320, ..., -0.6212, 0.8770, -0.1978], [-0.9081, -0.1407, 0.1390, ..., -0.4231, 0.5914, -0.1464], [-1.0026, 0.0212, 0.4119, ..., -0.5520, 0.5102, -0.1100], ..., [-1.2060, -0.0290, 0.0165, ..., -0.5280, 0.3483, -0.0130], [-1.0668, 0.4398, 0.3717, ..., -0.7589, 0.0796, 0.1294], [-1.0077, -0.2549, -0.1857, ..., -0.5054, 0.6910, -0.2062]]], grad_fn=<NativeLayerNormBackward0>), hidden_states=None, attentions=None) ```
04-23-2023 06:55:52
04-23-2023 06:55:52
Hi @Yingshu97 Thanks for the issue! there are few things to keep in mind here: 1- If you want to use `BlipForConditionalGeneration` as a standalone model to retrieve the hidden states and loss value, you need to also pass `input_ids` values, as Blip uses cross attention between textual and visual input. In the provided snippet you did not pass any `input_ids`. If I correctly pass pixel values with a batch size of 2 together with random input ids that have a batch size of 2 it works as expected. The only way to generate captions without having to pass `input_ids` is to call `.generate` method that will initialize the `input_ids` with `decoder_input_ids` and `eos_token_id`. 2- Make sure to use at least the latest release of `transformers`. `pip install --upgrade transformers` The snippet I used is: ```python import torch from transformers import BlipForConditionalGeneration ipt = torch.randn((2, 3, 384, 384)) input_ids = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]]) model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") out = model(pixel_values=ipt, input_ids=input_ids) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,945
closed
๐ŸŒ [i18n-KO] Translated `token_classification.mdx` to Korean
# What does this PR do? Translated the `tasks/token_classification.mdx` file of the documentation to Korean. Thank you in advance for your review! ๐Ÿ˜„ Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- ์ œ์ถœ ์ „ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋กœ, ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ๋งŒ์˜ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋„ <details>๋กœ ๊ฐ์‹ธ์„œ ๋งŒ๋“ค์–ด๋‘๋ฉด ๋” ์ข‹์„ ๊ฒƒ ๊ฐ™์•„์š”. --> ## Who can review? <!-- ๊ฐ€์งœ์—ฐ๊ตฌ์†Œ ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
04-23-2023 03:12:50
04-23-2023 03:12:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>> ์ •๋ง ๋งŽ์€ ๋‚ด์šฉ์ด ๋“ค์–ด ์žˆ๋Š” ๋ฌธ์„œ์˜€์ง€๋งŒ ๋•๋ถ„์— ๊ธˆ๋ฐฉ ์ฝ์„ ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿ˜„ `Named Entity Recognition` -> `๊ฐœ์ฒด๋ช… ์ธ์‹`, `dataset`->`๋ฐ์ดํ„ฐ์…‹` ๋‘ ๊ฐ€์ง€๋ฅผ ํฌํ•จํ•˜์—ฌ ๋ช‡ ๊ฐ€์ง€ ์ˆ˜์ • ์‚ฌํ•ญ์„ ์ œ์•ˆ ๋“œ๋ฆฝ๋‹ˆ๋‹ค. ์ž˜ ๋ถ€ํƒ ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์„ธ์‹ฌํ•œ ๋ฆฌ๋ทฐ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค ๐Ÿค— ๊ฐ์‚ฌ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ์ „๋ถ€ ๋‹ฌ๊ณ  ์‹ถ์€๋ฐ, ์•Œ๋ฆผ์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฐˆ ๊ฒƒ ๊ฐ™์•„์„œ ํ•˜๋‚˜๋งŒ ๋‹ฌ๊ฒ ์Šต๋‹ˆ๋‹ค ๐Ÿ˜ข - `entity`๋ฅผ `๊ฐœ์ฒด`๋กœ glossary๋ฅผ ํฌํ•จํ•˜์—ฌ ์ˆ˜์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ํ›จ์”ฌ ์ดํ•ด๊ฐ€ ์‰ฝ๊ณ  ์ต์ˆ™ํ•ด์„œ ์ข‹์Šต๋‹ˆ๋‹ค! - `๋ฐ์ดํ„ฐ ์„ธํŠธ`, `์ฝœ๋ ˆ์ดํ„ฐ`, `ํ‰๊ฐ€ ์ง€ํ‘œ` ๋‹จ์–ด๋„ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค! ๊ผผ๊ผผํ•˜๊ฒŒ ๋ด์ฃผ์…”์„œ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค :) - ์˜คํƒ€, ๋งž์ถค๋ฒ•, ์ž์—ฐ์Šค๋Ÿฌ์šด ๋ฌธ์žฅ๋„ ๋ชจ๋‘ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋Šฅ๋™ํƒœ๋กœ ๋ฐ”๊พธ๋‹ˆ๊นŒ ์ž˜ ์ฝํž™๋‹ˆ๋‹ค ๐Ÿ™‚<|||||>May you please review this PR? ๐Ÿ˜„ @sgugger, @ArthurZucker, @eunseojo
transformers
22,944
closed
Auto-download is a security hole.
### System Info I just ran a project and it decided to download a completely unrelated dataset, which I didn't want or need. The extraneous download was https://huggingface.co/datasets/allenai/c4, which upon inspection contains 800+ trojan viruses. Are these false positives? I shouldn't have to care unless I'm interested in this specific dataset. I think any network calls should be strictly opt-in, e.g. perhaps `HF_NETWORK_ALLOWED=True python whatever.py` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Run any HF model for the first time. It will make network calls, and download datasets and weights. ### Expected behavior 0 network calls are made, unless opted in to.
04-23-2023 01:07:13
04-23-2023 01:07:13
Hi @freckletonj, thanks for raising this issue. Without knowing which code you're running, it's hard to know what specifically triggered the dataset download (or how unrelated it is). Typically, a dataset would be downloaded if it's requested through the `load_dataset` functionality. However, I see that allenai/c4 dataset [needs to be downloaded through `git clone`](https://huggingface.co/datasets/allenai/c4#how-do-i-download-this). In general, if you've spotted malicious content within a dataset, I'd recommend flagging on the repo (there's already an [open discussion here](https://huggingface.co/datasets/allenai/c4/discussions/2)) You can run transformers in a firewalled or offline mode setting `TRANSFORMERS_OFFLINE=1` in your environment. For datasets, this is `HF_DATASETS_OFFLINE=1`. See: https://huggingface.co/docs/transformers/v4.28.1/en/installation#offline-mode. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
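A small sketch of the offline setup mentioned above (the environment variables must be set before any hub calls are made, and the checkpoint must already be in the local cache; the model name is illustrative):

```python
import os

# Block network access for both transformers and datasets; only locally cached files are used.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # served from the local cache
model = AutoModel.from_pretrained("bert-base-uncased")
```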
transformers
22,943
closed
๐ŸŒ [i18n-KO] Translated `tasks/image_captioning.mdx` to Korean
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated the `tasks/image_captioning.mdx` file of the documentation to Korean. Thank you in advance for your review! - [x] ์ด๋ฏธ์ง€ ์บก์…”๋‹ Image captioning - [x] ํฌ์ผ“๋ชฌ BLIP ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ Load the Pokรฉmon BLIP captions dataset - [x] ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒ˜๋ฆฌ Preprocess the dataset - [x] ๊ธฐ๋ณธ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ Load a base model - [x] ํ‰๊ฐ€ Evaluate - [x] ํ•™์Šต! Train! - [x] ์ถ”๋ก  Inference Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? This is a work on progress. Could you review this PR when I finish this work? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-23-2023 00:41:04
04-23-2023 00:41:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>I quickly done it! ๐Ÿ˜… Would you review this PR?<|||||>LGTM! ๐Ÿค— <|||||>Happy Wednesday! Could you review this PR? ๐Ÿ˜ƒ @sgugger, @ArthurZucker, @eunseojo
transformers
22,942
closed
Raise error if `stride` is too high in `TokenClassificationPipeline`
# What does this PR do? Users were previously not given a warning if they initialized a `TokenClassificationPipeline` with too high a value for `stride` (`stride` is the value that determines how many tokens overlap between chunks if the user choose to split text into chunks). Unfortunately, it's also possible for a `stride` to be too high if the tokenizer happens to introduce special tokens (e.g. `bert-base-cased` has a maximum length of `512`, but each window gets `2` special tokens, so the highest valid `stride` is `509`) , but there's apparently no easy way to check this in advance (i.e. before the tokenizer is run as part of the pipeline). I think it might be worth fixing the error message ("`pyo3_runtime.PanicException: assertion failed: stride < max_len`") when a tokenizer is called with too high a value of `stride`, to clarify to users that added special tokens subtract from the effective window size. I also thought it was worth clarifying slightly the function of the `stride` parameter. The way `stride` works in the context of Huggingface tokenizers is almost the opposite of the way it works in many [other contexts](https://www.kaggle.com/code/ryanholbrook/the-sliding-window/tutorial). Mostly fixes #22789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who should review? @Narsil
04-22-2023 23:15:22
04-22-2023 23:15:22
_The documentation is not available anymore as the PR was closed or merged._
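A minimal usage sketch (the model name is illustrative): `stride` is the number of tokens that overlap between consecutive chunks, and it must leave room for the special tokens the tokenizer adds to each chunk:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
    # With a 512-token limit and 2 special tokens per chunk, any stride above 509 is rejected.
    stride=128,
)
print(ner("Hugging Face is based in New York City and Paris. " * 100))
```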
transformers
22,941
closed
Typo in error message in LlamaAttention
There's a typo in the `ValueError`'s message on line 219: https://github.com/huggingface/transformers/blob/d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c/src/transformers/models/llama/modeling_llama.py#L217-L221 It should be `(bsz, self.num_heads, q_len, kv_seq_len)` as it is in line 217.
04-22-2023 18:33:42
04-22-2023 18:33:42
@othertea, good spot! Would you like to open a PR to fix this? cc @ArthurZucker <|||||>Yes, I've made a PR!
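A tiny standalone sketch of what the corrected check boils down to (function and variable names are illustrative, not the actual module code):

```python
import torch


def check_attn_weights(attn_weights, bsz, num_heads, q_len, kv_seq_len):
    expected = (bsz, num_heads, q_len, kv_seq_len)
    # The expected shape in the error message now matches the shape used in the comparison.
    if attn_weights.size() != expected:
        raise ValueError(f"Attention weights should be of size {expected}, but is {tuple(attn_weights.size())}")


check_attn_weights(torch.zeros(2, 4, 8, 8), 2, 4, 8, 8)  # passes silently
try:
    check_attn_weights(torch.zeros(2, 4, 8, 9), 2, 4, 8, 8)
except ValueError as err:
    print(err)  # message and comparison now report the same expected shape
```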
transformers
22,940
open
Add UDOP
# What does this PR do? This PR adds UDOP as described in [Unifying Vision, Text, and Layout for Universal Document Processing](https://arxiv.org/abs/2212.02623). The model can be seen as an encoder-decoder Transformer with LayoutLMv3 as encoder and a T5 text decoder. Fixes #20650 To do: - [ ] fix `tests/models/udop/test_processor_udop.py::UdopProcessorTest::test_save_load_pretrained_default` - [x] include pytesseract decodings in processor test - [ ] check forward signature of the model as we can't change this afterwards - [ ] update organization to `microsoft`, replace `ArthurZ/udop` everywhere by an official UDOP checkpoint
04-22-2023 15:47:25
04-22-2023 15:47:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22940). All of your documentation changes will be reflected on that endpoint.<|||||>hi @NielsRogge thank you for pushing this PR. I haven't had the chance to try yet, but I'm curious if you have an example or have tried to perform a `torch.jit.trace` or `onnx` conversion on UDOP yet? I know with the previous PR that was where I got stuck.<|||||>@plamb-viso My impression was always that tracing Encoder-Decoder models (e.g. BART) works fine but exporting to ONNX is challenging using jit.trace. There's a research example for BART on how to do that: [Bart + Beam Search to ONNX](https://github.com/huggingface/transformers/tree/main/examples/research_projects/onnx/summarization) I think this part of the reason the ONNX export is now offloaded into optimum: https://github.com/huggingface/transformers/issues/14222#issuecomment-1432960827<|||||>Just want to make sure with the UdopProcessor that we need to manually add the task to each input string. For e.g. if I'm doing document classification, I need to add `document classification.` and `[0,0,0,0]` to my words and bboxes before they go through the processor For e.g.: ```python prompt_text = ['document', 'classification.'] prompt_boxes = [[0,0,0,0],[0,0,0,0]] processor.tokenizer(text=prompt_text, boxes=prompt_boxes) ``` And prepend these input_ids/boxes to the input_ids/boxes that come out of the `processor` (Note that i am using apply_ocr=False)<|||||>Also curious how we should encode the label of a training example. Is it a part of the inputs to `UdopProcessor`? The I-Code example appears to do it [like this](https://github.com/microsoft/i-Code/blob/main/i-Code-Doc/core/datasets/collate_supervised.py#L33)<|||||>thanks @dtiarks looks like a key component of that script is the [BartBeamSearchGenerator](https://github.com/huggingface/transformers/blob/main/examples/research_projects/onnx/summarization/run_onnx_exporter.py#L108) which allows you to convert it to torchscript. Will UDOP have something like this? I tried some of the naive steps I tried in [this comment](https://github.com/huggingface/transformers/pull/21239#discussion_r1129957024) for tracing this new UDOP PR. Looks like the same issues remain. Curious if we'll get a test/example of tracing/compiling/onnx exporting the model either here or in optimum? **EDIT** just a naive try at onnx export in optimum: ```KeyError: "udop is not supported yet.``` And just for completeness, a `torch.onnx.export` gives: ```shell RuntimeError: 0 INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":621, please report a bug to PyTorch. We don't have an op for aten::full_like but it isn't a special case. Argument types: Tensor, bool, int, int, Device, bool, NoneType, Candidates: aten::full_like(Tensor self, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor aten::full_like.out(Tensor self, Scalar fill_value, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) ```<|||||>@plamb-viso Here is the guide to add ONNX export support for a new architecture in Optimum: https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute Feel free to open a PR there and we'll help you if you encounter any issue :slightly_smiling_face: <|||||>Highly anticipating this release! 
:) Keep up the great work<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. > > Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. Definitely still highly interested in this work<|||||>@ArthurZucker does https://github.com/huggingface/transformers/pull/24565 fix the remaining issues of this PR?<|||||>not sure it does no! The added tokens was the issue if I remember correctly <|||||>Ok. The question is how we can move this PR forward? @plamb-viso, @Jordy-VL, I (and probably others) are still definitely interested in this. @NielsRogge are you aware of other issues blocking this PR or do you have other priorities at the moment?<|||||>My current priority is #24629, then it will be the tokenizer PR which seems to be the last blocking factor. In the mean time I think that it should be good to get all the tests green and ask for a review to make it ready for a final one! The tokenizer can be updated after wards ๐Ÿค— sorry for the wait ๐Ÿ˜“ <|||||>No worries @ArthurZucker โ˜บ๏ธ. My comment was not meant to push anyone. I was just interested if I could contribute to speed up the process.<|||||>@ArthurZucker the tokenizer is the only thing left to make all tests green. The PR is ready other than that. The only issue that is remaining are the sentinel tokens that the UDOP author defined (T5 has 100 of them, UDOP a lot more). Those are actually only relevant during pre-training, not during fine-tuning. Hence the model is already perfectly usable. I can only assign core maintainers for review when the CI is more or less green, so will do that once the tokenizer issue is fixed.<|||||>Hi @NielsRogge, are you planning to do one of your wonderful notebook tutorials once this PR is closed? I'm rather curios on how can we approach a token-classification task with a encoder-decoder architecture such as UDOP :)<|||||>> Hi @NielsRogge, are you planning to do one of your wonderful notebook tutorials once this PR is closed? I'm rather curios on how can we approach a token-classification task with a encoder-decoder architecture such as UDOP :) You can already check pix2struct ;) <|||||>Ok! Let me have a second look at the tokenizer then! There are quite a few issues currently with `spm` and `AddedToken` being taken care of! <|||||>You have to manually add the tokens, and that can't be done in the init with the current API, but this allows us to remove the crazy regex in encoding. <|||||>Eagerly anticipating this PR being merged. Is there any information on priority of this work and rough timelines? Thank you @ArthurZucker and @NielsRogge for your great work. <|||||>Regarding the priority, not really sure. I won't really have time to dive deep in this before a few weeks. If a contributor wants to work on this feel free to take over! 
<|||||>Update: we're down to 2 failing tests: ``` FAILED tests/models/udop/test_processor_udop.py::UdopProcessorTest::test_save_load_pretrained_default - AssertionError: {'โ–backing': 16057, 'โ–Brunswick': 29980, 'S[629176 chars]7501} != {'<pad>': 0, '</s>': 1, '<unk>': 2, 'โ–': 3,[624686 chars]4401} FAILED tests/models/udop/test_tokenization_udop.py::UdopTokenizationTest::test_save_slow_from_fast_and_reload_fast - ValueError: Non-consecutive added token '('<extra_id_99>', 0.0)' found. Should have index 34602 but has index 33201 in saved vocabulary. ``` @ArthurZucker can you clarify how you pushed https://huggingface.co/ArthurZ/udop?
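For context on the sentinel-token blocker discussed above, here is a minimal sketch of how extra `<extra_id_*>` sentinels can be appended to a T5-style tokenizer from user code. It is only an illustration: the range 100 to 300 is an arbitrary placeholder, `t5-base` stands in for the UDOP checkpoint, and this is not necessarily how the PR itself resolves the issue.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# T5 already ships <extra_id_0> ... <extra_id_99>; append extra sentinels on top of those.
extra_sentinels = [f"<extra_id_{i}>" for i in range(100, 300)]
num_added = tokenizer.add_tokens(extra_sentinels, special_tokens=True)

# The embedding matrix has to grow accordingly.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} sentinel tokens; vocab size is now {len(tokenizer)}")
```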
transformers
22,939
closed
AttributeError: 'MarianMTModel' object has no attribute 'generation_config'
```python
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-swc")

def translate(text):
    translated = model22.generate(**tokenizer(text, return_tensors="pt", padding=True).to("cpu"))
    return [tokenizer.decode(t, skip_special_tokens=True) for t in translated][0]
```

Error when trying to deploy model on streamlit
04-22-2023 14:28:17
04-22-2023 14:28:17
Hi @kigenchesire, thanks for raising an issue! So that we can best help you, can you make sure to follow the issue template and share: * A full traceback of the error * A code example that we can run to reproduce the error * The running environment: run `transformers-cli env` in the terminal and copy-paste the output<|||||>I am finetuning a translation model using pytorch after fine-tuning and saving it using torch.save(model , 'model24.pt') when i try to deploy the model on streamlit using this code tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-swc") def translate(text): translated = model22.generate(**tokenizer(text, return_tensors="pt", padding=True).to("cpu")) return [tokenizer.decode(t, skip_special_tokens=True) for t in translated][0 I run into [MarianMTModel' object has no attribute 'generation_config'] The model does well on colab notebook<|||||>@kigenchesire When saving a transformers model, it's recommended to use `model.save_pretrained(checkpoint_name)`. This ensures everything, including any necessary files such as the model config are saved alongside the weights. <|||||> TL;DR; `pip install transformers<4.28` --- With different project, had same error message. (Maybe unexpectedly?) It seems to break backward compatibility, so that makes issue on several project that was built on old `TrainingArguments`. In my case, with https://github.com/alexa/massive, error log was like ``` Traceback (most recent call last): File "massive/scripts/train.py", line 102, in <module> main() File "massive/scripts/train.py", line 89, in main trainer = trainer_cls( File "massive/src/massive/utils/trainer.py", line 264, in __init__ super().__init__(*args, **kwargs) File "transformers/trainer_seq2seq.py", line 72, in __init__ if self.args.generation_config is not None: AttributeError: 'MASSIVETrainingArguments' object has no attribute 'generation_config' ``` I found `generation_config` concept(?) created with https://github.com/huggingface/transformers/commit/5506d0496957cde19318eee3d34ee682b654abe8, which is I think the cause of this issue. So one who suffers from with this issue, use `transformers<4.28`, which means before the first version applied above commit. Also I suggest `transformers` project to check whether `generation_config` attribute even exists before checking it is `None` or not, if maintainers think it is required change to keep backward compatibility or to be a bit more safer code :) https://github.com/huggingface/transformers/blob/46d2468695a85dfcc2be0524caa912edefcf2391/src/transformers/trainer_seq2seq.py#L72 Of course user may use `Seq2SeqTrainingArguments` to use with `Seq2SeqTrainer`, not `TrainingArguments`. <|||||>@cgbahk It don't believe it's necessary to downgrade the transformers version. If there's a model which was created before generation configs were introduced, then you can load and resave and the generation config will be created e.g. for MarianMT ```python from transformers import MarianMTModel model = MarianMTModel.from_pretrained(my_checkpoint) model.save_pretrained(my_checkpoint) ``` @gante Is this correct? Are there any other considerations for the generation config? <|||||>Oh, I didn't know `MarianMTModel` is of transformer builtin. In my case, `MASSIVETrainingArguments` is custom built class which don't know about `generation_config`. I don't fully understand, transformers internal, but https://github.com/huggingface/transformers/issues/22939#issuecomment-1550749034 resolved my case. 
Hopefully that resonates with anyone else who encounters the same error :)<|||||>@cgbahk No worries - it's a big library! And commenting on what resolves issues is useful for everyone :) To your first comment, yes, we certainly want to make sure things are backwards compatible. In this case, it seems that the docs aren't clear. It is recommended to use `Seq2SeqTrainingArguments` for `Seq2SeqTrainer`, however the `args` input type is listed as `TrainingArguments` [here](https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/trainer#transformers.Seq2SeqTrainer). For `MASSIVETrainingArguments` - subclassing from `Seq2SeqTrainingArguments` should be enough to resolve the issue if doing seq2seq training. <|||||>Came here to give pretty much the same reply as @amyeroberts just did :) Also, we can't ensure backward compatibility when classes get overwritten, it's impossible to anticipate all changes in advance 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
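To summarize the thread, a short sketch of the two fixes suggested above. `my_checkpoint` and `MyTrainingArguments` are placeholders; which fix applies depends on whether the error comes from an old checkpoint or from a custom `TrainingArguments` subclass.

```python
from dataclasses import dataclass
from transformers import MarianMTModel, Seq2SeqTrainingArguments

# Fix 1: re-save a checkpoint created before generation configs existed so that a
# generation_config.json is written next to the weights.
model = MarianMTModel.from_pretrained("my_checkpoint")
model.save_pretrained("my_checkpoint")

# Fix 2: custom training arguments used with Seq2SeqTrainer should subclass
# Seq2SeqTrainingArguments (not TrainingArguments) so the generation_config field exists.
@dataclass
class MyTrainingArguments(Seq2SeqTrainingArguments):
    my_extra_option: bool = False  # hypothetical extra field
```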
transformers
22,938
closed
num_noise_spans should be <= num_items #22246
# What does this PR do? <!-- Remove if not applicable --> Fixes #22246 When `mean_noise_span_length` is set to 1 there are cases (for example `noise_density=.55` when the `num_noise_spans` becomes greater than num_nonnoise_tokens So the correction seems to be to consider also the `num_nonnoise_tokens` in calculation of `num_noise_spans` num_noise_spans = int(np.round(min(num_noise_tokens,num_nonnoise_tokens) / self.mean_noise_span_length)) Demonstration of the buggy behaviour https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8#file-t5_denoising-py Demonstration of the possible correction https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8#file-t5_denoising_corrected-py ## Who can review? @sanchit-gandhi
04-22-2023 14:04:21
04-22-2023 14:04:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks- Done - Did `make style` and pushed the changes to this branch<|||||>Thanks @alexcpn - unfortunately the CI is still unhappy about the code style! Could you try rebasing onto main, run style fix, and then force pushing? ``` git rebase upstream/main make style git push -f issue-22246 ```<|||||>I have rebased and checked and force-pushed. It is actually fine locally. ``` git branch * issue-22246 main $ make style black examples tests src utils setup.py All done! โœจ ๐Ÿฐ โœจ 2380 files left unchanged. ruff examples tests src utils setup.py --fix make: ruff: No such file or directory make: *** [Makefile:69: style] Error 127 ``` Understood the problem - I was having black 22.x and CircleCI is using 23.x - https://github.com/huggingface/transformers/pull/21480<|||||>@sanchit-gandhi CI is green<|||||>Awesome, nice find @alexcpn ๐Ÿ™Œ Let's get a final review and get the PR merged!
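A small numeric walk-through of the fix, using the parameter values from the linked issue (`noise_density=0.55`, `mean_noise_span_length=1`); the sequence length of 20 is an arbitrary example.

```python
import numpy as np

length = 20
noise_density = 0.55
mean_noise_span_length = 1

num_noise_tokens = int(np.round(length * noise_density))   # 11
num_nonnoise_tokens = length - num_noise_tokens             # 9

# Old formula: spans computed from noise tokens alone can exceed the number of
# non-noise tokens available to separate them.
old_num_noise_spans = int(np.round(num_noise_tokens / mean_noise_span_length))  # 11 > 9

# Corrected formula from this PR: cap by the smaller of the two counts.
new_num_noise_spans = int(np.round(min(num_noise_tokens, num_nonnoise_tokens) / mean_noise_span_length))  # 9
print(old_num_noise_spans, new_num_noise_spans)
```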
transformers
22,937
closed
Add return type hint to AutoModel.from_pretained
### Feature request I think the ergonomics of using, e.g. `AutoModelForSequenceClassification.from_pretrained(...)` can be improved. Consider the following example: ```python model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") ``` It's quite hard to reason about `model` since we don't really know _what_ it is without inspecting it at runtime... More concretely, I wanted to freeze BERT's internal parameters but not those of the classifier layer. ### Motivation The productivity of developers using the automodel API can be improved; code can be checked by linters more thoroughly etc. ### Your contribution I'd like to open a PR; but am not sure what the best return type is for `AutoModelForSequenceClassification.from_pretrained`. If we could discuss this here and reach some sort of consensus that this is desirable I will draft a pull request.
04-22-2023 13:06:15
04-22-2023 13:06:15
Hi @JosephSBoyle, thanks for raising this issue. Could you give some more information about the behaviour you expect and expand on "we don't really know what it is without inspecting it at runtime..." ? As I understand the issue, there's three different points being raised: * Knowing what the model "is" * How to easily modify the model's behaviour * IDE integration i.e. linters Is this correct? <|||||>Hia @amyeroberts, apologies my writing in the original issue is a bit unclear. To elaborate on my "not knowing what it is" statement, basically my point is that it's quite difficult to know the type of whatever model instance is returned when you call `from_pretrained`. The "at runtime" part of that statement was in reference to basically inspecting the returned instance e.g. in a breakpoint; which is what I ended up doing. I think that my points can best be summarized as: "without the return type the `AutoModelForSequenceClassification.from_pretrained` method is difficult to use effectively." <|||||>The `AutoXxx.from_pretained(checkpoint)` API is essentially a factory method, loading the architecture / model specified by `checkpoint`. So, for AutoModelForSequenceClassification, any model which has a [sequence classification head](https://github.com/huggingface/transformers/blob/d6f1da6b7169e3b2bcc2fcdc91a19171ecafeb88/src/transformers/models/auto/modeling_auto.py#L641) can be returned. As such, there isn't a predefined type (other than being a subclass of `PreTrainedModel`). After loading a model, it's possible to check its class: ```python In [1]: from transformers import AutoModelForSequenceClassification In [2]: model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") In [3]: type(model) Out[3]: transformers.models.bert.modeling_bert.BertForSequenceClassification ``` Specific model architectures can be loaded directly too: ```python In [1]: from transformers import BertForSequenceClassification In [2]: model = BertForSequenceClassification.from_pretrained("bert-base-uncased") In [3]: type(model) Out[3]: transformers.models.bert.modeling_bert.BertForSequenceClassification ``` From the model config, it's possible to find which model architecture will be loaded e.g. [here](https://huggingface.co/bert-base-uncased/blob/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json#L3) for `bert-base-uncased`. Note: for this checkpoint, all of the weights for the base model would be loaded in, the weights for the language modeling head discarded, and weights for the classification head randomly initialised. <|||||>Mhmm, I understand that the return type varies based on the first arg. as you describe. Since we know that it's a subclass of `PreTrainedModel` I think we should at least add that in as the return type, something like this perhaps: ```python # auto_factory.py from typing import TYPE_CHECKING if TYPE_CHECKING: from ...modeling_utils import PreTrainedModel ... def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) -> type[PreTrainedModel]: ... ``` **Edit:** corrected the rtype to `type[PreTrainedModel]`, as the returned type will be a subclass of this type. 
--------------------------------------------------- Your first cell is actually what I ended up doing to find the type of the returned instance, this is actually what I meant by my earlier runtime comment.<|||||>Except that this type hint does not help anyone understand the result, so is it really useful to bloat the code to add it?<|||||>Is it not better to know that the returned instance is a `PreTrainedModel` rather than literally `Any`? @sgugger <|||||>Given the fact that the class is `AutoModel`, I don't think anyone will think in good faith this will return anything else than a model.<|||||>I think there's some misunderstanding here, nobody thinks that this model is returning anything other than a model. The purpose of adding a type hint is to enable things like static analysis tools to work properly, which they can't do without knowledge of the return type. <|||||>I think there is some misunderstanding indeed. Transformers does not support any static analysis tool like Mypy and never will, as it would require us to add type annotations that bloat the code. In all our experiments this makes the code harder to read without ever catching any useful bug. We only use type annotations when useful for the doc (in particular seeing the signature in an IDE with type annotations when an argument's type is not obvious) but that is all.<|||||>I didn't ask for MyPy support, just a single type hint. Static analysis includes things like linting which are what I'm talking about. For a single type hint you get: - Attribute and method completion for attrs of `PreTrainedModel` - Linting, which can for instance, identify when any of the methods are called with the incorrect signature. - Better readability: programmers don't have to dig through the library to figure out that no matter what the first argument to `from_pretrained` is, they will recieve a subclass of the same class. Moreover, you reduce the cognitive burden on users who don't have the entire API of `PreTrainedModel` memorized to heart.<|||||>Except that it's not just a single type hint. The `from_pretrained` method in `auto_factory` can either return a `PreTrainedModel`, a `TFPreTrainedModel` or a `FlaxPreTrainedModel` depending on the class it was used with. If you find a way to have a simple type hint, we will of course merge such a PR, but I don't think it's easy to add.<|||||>Hi @sgugger, thank you for the explanation - I wasn't aware that there were multiple possible return types.
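Since the library is not adding the annotation, a user-side workaround is to wrap the factory call and annotate the return type yourself. This is only a sketch in user code, not a change to `transformers`:

```python
from transformers import AutoModelForSequenceClassification, PreTrainedModel

def load_classifier(checkpoint: str) -> PreTrainedModel:
    """Wrap the factory call so linters know the result is at least a PreTrainedModel."""
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
    assert isinstance(model, PreTrainedModel)
    return model

model = load_classifier("bert-base-uncased")
# Attributes of PreTrainedModel now resolve for static analysis, e.g. freezing the base model:
for param in model.base_model.parameters():
    param.requires_grad = False
```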
transformers
22,936
closed
Avoid invalid escape sequences, use raw strings
# What does this PR do? Fixes invalid escape sequences in strings, they are illegal in python, and will throw a SyntaxError if run with "-W error", and always throw a syntax error starting with python-3.12 (planned). With python between 3.6 and 3.11 and "-W default" they produce a DeprecationWarning: ``` > python -W error -c '"(.*?)-\d{5}-of-\d{5}"' ๎‚ฒ โœ” ๎‚ฒ File "<string>", line 1 "(.*?)-\d{5}-of-\d{5}" ^^^^^^^^^^^^^^^^^^^^^^ SyntaxError: invalid escape sequence '\d' ``` This has been fixed in the past, e.g. https://github.com/huggingface/transformers/pull/4924 - but missing linter support and the fact that python only fails this with "-W" flag set has let the issue be re-introduced. This PR fixes those occurrences and enables ruff "W605" error, which will prevent this for the future. ## Who can review? Maybe @sgugger who added ruff in the first place or (Why was "W605" disabled when switching to ruff?) @patrickvonplaten wo recently introduced some invalid escape sequences @LysandreJik who merged the linked fix from 2020
04-22-2023 11:53:53
04-22-2023 11:53:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failures are unrelated to this PR and due to the last release of huggingface_hub. All is fixed on main so merging :-)<|||||>Rebased on top of current main branch to pass the tests. <|||||>> Rebased on top of current main branch to pass the tests. Does not seem to help the tests, let me know if I should do anything else for this PR<|||||>The Hub is currently having high-response times due to some abusive traffic, which is why the tests are all red. I just forgot to push the merge button yesterday, so merging this as it shouldn't have any negative impact on main.
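For readers who hit rule W605 in their own code, the fix discussed in this PR boils down to using raw strings for regex patterns, for example:

```python
import re

# Invalid escape sequence in a normal string literal (flagged by ruff rule W605):
#     pattern = "(.*?)-\d{5}-of-\d{5}"
# Fix: use a raw string so the backslash reaches the regex engine untouched.
pattern = r"(.*?)-\d{5}-of-\d{5}"
print(re.fullmatch(pattern, "pytorch_model-00001-of-00002"))
```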
transformers
22,935
open
[Doc] `add_special_tokens`'s documentation is ambiguous
### System Info - `transformers` version: 4.28.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.13.2 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m") print(tok.bos_token) print(tok.eos_token) print(tok.bos_token_id) print(tok.eos_token_id) print(tok("the dog walked", add_special_tokens=True)) ``` outputs ``` <|endoftext|> <|endoftext|> 0 0 {'input_ids': [783, 4370, 7428], 'attention_mask': [1, 1, 1]} ``` ### Expected behavior I expect it to output `[0, 783, 4370, 7428, 0]`. Or am I misunderstanding what `add_special_tokens` is supposed to do?
04-22-2023 00:40:55
04-22-2023 00:40:55
The `add_special_tokens`, when set to `True` is used to add special tokens at the beginning and at the end of the input sequence. In your case, since you are using a single input sequence, the tokenizer will add the special tokens `[CLS]` and `[SEP]` respectively at the beginning and at the end of the sentence. Note that not all tokenizers support adding special tokens. If a tokenizer does not support adding special tokens, setting `add_special_tokens` to `True` will have no effect. You are using the "**EleutherAI/pythia-70m**" tokenizer which does not have a specific token for `[CLS]` and `[SEP]`. These tokens are represented by the `bos_token` and `eos_token`, respectively. Hence, the output you are seeing is correct and corresponds to the tokenized input sequence with the added special tokens. If you want to add `[CLS]` and `[SEP]` tokens to your input sequence using this tokenizer, you can do so by explicitly specifying the token IDs for these tokens, like this: ```python input_ids = tok.encode("the dog walked", add_special_tokens=False) input_ids = [tok.bos_token_id] + input_ids + [tok.eos_token_id] attention_mask = [1] * len(input_ids) output = {"input_ids": input_ids, "attention_mask": attention_mask} print(output) ```<|||||>Thanks for explaining. Can this behavior be added to the docs for the transformer tokenizer class? Nowhere on the [API docs](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer) does it say that `add_special_tokens=True` will add the cls and sep tokens. One might naturally assume that BOS and EOS would be the natural ones to place before and after a sequence!<|||||>You can also define these tokens when initialising the model or after. `tokenizer.cls_token = "[CLS]"` should be working. I agree that the doc should be clearer. Thanks for reporting the confusion <|||||>I am waiting until the added tokens refactoring is finish to make sure this is fixed, and update the doc!
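To make the behaviour concrete, a small comparison sketch (the exact token ids will differ per tokenizer; the point is only that BERT's tokenizer defines `[CLS]`/`[SEP]` while the Pythia one does not):

```python
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
gpt_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

# BERT defines cls/sep tokens, so add_special_tokens=True wraps the sequence with them.
print(bert_tok("the dog walked", add_special_tokens=True)["input_ids"])

# The Pythia tokenizer defines no cls/sep (only <|endoftext|> as bos/eos), so nothing is
# added automatically and bos/eos have to be prepended/appended manually, as shown above.
ids = gpt_tok("the dog walked", add_special_tokens=True)["input_ids"]
print([gpt_tok.bos_token_id] + ids + [gpt_tok.eos_token_id])
```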
transformers
22,934
closed
llm finetuning is overfitting?
So far all my attempts, with different models (BLOOM, GPT), sizes, the accelerate framework, and datasets, have led to one issue: the evaluation loss keeps increasing. Please see my log (DeepSpeed): ![image](https://user-images.githubusercontent.com/738834/233752055-0f4fbda2-3641-4e6d-9889-291c21464e4c.png)
04-22-2023 00:30:07
04-22-2023 00:30:07
Hard to really tell without specific dataset info, training procedure, and the model parameter count BUT: I can't speak for your other attempts but this picture doesn't seem unusual. The eval loss decreases until epoch=0.94 but increases at epoch=1.25 and onwards. That implies that training is good for one epoch. Depending on the size of the dataset, models can easily start overfitting after one finetuning epoch (since it's just repeating the data). I assume this is finetuning, not pretraining? Finetuning with adapters may work better. <|||||>Hi, @paulcx thanks for raising an issue! This is probably a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. If you suspect that the issue is coming from the library itself, could you follow the issue template and give more information about what is being run (environment and reproducible code snippet) so that we can best help you? <|||||>> Hard to really tell without specific dataset info, training procedure, and the model parameter count BUT: > > > > I can't speak for your other attempts but this picture doesn't seem unusual. The eval loss decreases until epoch=0.94 but increases at epoch=1.25 and onwards. That implies that training is good for one epoch. Depending on the size of the dataset, models can easily start overfitting after one finetuning epoch (since it's just repeating the data). I assume this is finetuning, not pretraining? > > > > Finetuning with adapters may work better. > > That's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation.<|||||>> That's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation. Yes, one epoch seems to be enough for this run. Going any further would likely require hyperparameter tuning and/or a larger dataset. Some of my models also begin overfitting after one finetuning epoch (around ~900k samples in my dataset - I don't know how large your dataset is). Other projects may be using a different/larger dataset? Even if not, that's not too uncommon. They can finetune for a few more epochs than needed and then evaluate their checkpoints on a test set. The best performing checkpoint is then selected (which could be from a few epochs prior to the latest).<|||||>> > That's right. I'm trying finetuning. I knew pretraining and Lora finetuning works as expected. I just wonder if anyone have same issue. Does that mean one epoch is about overfitting? I saw a lot of open source projects and they finetuned 3 or 4 epoches with no explanation. > > > > Yes, one epoch seems to be enough for this run. Going any further would likely require hyperparameter tuning and/or a larger dataset. Some of my models also begin overfitting after one finetuning epoch (around ~900k samples in my dataset - I don't know how large your dataset is). > > > > Other projects may be using a different/larger dataset? Even if not, that's not too uncommon. They can finetune for a few more epochs than needed and then evaluate their checkpoints on a test set. 
The best performing checkpoint is then selected (which could be from a few epochs prior to the latest). My dataset is only about 90K samples. The one-epoch 'theory' is quite interesting. It seems that people don't talk about this issue and simply ignore overfitting.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
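A hedged illustration of the checkpoint-selection strategy described above (train slightly past the optimum, then keep the best checkpoint by eval loss). The values are placeholders, and `model`, `train_ds` and `eval_ds` are assumed to exist already:

```python
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="finetune-out",
    num_train_epochs=4,                  # train a bit longer than needed ...
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # ... but keep the checkpoint with the lowest eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
```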
transformers
22,933
closed
Flan-T5-small and T5-small have different number of layers?
### System Info Hi there, it appears that `google/flan-t5-small` and `t5-small` have different number of layers: - Flan-T5 config: https://huggingface.co/google/flan-t5-small/blob/main/config.json - T5 config: https://huggingface.co/t5-small/blob/main/config.json I only find this inconsistency with *small*. The rest of the sizes (base/large/3b/11b) seem to match up for these two sets of models. I have not been able to find much information on this. Is there a reason for Flan-T5-small to have more layers than its non instruction-tuned counterpart? I assume they should be equal. Thank you! ### Who can help? CC: @sgugger
04-21-2023 21:50:48
04-21-2023 21:50:48
cc @younesbelkada <|||||>@sgugger Ah perhaps the authors inherited the flan-t5-small checkpoint from the improved `google/t5-v1_1-small` instead of `t5-small`. Its config file is a better match.<|||||>Hi @taidnguyen, This is absolutely correct, According to the `t5x` repository, flan-t5 are derived from the `t5-v1_1` family as their config files refer to the config files of the `t5-v1_1` models. This information can be found on the original repository that hosts the original flan-t5 weights: https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints Hope this helps!<|||||>@younesbelkada That helps - thank you! They documented this in their repo (as you show) but not in their paper, so definitely a surprise to me assuming that the inherited checkpoint of Flan-T5 is the original T5. I will close the issue.
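The lineage is easy to verify from the configs alone, for example:

```python
from transformers import AutoConfig

for ckpt in ("t5-small", "google/t5-v1_1-small", "google/flan-t5-small"):
    cfg = AutoConfig.from_pretrained(ckpt)
    print(ckpt, "num_layers:", cfg.num_layers, "d_ff:", cfg.d_ff, "num_heads:", cfg.num_heads)
```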
transformers
22,932
closed
LlamaTokenizer should follow signature of PreTrainedTokenizer
### Feature request PreTrainedTokenizer has a signature such that if the eos/bos tokens shouldn't be applied, they're set to None in the constructor. LlamaTokenizer (which is a subclass of PreTrainedTokenizer) instead always sets these, and adds a `add_eos/bos_token` field to enable/disable them. This breaks code that depended on the behavior of the base class in order to detect how to form sequences, eg when doing custom tokenizations using `tokenizer(stuff, add_special_tokens=False)` to build pieces of a sequence and then manually adding the EOS/BOS tokens. cc @zphang @ArthurZucker from git blame ### Motivation - ### Your contribution -
04-21-2023 20:37:36
04-21-2023 20:37:36
The default `eos_token` and `bos_tokens` are there because the `sentence piece` model has these set, which means we are following the `llama` implementation. Having `add_eos` and `add_beo` gives the flexibility of enabling the addition, while not having to set the tokens to do so. This might not fit your specific usage, but most of our tokenizer work that way! I am not sure I understand why it would break in your case, but you can easily set the `eos` and `bos` to `None`, the same goes with the `add_eos` and `add_bos` that you can set to `False`<|||||>This is even more confusing after I was told that the normal transformers tokenizers add CLS and SEP to sequences by default when `add_special_tokens=True`, but in this class, you're (optionally) instead adding BOS and EOS instead. Simply setting eos/bos to None works, but a user isn't expecting to have to do this to get behavior that's compatible with the base class. And tokenization bugs tend to be very subtle and take a lot of time to track down - this issue didn't crash my code, it just silently inserted an extra token (which caused havoc downstream). The point of inheritance is that a subclass should have the same public interface as the parent, so that a user just has to conform with the interface of the parent and can expect all subclasses to just work. This isn't the case here. <|||||>Thanks for educating me on inheritance, I understand your use-case and how this can be confusing. The problem is that in order to keep the information of the content of these tokens, while not necessarily adding them prevents us from setting them to `None`. Indeed this breaks your code, but it also allows people to use the tokenizer in another way. They can decide whether to add or not the eos and bos depending on the usage. Overall it's a design choice and lots of other tokenizer don't respect this either. I am sorry that it broke your pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
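For anyone who prefers behaviour closer to the base class, a minimal sketch using the `add_bos_token`/`add_eos_token` flags mentioned above; the checkpoint path is a placeholder and requires access to converted LLaMA weights:

```python
from transformers import LlamaTokenizer

tok = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer", add_bos_token=False, add_eos_token=False)

# With both flags off, nothing is injected automatically, so sequence pieces can be built
# with add_special_tokens=False and the special tokens placed by hand:
ids = tok("some text", add_special_tokens=False)["input_ids"]
ids = [tok.bos_token_id] + ids + [tok.eos_token_id]
```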
transformers
22,931
closed
Using decoder_input_ids with Seq2SeqTrainer.predict()
Hi, Is there a way to use `decoder_input_ids` in `Seq2SeqTrainer.predict()` as in `model.generate()`? The goal is to generate sentences with both the encoder input and decoder input to initialize the generation. Thank you very much!
04-21-2023 20:27:22
04-21-2023 20:27:22
cc @sgugger <|||||>cc @gante <|||||>Hey @zhenduow ๐Ÿ‘‹ [This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`). Could you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :)<|||||>> Hey @zhenduow ๐Ÿ‘‹ > > [This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`). > > Could you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :) Hi @gante , Thank you very much for the reply! I have checked the PR and I have a further question. I pass the `decoder_input_ids` to `model.generate()` by `inputs['decoder_input_ids']` within `Seq2SeqTrainer`, is that right? By doing this, I need to batch the `decoder_input_ids` to a tensor, which requires padding or truncating my `decoder_input_ids`. However, my generation task has various length of `decoder_input_ids`, which causes error when batching `decoder_input_ids` into a tensor. For example, my `decoder_input_ids` looks like: [ [1,2,3], [4,5], [6] ] It cannot create a tensor because the lengths of the three lists do not match. Is there a way to solve this problem? Thank you very much!<|||||>@zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation) BTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐Ÿค—<|||||>> @zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation) > > BTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐Ÿค— Thank you! I should ask this in the forum.<|||||>> Hey @zhenduow ๐Ÿ‘‹ > > [This PR](https://github.com/huggingface/transformers/pull/22772), which allows passing `decoder_input_ids` as part of the input to the `Seq2SeqTrainer`, was merged after the latest release (`v4.28`). > > Could you try installing from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`), and check whether it works correctly on your use case? :) > @zhenduow you probably need to pad `decoder_input_ids` -- see [this guide](https://huggingface.co/docs/transformers/main/en/pad_truncation) > > BTW, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐Ÿค— Thank you! I solved the tensor problem with padding and got results. However, my results do not start with the `decoder_input_ids`. 
I want to double check in case this is a bug that: Do I need to pass any additional argument to `Seq2SeqTrainer` (which will tell the decoder to start with the given ids) besides adding `decoder_input_ids` as a key in the dataset dictionary? <|||||>Try passing `labels` and `decoder_input_ids`: if my memory is correct, the former will be used to obtain the evaluation metrics, and the later as the prompt for the decoder<|||||>> Try passing `labels` and `decoder_input_ids`: if my memory is correct, the former will be used to obtain the evaluation metrics, and the later as the prompt for the decoder Thank you for the suggestion! I try to pass the `decoder_input_ids` to the forward function, but because I use trainer, I don't have control over the `model()` function. I only can add `decoder_input_ids` as a key in the model input dictionary. That does not seem to work. I dive into the code and find that there is this line of code in the `predict()` in `trainer.py`: https://github.com/huggingface/transformers/blob/15f260a82f98788354d55cb2788e9f0b5131fb77/src/transformers/trainer.py#LL3101C1-L3101C1 `test_dataloader = self.get_test_dataloader(test_dataset)` This line of code changes my `test_dataset['decoder_input_ids']` from my custom decoder prompts to shifted `labels`. Can you please check if this is intended or a bug? Why is this the case?<|||||>I was not sure of the behavior, it seems my memory was incorrect :) Alternatively, this one will work for sure: you can set `forced_decoder_ids` ([docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.forced_decoder_ids)), which will force the tokens you specify in the position you define. You can use it to force a starting sequence, assuming it is the same for all members of the batch.<|||||>Thanks! Can you please explain how I can use `forced_decoder_ids` with `trainer`? It seems like I cannot call the `generate()` function anywhere, only the `model()` function. Can I use `forced_decoder_ids` with `model()`? <|||||>@zhenduow you can define a generation config ([docs 1](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig) [docs 2](https://huggingface.co/docs/transformers/main/en/generation_strategies#default-text-generation-configuration)) and pass it to the trainer (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args_seq2seq.py#L47)). If you parameterize `forced_decoder_ids` in the generation config, it will be passed to `.generate` at evaluation time<|||||>> @zhenduow you can define a generation config ([docs 1](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig) [docs 2](https://huggingface.co/docs/transformers/main/en/generation_strategies#default-text-generation-configuration)) and pass it to the trainer (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args_seq2seq.py#L47)). > > If you parameterize `forced_decoder_ids` in the generation config, it will be passed to `.generate` at evaluation time I did as you suggested and printed: `print(trainer.model.generation_config)` , which shows me that ``` GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "forced_decoder_ids": [ [ 1, 123 ] ], "pad_token_id": 0, "transformers_version": "4.29.0.dev0" } ``` The [1,123] is for testing. 
However, the generation is still the same as before. Is there anything wrong here?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
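For reference, this is the parameterization suggested above, written out. The token ids are placeholders, and since the thread ends unresolved it may still need verification for a given setup:

```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments

# Force the decoder to emit specific token ids at specific positions during generation.
gen_config = GenerationConfig(max_new_tokens=64, forced_decoder_ids=[[1, 123], [2, 456]])

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_config=gen_config,   # picked up by Seq2SeqTrainer at evaluation/prediction time
)
```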
transformers
22,930
closed
vilt_model
# What does this PR do? As per issue #22561, model parallelism is implemented for the ViLT model. @sgugger please review it; if any changes are needed, please let me know.
04-21-2023 20:18:34
04-21-2023 20:18:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks!
transformers
22,929
closed
SAM example code does not work
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-3.10.0-957.12.2.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # 2D location of a window in the image inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) outputs = model(**inputs) masks = processor.image_processor.post_process_masks( outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu() ) scores = outputs.iou_scores ### Expected behavior --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-5-abdc2d7068b8> in <module> 4 5 inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) ----> 6 outputs = model(**inputs) 7 8 masks = processor.image_processor.post_process_masks( ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, pixel_values, input_points, input_labels, input_boxes, input_masks, image_embeddings, multimask_output, output_attentions, output_hidden_states, return_dict, **kwargs) 1331 ) 1332 -> 1333 sparse_embeddings, dense_embeddings = self.prompt_encoder( 1334 input_points=input_points, 1335 input_labels=input_labels, ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, input_points, input_labels, input_boxes, input_masks) 669 if input_labels is None: 670 raise ValueError("If points are provided, labels must also be provided.") --> 671 point_embeddings = self._embed_points(input_points, input_labels, pad=(input_boxes is None)) 672 sparse_embeddings = torch.empty((batch_size, point_batch_size, 0, self.hidden_size), device=target_device) 673 sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=2) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in _embed_points(self, points, labels, pad) 619 padding_point = torch.zeros(target_point_shape, device=points.device) 620 padding_label = 
-torch.ones(target_labels_shape, device=labels.device) --> 621 points = torch.cat([points, padding_point], dim=2) 622 labels = torch.cat([labels, padding_label], dim=2) 623 input_shape = (self.input_image_size, self.input_image_size) RuntimeError: Expected object of scalar type double but got scalar type float for sequence element 1.
04-21-2023 19:51:38
04-21-2023 19:51:38
Hello @YubinXie Thanks for the issue! I did not managed to reproduce your issue with `torch==1.13.1`, and here is the snippet I used: ```python from PIL import Image import requests import torch from transformers import AutoModel, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" model = AutoModel.from_pretrained("facebook/sam-vit-base").to(device) processor = AutoProcessor.from_pretrained("facebook/sam-vit-base") img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # 2D location of a window in the image inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs) ``` I can see that you are using `torch==1.5.x`. Note that `transformers` has a minimum required version of `1.9` for `torch`: https://github.com/huggingface/transformers/blob/main/setup.py#L180 - hence I have tried to run that script with `torch==1.9.1` and did not encountered the issue. I strongly recommend you to install a greater version of `torch` (i.e. use at least the version `1.9`). Could you try to update `torch` and let us know if you still face the issue?<|||||>> Hello @YubinXie Thanks for the issue! I did not managed to reproduce your issue with `torch==1.13.1`, and here is the snippet I used: > > ```python > from PIL import Image > import requests > import torch > > from transformers import AutoModel, AutoProcessor > > device = "cuda" if torch.cuda.is_available() else "cpu" > > model = AutoModel.from_pretrained("facebook/sam-vit-base").to(device) > processor = AutoProcessor.from_pretrained("facebook/sam-vit-base") > > img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" > raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") > input_points = [[[450, 600]]] # 2D location of a window in the image > > inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) > with torch.no_grad(): > outputs = model(**inputs) > ``` > > I can see that you are using `torch==1.5.x`. Note that `transformers` has a minimum required version of `1.9` for `torch`: https://github.com/huggingface/transformers/blob/main/setup.py#L180 - hence I have tried to run that script with `torch==1.9.1` and did not encountered the issue. I strongly recommend you to install a greater version of `torch` (i.e. use at least the version `1.9`). Could you try to update `torch` and let us know if you still face the issue? Hi @younesbelkada Thank you for your response. I updated my torch and now the model works! 
However, I got another error the the post process: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-abdc2d7068b8> in <module> 6 outputs = model(**inputs) 7 ----> 8 masks = processor.image_processor.post_process_masks( 9 outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu() 10 ) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/image_processing_sam.py in post_process_masks(self, masks, original_sizes, reshaped_input_sizes, mask_threshold, binarize, pad_size) 404 interpolated_mask = F.interpolate(masks[i], target_image_size, mode="bilinear", align_corners=False) 405 interpolated_mask = interpolated_mask[..., : reshaped_input_sizes[i][0], : reshaped_input_sizes[i][1]] --> 406 interpolated_mask = F.interpolate(interpolated_mask, original_size, mode="bilinear", align_corners=False) 407 if binarize: 408 interpolated_mask = interpolated_mask > mask_threshold ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/functional.py in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias) 3957 if antialias: 3958 return torch._C._nn._upsample_bilinear2d_aa(input, output_size, align_corners, scale_factors) -> 3959 return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors) 3960 if input.dim() == 5 and mode == "trilinear": 3961 assert align_corners is not None TypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType), but expected one of: * (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors) didn't match because some of the arguments have invalid types: (Tensor, list of [Tensor, Tensor], bool, NoneType) * (Tensor input, tuple of ints output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out) ``` The code is from hugging face SAM page. I wonder if it is code issue or, other package issue. <|||||>Hi @YubinXie Thanks for iterating, it seems that this is a duplicate of https://github.com/huggingface/transformers/issues/22904 Could you try to uninstall `transformers` and re-install it from source? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,928
closed
Update tiny models and a few fixes
# What does this PR do? - Update tiny models, including: - Sam - BigCode - (recent) GPTNeoXForSequenceClassification - Fix wrong condition introduced in my PR #22774 (it doesn't break things, but it will affect the creation of `pipeline_to_model_mapping` for new model types) - Fix import in `test_pipelines_mask_generation.py` 2 fixes need @ArthurZucker .
04-21-2023 19:35:35
04-21-2023 19:35:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>The result of `Check Tiny Models / Check tiny models (push)` could be ignored.
transformers
22,926
closed
add perf_train_gpu_one.mdx
See issue #17459 Good evening. I didn't translate technical terms and preferred to keep them in English, so I hope that's all right. I had some problems in previous pull requests; if it fails again, could you tell me how to resolve it? Good bye.
04-21-2023 16:04:02
04-21-2023 16:04:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22926). All of your documentation changes will be reflected on that endpoint.
transformers
22,925
closed
How to trace the mxmax/Chinese_Chat_T5_Base model with torch.jit.trace
ๆˆ‘ไฝฟ็”จไธ‹้ข็š„ไปฃ็ ่ฟ›่กŒๆจกๅž‹่ฝฌๆข tokenizer = AutoTokenizer.from_pretrained('./outputs/model_files/') model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/') device = torch.device("cpu") model.to(device) model.eval() tokenized_dict = tokenizer( ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], return_tensors="pt" ) input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'], torch.Tensor([[2]]).long()) traced_model = torch.jit.trace(model, input_tuple) traced_model.save("./model.pt") ไฝ†ๆ˜ฏๅพ—ๅˆฐ่ฟ™ๆ ทไธ€ๅ †้”™่ฏฏไฟกๆฏ D:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py:701: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]: Traceback (most recent call last): File "E:\Python\project\Chinese_Chat_T5_Base-main\convertModel.py", line 25, in <module> traced_model = torch.jit.trace(model, input_tuple) File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 759, in trace return trace_module( File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 976, in trace_module module._c._create_method_from_trace( RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput(loss=None, logits=tensor([[[-10.4197, 6.3242, 8.7392, ..., -10.0839, -7.8809, -8.4109]]], grad_fn=<UnsafeViewBackward0>), past_key_values=((tensor([[[[-9.3662e-02, -2.6494e-01, 2.7725e-01, 3.5019e-01, 5.3944e-01, -2.6313e-01, -5.9071e-01, 5.1579e-01, -5.2901e-01, -5.9420e-01, -9.2730e-02, 1.2436e-03, -8.6124e-01, -1.4801e-01, -6.9207e-01, ...... [ 2.7600e-02, -2.4005e-02, -7.1618e-02, ..., 1.9455e-01, 1.0591e-02, -8.1877e-02], [ 5.6630e-02, -2.8372e-03, 3.5540e-02, ..., 1.0443e-01, 3.7175e-02, -5.7037e-02], [-5.6965e-04, 1.0548e-04, 9.4504e-04, ..., -1.7588e-04, 8.6722e-04, -8.3949e-04]]], grad_fn=<MulBackward0>), encoder_hidden_states=None, encoder_attentions=None) :Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor]]
04-21-2023 15:53:17
04-21-2023 15:53:17
Hi @ling976, thanks for raising this issue! Unfortunately, I don't speak Chinese :/ , is it possible to share the issue description in English? Could you also follow the issue template and share information such that this can be reproduced, including: * The running environment: run `transformers-cli env` in the terminal and copy-paste the output * The line in the code the error is triggered on - is it the model save? * The checkpoint or architecture being run? <|||||>The transformers version is 4.22.1. I now need to convert the .bin model file into .pt format. I currently get this error message: File "D:\Program Files\Python310\lib\site-packages\torch\jit\_trace.py", line 976, in trace_module module._c._create_method_from_trace( RuntimeError: Tracer cannot infer type of Seq2SeqLMOutput(loss=None, logits=tensor([[[-8.0331, -0.6127, 1.7029, ..., -6.0205, -4.9355, -7.5521]]],<|||||>@ling976 Please try passing `torchscript=True` as an argument when loading the model, i.e. `model = AutoModelForSeq2SeqLM.from_pretrained('./outputs/model_files/', torchscript=True)` <|||||>After adding torchscript=True I got a new error message: D:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py:701: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]:<|||||>@ling976 [As mentioned above](https://github.com/huggingface/transformers/issues/22925#issuecomment-1518062046), could you please follow the issue template and provide the necessary information so that we can replicate the issue? <|||||>It works now; the earlier message was a warning rather than an error, I had misread it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,924
closed
add perf_train_gpu_one.mdx
See issue #17459 Good evening. I didn't translate technical terms and preferred to keep them in English. Good bye.
04-21-2023 15:45:14
04-21-2023 15:45:14
transformers
22,923
open
Need support for Sentence Similarity Pipeline
### Feature request HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines ### Motivation HuggingFace now has a lot of Sentence Similarity models, but the pipeline does not yet support this: https://huggingface.co/docs/transformers/main_classes/pipelines ### Your contribution I can write a PR, but might need some one else's help.
04-21-2023 14:27:18
04-21-2023 14:27:18
cc @Narsil <|||||>Hi @timxieICN , Thanks for the suggestion. In general, sentence-similarity like https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 are served by `SentenceTransformers` which is a library on top of `transformers` itself. https://huggingface.co/sentence-transformers Sentence transformers adds a few configuration specifically on how to do similarity with a given model as there's several ways to do it. From a user point of view it should be relatively easy to do this: ```python from sentence_transformers import SentenceTransformer, util model = SentenceTransformer( model_id ) embeddings1 = model.encode( inputs["source_sentence"], convert_to_tensor=True ) embeddings2 = model.encode(inputs["sentences"], convert_to_tensor=True) similarities = util.pytorch_cos_sim(embeddings1, embeddings2) ``` This is exactly the code that is actually running to calculate those on the hub currently: https://github.com/huggingface/api-inference-community/blob/main/docker_images/sentence_transformers/app/pipelines/sentence_similarity.py Adding this directly in `transformers` would basically mean incorporating `sentence-transformers` within `transformers` and I'm not sure it's something desired. Maybe @amyeroberts or another core maintainer can confirm/infirm this. Does this help ? <|||||>We definitely don't want a circular dependency like that! As the example you shared @Narsil is so simple, I think it's a good replacement for a pipeline. Let's leave this issue open and if there's a lot of interest or new use case we can consider other possible options.
transformers
22,922
closed
[CI] clap patch fusion test values
# What does this PR do? Fixes CI on clap cc @sgugger
04-21-2023 14:24:46
04-21-2023 14:24:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,921
closed
Bring back PartialState DeepSpeed
# What does this PR do? This PR brings back the DeepSpeed implementation. After thorough help and investigation with @pacman100 we've determined the cause of the test failures is an issue on the DeepSpeed side, and an issue will be opened to track this. As a result, to maintain tests passing this PR should not be merged until after it is completed Said issue: https://github.com/microsoft/DeepSpeed/issues/3341 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @pacman100
04-21-2023 14:05:41
04-21-2023 14:05:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello Zach, after further deep dive, I found that we need to use DeepSpeed utils for initializing distributed setup in Accelerate's Partial State as done in the above-linked PR. This should solve the issues with the DeepSpeed tests.<|||||>Thank you for the fix! Confirmed it works https://github.com/huggingface/transformers/actions/runs/4815347341/jobs/8573997198
transformers
22,920
closed
Small sam patch
# What does this PR do? Fixes #22904. It is backward compatible and prevents having to modify any of the notebooks we shared.
04-21-2023 13:38:26
04-21-2023 13:38:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22920). All of your documentation changes will be reflected on that endpoint.
transformers
22,919
closed
Fix: Seq2SeqTrainingArgs overriding to_dict for GenerationConfig json support
# What does this PR do? `Seq2SeqTrainingArguments` overrides the `to_dict()` method from `TrainingArguments`. This is a fix to #22831 (solution 2), solving an error that happened when saving a `Seq2SeqTrainingArguments` object with a `GenerationConfig` attribute to JSON. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #22831 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @gante
04-21-2023 13:16:49
04-21-2023 13:16:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just updated<|||||>Thanks for adding this @Natooz 💛
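To illustrate the idea behind the fix, here is a minimal sketch of serializing a `Seq2SeqTrainingArguments` object that carries a `GenerationConfig`. The helper below is illustrative only and is not the code merged in this PR.

```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments

def args_to_json_dict(args: Seq2SeqTrainingArguments) -> dict:
    """Turn the arguments into a JSON-serializable dict, converting any
    GenerationConfig attribute into a plain dict along the way."""
    d = args.to_dict()
    for key, value in d.items():
        if isinstance(value, GenerationConfig):
            d[key] = value.to_dict()
    return d

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_config=GenerationConfig(max_new_tokens=16),
)
print(args_to_json_dict(args)["generation_config"])
```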
transformers
22,918
closed
Add an attribute to disable custom kernels in deformable detr in order to make the model ONNX exportable
As per the title, and as reported in https://github.com/huggingface/transformers/issues/22330 and https://github.com/huggingface/optimum/pull/931. This option will allow us to patch the model on the fly during the export, avoiding the try/except logic that is not supported by the PyTorch ONNX export.
04-21-2023 12:17:06
04-21-2023 12:17:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>@fxmarty Following up on this, I agree with @sgugger's suggestion and think that a config argument would be a better alternative. <|||||>Thank you will update!<|||||>@amyeroberts Let me know if this is better!
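A small usage sketch of the switch this PR introduces; the attribute name `disable_custom_kernels` follows the PR description and is assumed here, and loading the checkpoint requires network access (plus `timm` for the backbone).

```python
from transformers import DeformableDetrConfig, DeformableDetrForObjectDetection

config = DeformableDetrConfig.from_pretrained(
    "SenseTime/deformable-detr", disable_custom_kernels=True
)
model = DeformableDetrForObjectDetection.from_pretrained(
    "SenseTime/deformable-detr", config=config
)
# With the custom CUDA kernel disabled, the attention falls back to the pure
# PyTorch path, which is what makes the ONNX export feasible for this model.
```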
transformers
22,917
closed
Place static llama variables for multigpu
# What does this PR do? When using accelerate, attention_mask and position_ids were being retransferred for every layer after the first device. This change transfers them once in advance. ## Who can review? @ArthurZucker @pacman100
04-21-2023 11:48:22
04-21-2023 11:48:22
Not really in favor of this; if the problem is with the way Accelerate handles a for loop, it should be solved in Accelerate. cc @sgugger <|||||>I am imagining Accelerate might be able to store a cache of variables by object id so as to not repeatedly transfer the same one. When to empty such a cache is unclear to me.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This is already done by Accelerate behind the scenes, so there is no need for this PR.<|||||>@sgugger Accelerate moves the weights prior to the model forward function. Since `attention_mask` and `position_ids` (unlike `hidden_states`) are never returned back from a forward function, it moves them again and again for every layer.<|||||>I'm not sure the actual time you lose for that is worth changing the code in Transformers, however.<|||||>I'm on a system with iommu=soft where data transfer is very slow. I wanted to provide numbers for the speed change on my system, which was significant enough that I opened this PR, before closing it out. However, I am busy for a day or two. Regardless, it is clear that you would prefer a solution be found in Accelerate rather than Transformers. I opened this after seeing the various PP commits adding similar code, although they are addressing a more serious issue. I'll come back to this to add my numbers, or feel free to close it out for now.<|||||>Running llama 65b with [software iommu](https://github.com/pytorch/pytorch/issues/1637#issuecomment-338268158), this change drops my inference time from 25.11 s/token to 19.45 s/token, a 22.5% reduction in the inference delay. Thoughts?<|||||>I discussed it more internally with other core maintainers and we decided this falls into hardware-specific optimizations that we don't accept, to avoid bloating the code of the models. You can still use your changes locally and share them with others via our code-on-the-Hub API though.<|||||>Thanks for your consideration and ideas of other approaches.
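The pattern under discussion can be sketched outside of the model code. A rough illustration of the idea (not the PR's actual implementation): tensors that every decoder layer needs are transferred once per device up front, then reused inside the layer loop.

```python
import torch

def place_once_per_device(tensor: torch.Tensor, devices):
    """Return {device: tensor_on_device} so each device receives the tensor only once."""
    return {torch.device(d): tensor.to(d) for d in devices}

# Assumed device layout for illustration; falls back to CPU when no GPU is present.
devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
attention_mask = torch.ones(1, 128, dtype=torch.long)
mask_per_device = place_once_per_device(attention_mask, devices)

# Inside the layer loop, each layer would then look up the copy already on its device, e.g.:
# layer_mask = mask_per_device[next(layer.parameters()).device]
```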
transformers
22,916
closed
Add inputs_embeds functionality when generating with GPT-Neox
This PR extends https://github.com/huggingface/transformers/pull/21405 and #21889 by @gante to GPT-NeoX models (which also include the recent Pythia suite models), making them accept inputs_embeds when generating. ## Who can review? @gante @sgugger
04-21-2023 10:32:06
04-21-2023 10:32:06
_The documentation is not available anymore as the PR was closed or merged._
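A quick usage sketch of what this change enables; the checkpoint is chosen for illustration and any GPT-NeoX/Pythia model should behave the same way.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# Turn the token ids into embeddings and start generation from them directly.
inputs_embeds = model.get_input_embeddings()(inputs.input_ids)
generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Note that when only `inputs_embeds` are passed to a decoder-only model, the returned sequence contains just the newly generated tokens.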
transformers
22,915
closed
Make sam ONNX exportable
As per the title, it would be great to have this PR in the next release so that we can support the ONNX export (see https://github.com/huggingface/optimum/pull/995). This piece is the only blocking one.
04-21-2023 08:41:56
04-21-2023 08:41:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker The PR is here: https://github.com/huggingface/optimum/pull/995
transformers
22,914
closed
beam_sample throws a nan error on long generations
### System Info - `transformers` version: 4.29.0.dev0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.4 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It seems that `beam_sample` throws a NaN exception when generating long sequences. Specifically the call `next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)`. Example generate call that causes the bug: ``` output_sequences = model.generate( input_ids=encoded_prompt, max_length=512 + len(encoded_prompt[0]), temperature=0.7, num_return_sequences=1, num_beams=2, do_sample=True, ) ``` Reliably throws a NaN on my system and @diegomontoya 's system. In my testing this occurs when the requested number of new tokens is roughly >=256. In the example above I use 512 just to be sure. Based on the debugging I've done so far, what's happening is `beam_scores` increases exponentially with each iteration of the inner beam search loop. It does this until it reaches a very large negative number, causing `next_token_scores` to contain all `-inf`, which causes `probs` to be all `nan` and then `multinomial` throws. As for why this occurs, a rough summary of the inner loop elucidates: ``` while next_token_scores = ... next_token_scores = next_token_scores + beam_scores next_token_scores = logits_warper(..., next_token_scores) beam_scores = beam_scorer.process(..., beam_scores, next_token_scores) ``` Specifically, beam_scores feeds back into itself with every iteration. If the inner loop was additive only, this would be fine, and `beam_scores` would increase linearly with length. But this is not the case. `logits_warper` makes the loop non-additive. In the example above it behaves as approximately multiplying `next_token_scores` by 1.5. Hence `beam_scores` goes exponential and the function eventually throws. I don't know enough about how `beam_sample` is meant to function to analyze further. It does seem odd to me, though, that the sampling is dependent on the current beam score. Since the beam score is a scalar value, it affects the probabilities of all tokens equally, so ... it shouldn't have any effect at all? So why apply it to the sampling logic? It seems more reasonable to me, and would indeed fix this bug, if it were added after sampling and before handing the scores off to the BeamScorer for processing. ### Expected behavior `generate` shouldn't throw a `nan` error under reasonable circumstances.
04-21-2023 08:28:49
04-21-2023 08:28:49
Hey @fpgaminer ๐Ÿ‘‹ My first recommendation would be to use "normal" `sample`, perhaps with a slightly lower temperature. If you think about it, `beam_sample` is a sample-based strategy that greedily picks the best scores among the drawn sequences, which is similar to `sample` with a lower temperature (which also favors high-scoring tokens). `sample` is also faster (no beam-related operations), and subject to much more maintenance :) If you still want to use `beam_sample`, my recommendation would be to add the `remove_invalid_values` flag ([docs](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationConfig.remove_invalid_values)).<|||||>Hello @gante, Thanks for the response. I have no intention of using beam sampling myself. I'm bubbling up a bug report by @diegomontoya from my GPTQ-triton repo, that turned out to just be a bug in `transformers` itself. It was a curious enough bug that I got nerd-sniped by it... > If you still want to use beam_sample, my recommendation would be to add the remove_invalid_values flag ([docs](https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationConfig.remove_invalid_values)). I don't think that would work. The bug results from `beam_scores` exploding, which drives all the scores down to `-inf`. Invalid tokens are removed in the `logits_processor` pass, before `beam_scores` is added. Even if it were applied after, it would just set _all_ tokens to `max` which I think would cause softmax->multinomial to just throw anyway. ------ I've looked at the code more, and read up on beam search more. I think my initial take is correct. I see no reason to feed the beam_scores to the logit processors. It's a scalar value added to all the logits/probs, so what effect could it possibly have? Temperature, for example, is completely unaffected as proven like so: ``` Suppose we have a vector `x` Softmax is `e**x / sum(e**x)` Suppose we add a scalar `b`: `x + b` Softmax is now: `e**(x + b) / sum(e**(x + b))` Exponential law: `e**x * e**b / sum(e**x * e**b)` Simplify: `e**x * e**b / (sum(e**x) * e**b)` Simplify: `e**x / sum(e**x)` Q.E.D. ``` It's possible that `b`, aka the beam score, has an effect on other logit processors, but I can't fathom what effect one would _want_ it to have on things like top p, top k, typical, etc. I'd have to go through each in more detail to have a stronger opinion here. It just feels wrong, since I think all those logit processors were introduced in the context of greedy sampling. They weren't designed to take a global scalar like beam score into account. So I argue that `beam_sample` should be modified to _not_ include the `beam_scores` when calling `logits_warper`, and when doing multinomial sampling. It should be added after the tokens have been sampled. ------- I also think there is other oddness to the way `beam_sample` samples. 
Consider the simplified forms of `sample` vs `beam_sample`: sample: ``` next_token_logits = outputs.logits[:, -1, :] next_token_scores = logits_processor(input_ids, next_token_logits) next_token_scores = logits_warper(input_ids, next_token_scores) probs = nn.functional.softmax(next_token_scores, dim=-1) next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) ``` beam_sample: ``` next_token_logits = outputs.logits[:, -1, :] next_token_scores = log_softmax(next_token_logits, dim=-1) next_token_scores_processed = logits_processor(input_ids, next_token_scores) next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) next_token_scores = logits_warper(input_ids, next_token_scores) probs = nn.functional.softmax(next_token_scores, dim=-1) next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) ... beam search stuff ... ``` Why does `beam_sample` apply a `log_softmax` to the logits before feeding them to `logits_processor` when the sample method doesn't? That seems odd, especially when all the logit processors are expecting, well, logits, not the log softmax of logits. The same goes for `logits_warper`, which also applies a sequence of LogitProcessors. They aren't likely to be expecting log softmaxed values. And then `softmax` gets applied afterwards to values in the log softmax domain... very confusing. ---- So I propose for beam_sample (simplified/pseudo): ``` next_token_logits = outputs.logits[:, -1, :] next_token_scores = logits_processor(input_ids, next_token_logits) next_token_scores = logits_warper(input_ids, next_token_scores) probs = nn.functional.softmax(next_token_scores, dim=-1) next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) ... gather tokens, scores ... ... add beam_scores to respective scores ... ... beam processing ... ``` --- > If you think about it, beam_sample is a sample-based strategy that greedily picks the best scores among the drawn sequences, which is similar to sample with a lower temperature (which also favors high-scoring tokens). sample is also faster (no beam-related operations), and subject to much more maintenance :) My quick take: sure, maybe. But in theory beam search and beam sampling still provide potential value over low temp sampling. They can explore the landscape more thoroughly and potentially find more globally optimal sequences that a greedy sampling method usually won't. I dunno. I'm personally in the "better logit processors" and "better models" camp than futzing with beam search. But since HF includes beam sampling, might as well make it work as well as possible?<|||||>@gante I am not qualified to comment on the internal code itself so I will only report from a user level perspective: 1. Adding `remove_invalid_values=True` does not resolve the issue. I am still getting the exact same nan/inf exceptions with num_beams = 2 on input+output (expected) total token values > 256. I added it to both generate_config and directly to generate() method and it still threw exceptions. Am I using it correctly? ```probability tensor contains either `inf`, `nan` or element < 0``` 2. Having read the naive concepts of beam search and also huggingface's own interpretations of the beam search, I don't understand why user have to care about a `remove_invalid_values` toggle. Isn't it implied that generate wrapper, which most user and external libs use, should auto remove and bypass any invalid values during gen stages? 
This add another chicken and egg problem, if we don't add `remove_invalid_values`, only a runtime generate will find out that inf/nan tokens are generated and then we apply a `remove_invalid_values` pass which negates any performance. As result, as an end-user, I will always set `remove_invalid_values` with `num_beams` >1, but if the both options are symbiotic, they should be done internally by the library and not exposed to user. 3. I am using beam search because I believe it may resolve an issue that is outlined by the beam search principle. I can lower the the temperature but that requires that: * I can detect my result from higher temperature is wrong, very difficult for my problem set. * Even if I can detect error due to higher temp, I need re-run pass in lower temp which is basically beams in operation. * Not possible to predetermine whether lower/higher temp result in better answer. In my test case use of beam-search. I am relying on the idea that `num_beams=2` select two paths, and only until the end, compare the prob score of the result and give me the best one. <|||||>@fpgaminer @diegomontoya Let me split my comment in three: `remove_invalid_values`, how beam sample is implemented, and a suggestion based on @diegomontoya 3rd point in the last comment :) ___________________________________________________________________________________________ `remove_invalid_values` was created to avoid errors with extreme numbers, as a last resort. When it needs to be used, it means that there is something unstable in the process. I was double-checking it and it is missing the `-inf` case, which is probably why it didn't immediately solve your case (I'll open a PR). However, it should still be avoided, and the cases where you actually need it are very very uncommon. > Isn't it implied that generate wrapper, which most user and external libs use, should auto remove and bypass any invalid values during gen stages? Definitely not. Our guiding principles for building blocks like `.generate()`, sorted by priority, are 1. keep retrocompatibility (unless it is to fix bugs) and 2. build a default behavior that works in most cases and minimizes black-box behavior. Having `remove_invalid_values` on by default would go against 2 -- if there is something wrong in the generation strategy, we'd rather show it up to the user. ___________________________________________________________________________________________ The same discussion and arguments you wrote about `beam_sample` were also written in the past, by myself included :) (a few examples: [1](https://github.com/huggingface/transformers/pull/5420#discussion_r449779867) [2](https://github.com/huggingface/transformers/pull/21341#discussion_r1089223478)). TL;DR: I agree with your point of view, but a) `beam_sample` is not an official implementation so the order of operations is not right or wrong, it is a matter of taste of its creator b) because of the principles I wrote above, ensuring retrocompatibility > individual opinion. Our codebase is fully open, so feel free to monkey patch on your end any different perspective ๐Ÿค— And my apologies for the nerd snipe, beam methods are indeed a strong magnet! __________________________________________________________________________________________ @diegomontoya if beam sample keeps failing after I add the `-inf` case and monkey patching is not an option, try the following: 1. Use `sample` 2. Set `num_return_sequences` to an integer, which will make `generate` return these many sequences per input 3. 
Set `output_scores` and `return_dict_in_generate` to `True`, so you have access to the scores 4. Pick the output with the highest score ([this function may help](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores)) This is essentially a poor man's version of beam sample. While beam sample greedily optimizes the score in the intermediary steps, this will retain full randomness. ___________________________________________________________________________________________ I hope this (long) comment helps understanding why we make certain decisions, even if you don't agree with them :) <|||||>@gante Thank you. Got much more info than I had hoped in return and not only did it clarify it for me but your poor-man's beam really opened up my mind about how I should properly use and approach my future usage of generate as a whole. <|||||>btw, the error you've seen is very likely related to this one: https://github.com/huggingface/transformers/issues/22979 TL;DR -- pytorch's sampling function is buggy atm, being able to pick tokens with 0 probability ๐Ÿ‘€ <|||||>Just adding that it could be CUDA, bitsandbytes and pytorch related. The same error happens for me as well on `torch==1.13.1` with model call: `tokens = model.generate(**inputs, max_new_tokens=500, do_sample=True, temperature=0.9, streamer=streamer)` This call does not throw the error, but returns gibberish: `tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, num_beams=1, temperature=0.9, streamer=streamer, remove_invalid_values=True)` returns for example: `ovรกBit}")VAjem ubuntu็ฑณ alwaysicago connectingselection Rewrite perceMillBLoll Forschavano economic pygindi Pent รถss fs file` For me the issue happens on my multi gpu ubuntu 22.04 system with CUDA 12.0 (python detects 11.8 interestingly). It does not happen on my single gpu ubuntu 20.04 system with CUDA 11.6. Also, this only happens when I load the model in 8-bit with `bitsandbytes`. Loading the model without `load_in_8bit=True` is very slow (5-10 seconds per token), but returns text that makes sense and does not throw any error. Further testing shows that after downgrading from CUDA 11.8 to CUDA 11.6, I no longer receive this error when using `load_in_8bit=True` and `tokens = model.generate(**inputs, max_new_tokens=25, do_sample=True, temperature=0.9, streamer=streamer)`. However, I still get gibberish results: `ั‚ะพะบ hastICEyk char sunnyๅฐ‘ hardwareington chi GraphSecondsesserๅผ• conser conformygieneOriuvimplughtub`. The winning combo for 'no error and words that make sense' seems to be either: - CUDA 11.6, `load_in_8bit=True` and a single GPU system. - or CUDA 11.6, `load_in_8bit=False` and a multi GPU system. **Update: ** it's not pytorch related, happens for both 2.0.1 and 1.13.1. See https://github.com/huggingface/transformers/issues/23989<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
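To make the "poor man's beam sample" recipe above concrete, here is a self-contained sketch: draw several samples, score each candidate with `compute_transition_scores`, and keep the best one. The model and generation settings are illustrative assumptions, and in real use the scores of padded positions after an early EOS should be masked out before summing.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=32,
    num_return_sequences=4,          # several independent samples per prompt
    output_scores=True,
    return_dict_in_generate=True,
    pad_token_id=tokenizer.eos_token_id,
)

# Per-step log-probabilities of the tokens that were actually sampled.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
sequence_scores = transition_scores.sum(dim=1)   # one score per candidate
best = outputs.sequences[sequence_scores.argmax()]
print(tokenizer.decode(best, skip_special_tokens=True))
```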
transformers
22,913
closed
Fix counting in Slack report for some jobs
# What does this PR do? Fix counting in the Slack report for some jobs. ### Context For the additional jobs (i.e. not model testing jobs), the number of failed tests was being summed across all machine types (single/multi-GPU). This produced strange results, e.g. the single-GPU DeepSpeed CI had only 1 failure but 86 were shown in the report, see: <img width="812" alt="image" src="https://user-images.githubusercontent.com/2521628/233582320-81c3b61d-add4-4ad5-b007-2522ad0f44b3.png">
04-21-2023 08:16:01
04-21-2023 08:16:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,912
closed
Top 100
This PR celebrates the upcoming 100k stars for `transformers` by highlighting 100 open-source repositories that use or have integrated `transformers` in their projects. This list should not be limited to 100 (which we use as a mirror to the 100k stars), so we're looking forward to having libraries that integrate `transformers` open PRs against this document.
04-21-2023 08:07:47
04-21-2023 08:07:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,911
closed
Skip a failing test on main for now
# What does this PR do? On CircleCI, we have a failure on `main`: ``` FAILED tests/models/roberta/test_modeling_roberta.py::RobertaModelTest::test_assisted_greedy_search_matches_greedy_search ``` (BTW, it works on daily CI GPU runners)
04-21-2023 08:05:20
04-21-2023 08:05:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging, as it just skips the failing test.
transformers
22,910
closed
Expose AutoModelForMaskGeneration
As per the title, with this PR `from transformers import AutoModelForMaskGeneration` works. An alternative could be to remove `AutoModelForMaskGeneration` (as `AutoModel` already does the job), but currently the Hub metadata for SAM uses `AutoModelForMaskGeneration` and not `AutoModel`: https://huggingface.co/datasets/huggingface/transformers-metadata/blob/main/pipeline_tags.json#L576
04-21-2023 07:03:51
04-21-2023 07:03:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>Either way is fine for me - it's just to have consistency between the transformers Hub metadata and transformers. An alternative fix would be to just change the metadata and remove `AutoModelForMaskGeneration`.<|||||>So which one do you want, @ArthurZucker? I'm fine either way.<|||||>Let's go with `AutoModelForMaskGeneration` 😉
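With the class exposed at the top level, usage is straightforward; a quick sketch with the public SAM checkpoint (chosen for illustration):

```python
from transformers import AutoModelForMaskGeneration, AutoProcessor

checkpoint = "facebook/sam-vit-base"
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForMaskGeneration.from_pretrained(checkpoint)  # resolves to SamModel
print(type(model).__name__)
```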
transformers
22,909
closed
Moved labels to enable parallelism pipeline in Luke model
# What does this PR do? As suggested in [#22561](https://github.com/huggingface/transformers/issues/22561), this moves the labels to the same device as the logits for the Luke model. @sgugger, could you please review this PR? There is a mistake in [#22907](https://github.com/huggingface/transformers/pull/22907); I have made the necessary changes.
04-21-2023 06:42:51
04-21-2023 06:42:51
_The documentation is not available anymore as the PR was closed or merged._
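The change itself is a one-line pattern that several of these device-placement PRs apply; a minimal illustration (not the actual Luke code) of why it matters for pipeline-parallel setups:

```python
import torch

logits = torch.randn(2, 5)                 # stand-in for model output, possibly on another GPU
labels = torch.tensor([1, 3])              # may live on a different device than the logits
labels = labels.to(logits.device)          # the fix: move the labels next to the logits
loss = torch.nn.functional.cross_entropy(logits, labels)
print(loss)
```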
transformers
22,908
closed
added GPTNeoForTokenClassification
# What does this PR do? It adds the class GPTNeoForTokenClassification, which allows using GPT Neo models for token classification tasks. The implementation follows the one for other models (such as GPT2) closely and simply adds a linear layer after the hidden states. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada
04-21-2023 06:16:54
04-21-2023 06:16:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! Could you make sure the CI tests are green? Can review then! <|||||>@ArthurZucker Sure. I'm getting the hang of it. Now, the only failing tests are connected to flax and seem unrelated to this pull request.<|||||>If the flax errors are not due to the PR, this is ready to be reviewed, @ArthurZucker and @younesbelkada :-)<|||||>I just checked the logs for the remaining errors one more time. The errors are related to the import of the optax library, where jax.Array is used in a type. Apparently there is no name "Array" in the top-level namespace of the jax module. I cannot see how this could be related to my PR.<|||||>The jax version used in the examples_flax test is 0.3.6: Collecting jax!=0.3.2,<=0.3.6,>=0.2.8 (from transformers==4.28.0.dev0) Using cached jax-0.3.6-py3-none-any.whl This version clearly has no Array class. I am unsure why such an old version should be used?<|||||>Figured out that optax <= 0.1.4 is needed. And found out that upstream/main has that change already 👍 Now everything should be cleared for review.<|||||>Definitely ready for review, @ArthurZucker and @younesbelkada :-) <|||||>Cool! Reviewing now<|||||>All done and ready to be merged, @ArthurZucker and @younesbelkada 👍 <|||||>I implemented the same change as for GPTNeoXForTokenClassification, i.e., I removed the hasattr etc. and just use config.classifier_dropout directly.<|||||>@sgugger Ready to merge when the checks complete. Thanks for the fast action 👍 ... and more to come in the next weeks!
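Once merged, the new head can be used like any other token-classification model; a small usage sketch (the model id and label count are illustrative assumptions):

```python
from transformers import AutoTokenizer, GPTNeoForTokenClassification

model_id = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoForTokenClassification.from_pretrained(model_id, num_labels=3)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
logits = model(**inputs).logits            # shape: (batch, sequence_length, num_labels)
predictions = logits.argmax(dim=-1)        # one label id per token (the head is untrained here)
print(predictions)
```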
transformers
22,907
closed
Moved labels to enable parallelism pipeline in Luke model
# What does this PR do? Fixes #22561 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Please let me know if there's anything I need to correct! Thanks @sgugger
04-21-2023 03:09:10
04-21-2023 03:09:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22907). All of your documentation changes will be reflected on that endpoint.<|||||>@katiele47 - thanks for the PR! Apologies, I reviewed another PR implementing the changes for Luke quickly this morning without properly reading the issue and realising this PR was also open. As PR #22909 notes, there was just a small change that needed to happen on L2232, updating `logits` -> `reshaped_logits`. Otherwise the PR all looked good and, after updating, would have been merged :) I'm sorry for my mistake - I hope this doesn't discourage you from contributing and we welcome any PRs that you'd like to open in the future. @sushmanthreddy Anyone in the community is able to review PRs. If you spot something in the code that needs updating, could you comment directly on the PR instead of opening another one? <|||||>@amyeroberts I don't see where I have gone wrong. I am new to open source and actually worked on this issue before @katiele47 did, but I just hadn't mentioned a reviewer to review it. Here is the proof of that: [link](https://github.com/huggingface/transformers/pull/22900/files). I had made the same changes needed, but due to branch conflicts I couldn't keep the proper PR. Anyway, sorry if the mistake is mine; I hope you understand.<|||||>Hi @amyeroberts, thanks for spotting the small change, and no worries! Now that it has been fixed by @sushmanthreddy's PR #22909, should I close this PR? <|||||>@katiele47 Yes, this PR can now be closed. Thanks again for opening - we look forward to future contributions!
transformers
22,906
closed
Fix a minor bug in CI slack report
# What does this PR do? #22798 added code to show the difference between 2 CI runs. However, the previous CI run(s) may not yet have produced the artifact `test_failure_tables`, and we got `KeyError: 'model_failures_report.txt'` in the last run. This PR adds a check for that.
04-21-2023 02:28:15
04-21-2023 02:28:15
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,905
closed
JukeBox Model Parallelism by moving labels to same devices for logits
# What does this PR do? This is a draft PR that moves labels to same devices as logits for accomplishing model parallelism for JukeBox model. Since `src/transformers/models/jukebox/modeling_jukebox.py` does not contain conditional statements where label is not None, I would like to ask you for helps how I can implement features of moving labels to same device as logits for accomplishing model parallelism as mentioned in Issue 22561. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22561 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-21-2023 01:20:24
04-21-2023 01:20:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22905). All of your documentation changes will be reflected on that endpoint.<|||||>It seems there is actually nothing to do for this model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,904
closed
SAM: Notebook example not working
### System Info - `transformers` version: 4.29.0.dev0 - Platform: macOS-13.2-arm64-arm-64bit - Python version: 3.10.6 - Huggingface_hub version: 0.13.4 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 1.13.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO Dependencies - torch = 1.13.0 - numpy = 1.23.4 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Pull [SAM Notebook example](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) 2. Run notebook up until ``` masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()) ``` 3. Get error ``` TypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType), but expected one of: * (Tensor input, tuple of SymInts output_size, bool align_corners, tuple of floats scale_factors) didn't match because some of the arguments have invalid types: (Tensor, !list!, bool, !NoneType!) * (Tensor input, tuple of SymInts output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out) ``` ### Expected behavior original_sizes/output_sizes to be of the expected type, is this a dependency issue?
04-20-2023 22:54:51
04-20-2023 22:54:51
I have similar issue when i run ``` img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # 2D location of a window in the image inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) outputs = model(**inputs) ``` ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-6-abdc2d7068b8> in <module> 4 5 inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) ----> 6 outputs = model(**inputs) 7 8 masks = processor.image_processor.post_process_masks( ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, pixel_values, input_points, input_labels, input_boxes, input_masks, image_embeddings, multimask_output, output_attentions, output_hidden_states, return_dict, **kwargs) 1331 ) 1332 -> 1333 sparse_embeddings, dense_embeddings = self.prompt_encoder( 1334 input_points=input_points, 1335 input_labels=input_labels, ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in forward(self, input_points, input_labels, input_boxes, input_masks) 669 if input_labels is None: 670 raise ValueError("If points are provided, labels must also be provided.") --> 671 point_embeddings = self._embed_points(input_points, input_labels, pad=(input_boxes is None)) 672 sparse_embeddings = torch.empty((batch_size, point_batch_size, 0, self.hidden_size), device=target_device) 673 sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=2) ~/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/sam/modeling_sam.py in _embed_points(self, points, labels, pad) 619 padding_point = torch.zeros(target_point_shape, device=points.device) 620 padding_label = -torch.ones(target_labels_shape, device=labels.device) --> 621 points = torch.cat([points, padding_point], dim=2) 622 labels = torch.cat([labels, padding_label], dim=2) 623 input_shape = (self.input_image_size, self.input_image_size) RuntimeError: Expected object of scalar type double but got scalar type float for sequence element 1. ``` ``` - `transformers` version: 4.29.0.dev0 - Platform: Linux-3.10.0-957.12.2.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```<|||||>cc @younesbelkada @ArthurZucker <|||||>Thanks for reporting! Will fix this asap<|||||>Same here. 
TypeError: upsample_bilinear2d() received an invalid combination of arguments - got (Tensor, list, bool, NoneType), but expected one of: * (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors) didn't match because some of the arguments have invalid types: (Tensor, !list!, bool, !NoneType!) * (Tensor input, tuple of ints output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out)<|||||>Hi @antoinemacia @xiao2mo I can confirm the colab script now works as expected if you re-install the library from source! @YubinXie could you open another ticket for your issue to keep track of it? Have a great weekend everyone!<|||||>@younesbelkada @ArthurZucker it's working on my end, thanks for looking at it so promptly 😄 Good weekend, y'all!
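Before the fix landed, a common workaround was to pass plain Python lists instead of tensors for the size arguments. Continuing from the variables defined in the issue's snippet above (so this is a fragment, not a standalone script), that would look roughly like:

```python
# Hypothetical workaround: convert the size tensors to lists of ints so that
# torch.nn.functional.interpolate receives a valid output_size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu().tolist(),
    inputs["reshaped_input_sizes"].cpu().tolist(),
)
```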
transformers
22,903
closed
Pix2Struct: unable to overfit on a single training sample
### System Info - `transformers` version: 4.28.0 - Platform: Linux-5.4.0-1037-aws-x86_64-with-glibc2.27 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here's the minimal training loop: ``` import requests from PIL import Image from transformers import Pix2StructForConditionalGeneration, AutoProcessor from torch.optim import AdamW import torch torch.manual_seed(42) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base") processor = AutoProcessor.from_pretrained("google/pix2struct-base") dummy_target = "The model should overfit this sentence" image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(image_url, stream=True).raw) encoded_image = processor(images=image, return_tensors="pt") encoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20) optimizer = AdamW(model.parameters(), lr=1e-4) model.train() device = 'cuda' if torch.cuda.is_available() else 'cpu' model.to(device) flattened_patches=encoded_image.flattened_patches.to(device) attention_mask=encoded_image.attention_mask.to(device) labels=encoded_text.input_ids.to(device) for i in range(1000): outputs = model( flattened_patches=flattened_patches, attention_mask=attention_mask, labels=labels ) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() if i % 50 == 0: model.eval() prediction = model.generate( flattened_patches=flattened_patches, attention_mask=attention_mask) print(f'step: {i} train_loss: {loss.item()} prediction: {processor.batch_decode(prediction)}') model.train() ``` Here's the output I got: ``` step: 0 train_loss: 8.259493827819824 prediction: ['<pad> <img_src=cropped-img-20180924'] step: 50 train_loss: 1.9695181846618652 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 100 train_loss: 2.071323871612549 prediction: ['<pad> <The model should overfit this sentence should overfit this sentence should overfit this sentence should'] step: 150 train_loss: 2.0366554260253906 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 200 train_loss: 1.8225889205932617 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 250 train_loss: 1.6568734645843506 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 300 train_loss: 1.6770282983779907 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence'] step: 350 train_loss: 1.688515067100525 prediction: ['<pad> The model should overfit this sentence sentence overfit this sentence sentence overfit this 
sentence sentence over'] step: 400 train_loss: 1.6118296384811401 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 450 train_loss: 1.6204414367675781 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence should overfit this sentence should'] step: 500 train_loss: 1.59645676612854 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 550 train_loss: 1.5818239450454712 prediction: ['<pad> The model should overfit this sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence'] step: 600 train_loss: 1.5775129795074463 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 650 train_loss: 1.561257243156433 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 700 train_loss: 1.5319150686264038 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 750 train_loss: 1.646193504333496 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 800 train_loss: 1.533736228942871 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 850 train_loss: 1.6203268766403198 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over'] step: 900 train_loss: 1.5132172107696533 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence'] step: 950 train_loss: 1.491452693939209 prediction: ['<pad> The model should overfit this sentence The model should overfit this sentence The model should overfit'] ``` ### Expected behavior I've been trying to fine-tune Pix2Struct starting from the base pretrained model, and have been unable to do so. The model collapses consistently and fails to overfit on that single training sample. I noticed a comment about this on the fine-tuning notebook: https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb > Let's train the model! Run the simply the cell below for training the model. We have observed that finding the best hyper-parameters was quite challenging and required a lot of trials and errors, as the model can easily enter in "collapse-model" (always predicting the same output, no matter the input) if the HP are not chosen correctly. In this example, we found out that using AdamW optimizer with lr=1e-5 seemed to be the best approach. To dig a little deeper, I've been trying to train on a single training sample with a minimal training loop, and see whether the model was able to correctly learn that single training sample. It seems that it's not able to overfit on a single training sample after 1000 training steps. Unless I missed something in my training loop, that seems like a weird behavior and might be a symptom of a bug somewhere?
04-20-2023 22:29:53
04-20-2023 22:29:53
Hi thanks for the detailed report, indeed this seems weird. I will have a look at it once I am back on Tuesday. cc also @NielsRogge and @nbroad1881 for visibility as they have been also working on fine-tuning Pix2struct<|||||>Thank you! Let me know if there's anything I can help with :) <|||||>Yeah I had a hard time fine-tuning Pix2Struct myself. However looking at your code snippet, when you encode the target sequence: ``` from transformers import Pix2StructProcessor processor = Pix2StructProcessor.from_pretrained("google/pix2struct-base") dummy_target = "The model should overfit this sentence" encoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20) ``` then when decoding back to text: ``` processor.decode(encoded_text.input_ids.squeeze()) ``` prints: ``` 'The model should overfit this sentence' ``` So this target sequence doesn't contain an EOS (end-of-sequence) token nor a BOS (beginning-of-sequence) token. Hence, when generating text using the `generate()` method, it will just continue predicting tokens, at this method only stops generating text when the model predicts the EOS token. As the model is trained to not produce the EOS token, it simply will keep on generating text (hence you're getting '<pad> The model should overfit this sentence should overfit this sentence' etc.). Also it looks like the first token is `<pad>` since the model's BOS token is equal to the pad token, so you'll need to add `skip_special_tokens=True` to the `batch_decode` method. So cc @younesbelkada we'll need to check that, in case the user sets the max length to 20, then the tokenizer should set the EOS token as last token appropriately. It looks like the processor's tokenizer has this set: ``` >>> processor.tokenizer.eos_token '</s>' ``` <|||||>Oh yeah, you're right! Completely missed it, and it does solve the generation issue after 50 steps basically. ``` step: 0 train_loss: 8.3875150680542 prediction: ['<pad> <img_alt=Tokyo is the cure for everything. 
img_src='] step: 50 train_loss: 2.020235300064087 prediction: ['<pad> The model should overfit this sentence</s>'] step: 100 train_loss: 2.0110490322113037 prediction: ['<pad> The model should overfit this sentence</s>'] step: 150 train_loss: 1.728605031967163 prediction: ['<pad> The model should overfit this sentence</s>'] step: 200 train_loss: 1.678179144859314 prediction: ['<pad> The model should overfit this sentence</s>'] step: 250 train_loss: 1.6586235761642456 prediction: ['<pad> The model should overfit this sentence</s>'] step: 300 train_loss: 1.6816842555999756 prediction: ['<pad> The model should overfit this sentence</s>'] step: 350 train_loss: 1.6198171377182007 prediction: ['<pad> The model should overfit this sentence</s>'] step: 400 train_loss: 1.6187334060668945 prediction: ['<pad> The model should overfit this sentence</s>'] step: 450 train_loss: 1.6846977472305298 prediction: ['<pad> The model should overfit this sentence</s>'] step: 500 train_loss: 1.6047543287277222 prediction: ['<pad> The model should overfit this sentence</s>'] step: 550 train_loss: 1.585425853729248 prediction: ['<pad> The model should overfit this sentence</s>'] step: 600 train_loss: 1.5750995874404907 prediction: ['<pad> The model should overfit this sentence</s>'] step: 650 train_loss: 1.5516695976257324 prediction: ['<pad> The model should overfit this sentence</s>'] step: 700 train_loss: 1.5205081701278687 prediction: ['<pad> The model should overfit this sentence</s>'] step: 750 train_loss: 1.600045919418335 prediction: ['<pad> The model should overfit this sentence</s>'] step: 800 train_loss: 1.5451548099517822 prediction: ['<pad> The model should overfit this sentence</s>'] step: 850 train_loss: 1.602522373199463 prediction: ['<pad> The model should overfit this sentence</s>'] ``` I think what remains weird is that the loss doesn't decrease below 1.5 even with that single training sample. Anecdotally, I've been trying to fine-tune for some information extraction tasks, and I haven't been able to make it properly learn anything (I did check that there's an eos token in my labels when fine-tuning :) ) <|||||>Indeed, the loss should go down to 0. I notice 2 things here: * I see label smoothing is used which is pretty uncommon: https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1557 According to PyTorch's [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html): "The targets become a mixture of the original ground truth and a uniform distribution" Might explain this behaviour. @younesbelkada I assume you included this to comply to the original implementation? * [this line](https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1558) should be removed: it's the user's responsability to set the labels to -100 for padding tokens. 
To comply to the design of any other model in the library, this line should not be there<|||||>Good catch, just tried without the label smoothing and the losses now look much more normal: ``` step: 0 train_loss: 7.458827972412109 prediction: ['<pad> <img_alt=Towards a New Vision: A Vision for a New World Order'] step: 50 train_loss: 0.12852047383785248 prediction: ['<pad> The model should overfit this sentence</s>'] step: 100 train_loss: 0.010209576226770878 prediction: ['<pad> The Model should overfit this sentence</s>'] step: 150 train_loss: 0.0012781125260517001 prediction: ['<pad> The model should overfit this sentence</s>'] step: 200 train_loss: 0.014641670510172844 prediction: ['<pad> The model should overfit this sentence</s>'] step: 250 train_loss: 6.366522575262934e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 300 train_loss: 0.0005338654736988246 prediction: ['<pad> The model should overfit this sentence</s>'] step: 350 train_loss: 0.004032869823276997 prediction: ['<pad> The model should overfit this sentence</s>'] step: 400 train_loss: 3.196050602127798e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 450 train_loss: 1.0058114639832638e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 500 train_loss: 1.513927782070823e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 550 train_loss: 4.767631980939768e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 600 train_loss: 0.005966411903500557 prediction: ['<pad> The model should overfit this sentence</s>'] step: 650 train_loss: 9.983758673115517e-07 prediction: ['<pad> The model should overfit this sentence</s>'] step: 700 train_loss: 2.6761419576359913e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 750 train_loss: 0.03052591346204281 prediction: ['<pad> The model should overfit this sentence</s>'] step: 800 train_loss: 0.00021442778233904392 prediction: ['<pad> The model should overfit this sentence</s>'] step: 850 train_loss: 4.1449759009992704e-05 prediction: ['<pad> The model should overfit this sentence</s>'] step: 900 train_loss: 0.0005854590563103557 prediction: ['<pad> The model should overfit this sentence</s>'] step: 950 train_loss: 6.643687083851546e-05 prediction: ['<pad> The model should overfit this sentence</s>'] ```<|||||>Damn not sure why I didn't check the code of the loss calculation before training a model myself ๐Ÿ™ˆ hopefully this will also solve the fine-tuning runs on larger datasets<|||||>Trying it right now! Will keep you updated once I got the results back :) <|||||>From my experiment, the training loss on larger datasets is indeed getting much lower (expected) but it doesn't seem to be solving the issue. <|||||>Thanks everyone for digging into that, I feel we are closing solving the issue, so I propose we first address https://github.com/huggingface/transformers/issues/22903#issuecomment-1518275840 Into a PR, so that at least the loss behaves more "normally". @arnaudstiegler , how much lower does the loss decreases compared than previous runs? Any curves/stats you can share? Thinking it loud, I was wondering if your ultimate issue is not a hyper parameter issue. <|||||>Losses overall look okay (with and without the label smoothing), but there seems to be some disconnect between the loss (both training and validation) value I'm getting and the actual quality of the predicted string. 
A priori, that might indicate a bug somewhere in my training workflow, but I did check it thoroughly. I also did a bunch of experiments on a single training batch, and as you reported in the notebook, the model can collapse with the wrong hyperparameters, esp. if the target is a long string. Adding some warmup seems to help, but it still behaves in a surprising way even on a single training sample. I'm actually trying to swap out Donut for Pix2Struct, and the Donut model hasn't shown any of the behavior or brittleness I'm seeing with Pix2Struct. You're probably right that there might be some hyperparameter issue, but given the "limited" size of the model, I'm really surprised that it's so sensitive to HPs. Would love to hear other people's experience with fine-tuning Pix2Struct<|||||>I have also been trying to fine-tune Pix2Struct. I find that the losses go to zero very quickly, which made me suspect that the attention masks are not being set properly. What I see is that in the `Pix2StructText` module, `self.config.is_decoder` is set to `False`, causing [this line](https://github.com/huggingface/transformers/blob/7579a52b55611ba7651b6d05cba6f45539a6089d/src/transformers/models/pix2struct/modeling_pix2struct.py#L1452) to output a non-causal attention mask. If I add the line `self.config.is_decoder = True` just above that to force it to be a decoder, things look more normal.<|||||>Interesting! @arnaudstiegler can you try this potential fix on your side and let us know how it goes?<|||||>Yeah, the model seems to be learning well on a >3k-image dataset with the change to the decoder config. This seems to be the root cause. Really good catch @gbarello-uipath :) <|||||>Glad it's working for you @arnaudstiegler! I don't have a lot of experience in the guts of the transformers repo (hence my hacky fix inside the forward function :) - could someone point me to the "right" place to make that fix? I looked into the `configuration_pix2struct.py` file, but haven't found the time yet to really dig down and actually fix it properly.<|||||>This is really cool! @gbarello-uipath , I believe you would need to add the `is_decoder=True` keyword argument here: https://github.com/huggingface/transformers/blob/c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f/src/transformers/models/pix2struct/configuration_pix2struct.py#L121 And also add it here as well (`is_decoder=is_decoder`) to fix the failing CI issues: https://github.com/huggingface/transformers/blob/c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f/src/transformers/models/pix2struct/configuration_pix2struct.py#L147 Then `get_attention_mask` should be called properly as expected. I would also advise double-checking that everything works, just in case<|||||>Let us know when you will open a Pull Request for that! Otherwise happy to do it as well<|||||>I would love to be an official contributor, even if it's just a one-line code change 😅 I will put together a PR shortly.<|||||>Awesome! Thanks again for the fix<|||||>Ok so I am working on this PR. It works fine when instantiating a brand new model, but when loading any of the pretrained models, the `is_decoder=False` flag is already saved in them, so the default kwarg gets overwritten. I suppose there isn't really a way for me to fix that directly. The only thing I can think of is to load the model, manually fix the config, and then push that new model to the hub. Is that the best way to fix the pretrained models? 
<|||||>I see, the other solution would probably be to update the `get_extended_mask` method to accept a new optional argument to force the decoder-like behavior, but I am not sure if this is the right fix. If the only solution is to update the models that are on the Hub, I am happy to update them, cc @sgugger <|||||>I think the pretrained model configs should be fixed directly.<|||||>Ok @younesbelkada I created the PR: https://github.com/huggingface/transformers/pull/23051 Hopefully I have done everything correctly :) If there is a way for me to also fix the pre-trained model configs let me know, otherwise let me know when they are fixed!<|||||>Let's close this issue as we merged #23051 ! @NielsRogge has also made a nice tutorial in https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Pix2Struct Thanks everyone<|||||>@younesbelkada I shared a [notebook](https://www.kaggle.com/code/alejopaullier/benetech-matcha-train-0-74) on how to train a Matcha/Pix2Struct model for Kaggle's Benetech competition, in case anyone is interested. This model reached the silver zone and includes the updates with the fix. <|||||>Thanks very much for sharing! It is really cool to see Matcha/Pix2Struct being used in winning notebooks in major Kaggle competitions 🔥
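For readers following the thread above: a minimal sketch of the label-masking step mentioned earlier (setting padded label positions to -100 so the loss ignores them). The pad token id below is made up purely for illustration and is not the actual Pix2Struct value:

```python
import torch

# Toy label ids; assume pad_token_id == 0 purely for this illustration.
pad_token_id = 0
labels = torch.tensor([[37, 825, 19, 1, 0, 0, 0]])

# Padding positions are set to -100, the default ignore_index of nn.CrossEntropyLoss,
# so they do not contribute to the loss.
labels = labels.masked_fill(labels == pad_token_id, -100)
print(labels)  # tensor([[  37,  825,   19,    1, -100, -100, -100]])
```

In practice this masking is applied to the tokenized targets right before they are passed as `labels` to the model.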
transformers
22,902
closed
Running GLUE example failed since Apr 17
### System Info We have continuous monitoring that runs latest huggingface models to benchmark performance, and below script failed since Apr 17 python -m torch.distributed.launch --nproc_per_node=8 /workspace/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path microsoft/deberta-large --task_name MRPC --max_seq_length 128 --learning_rate 3e-6 --do_train --output_dir /dev/shm --overwrite_output_dir --max_steps 200 --logging_steps 20 --per_device_train_batch_size 32 --fp16 It should be a check in between Apr 16 9:33 PM and Apr 17 4:01 PM PST. torch 1.14.0.dev20221213+cu116 hugginface install from source at whatever timestamp ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Will add conda environment info later python -m torch.distributed.launch --nproc_per_node=8 /workspace/transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path microsoft/deberta-large --task_name MRPC --max_seq_length 128 --learning_rate 3e-6 --do_train --output_dir /dev/shm --overwrite_output_dir --max_steps 200 --logging_steps 20 --per_device_train_batch_size 32 --fp16 ### Expected behavior above program succeed, instead here's the error Traceback (most recent call last): File "/workspace/transformers/examples/pytorch/text-classification/run_glue.py", line 626, in <module> main() File "/workspace/transformers/examples/pytorch/text-classification/run_glue.py", line 217, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/hf_argparser.py", line 332, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 110, in __init__ File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1255, in __post_init__ and (self.device.type != "cuda") File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1615, in device return self._setup_devices File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/utils/generic.py", line 54, in __get__ cached = self.fget(obj) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers-4.29.0.dev0-py3.8.egg/transformers/training_args.py", line 1549, in _setup_devices self.distributed_state = PartialState(backend=self.xpu_backend) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/accelerate/state.py", line 129, in __init__ torch.distributed.init_process_group(backend="nccl", **kwargs) TypeError: init_process_group() got multiple values for keyword argument 'backend'
04-20-2023 20:31:45
04-20-2023 20:31:45
This is the commit that seems to cause the issue: https://github.com/huggingface/transformers/commit/03462875cc2d6506eb66a74de7d19b93ce968596<|||||>@jingyanwangms Thanks for raising this issue! There was an issue that occurred on the development branch with the introduction of PartialState from accelerate and was reported here: #22816, which is likely related. Could you share more information about the running environment, specifically sharing the output of running `transformers-cli env`?<|||||>For me, updating `accelerate` via `pip install git+https://github.com/huggingface/accelerate ` solved it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
22,901
closed
Fix Slack report for Nightly CI and Past CI
# What does this PR do? For these 2 CI, so for we get ```bash Single | Multi | Category 0 | 0 | [Errored out] Examples directory 0 | 0 | [Errored out] PyTorch pipelines 0 | 0 | [Errored out] TensorFlow pipelines ... ``` But they don't have these 3 jobs in their workflow. We just need to update the notification script.
04-20-2023 20:10:14
04-20-2023 20:10:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,900
closed
Luke
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
04-20-2023 18:38:03
04-20-2023 18:38:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22900). All of your documentation changes will be reflected on that endpoint.
transformers
22,899
closed
Revert DeepSpeed stuff
# What does this PR do? During the integration some deepspeed items that need further looks into before it can be part of the integration need to be addressed/looked at more carefully. This PR reverts the base deepspeed logic done in `setup_devices` and `parallel_mode` to restore the original deepspeed behavior Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, cc @pacman100 so you're aware.
04-20-2023 18:05:42
04-20-2023 18:05:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22899). All of your documentation changes will be reflected on that endpoint.<|||||>I see this was included in transformers 4.29.0 (https://github.com/huggingface/transformers/releases/tag/v4.29.0). Could you share more about how this changes the Transformers + DeepSpeed integration? I don't quite understand the diff. Does this disable some deeper level of integration of DS with Transformers?<|||||>@jli this pr just reverted a small portion of Accelerate handling the deepspeed part when we weren't ready for that yet. CC @pacman100 if you could explain accelerates deepspeed integration vs the transformers one were replacing in terms of features? :)
transformers
22,898
closed
moved labels to the same device as logits for LILT model
# What does this PR do? As suggested in the [#22561](https://github.com/huggingface/transformers/issues/22561) moved labels to the same device as logits for the lilt model. @sgugger pls review and merge it in the main branch.
04-20-2023 17:59:34
04-20-2023 17:59:34
_The documentation is not available anymore as the PR was closed or merged._
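A generic sketch of the pattern this PR applies (not the exact LiLT code): move the labels onto the device holding the logits before computing the loss, so multi-device setups don't hit a device-mismatch error.

```python
import torch
from torch.nn import CrossEntropyLoss

num_labels = 3
logits = torch.randn(2, num_labels)   # in a multi-GPU setup these may live on a different device
labels = torch.tensor([0, 2])

loss_fct = CrossEntropyLoss()
labels = labels.to(logits.device)     # the one-line change: align devices before the loss
loss = loss_fct(logits.view(-1, num_labels), labels.view(-1))
```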
transformers
22,897
closed
Flax whisper gradient checkpointing
It uses `flax.linen.remat` and follows on PRs #13657 and #17994. # What does this PR do? Adds gradient_checkpointing to Flax Whisper models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sanchit-gandhi @peregilk
04-20-2023 16:05:08
04-20-2023 16:05:08
At the moment, the model loads fine but I then get a weird error when training or generating: ```python โ”‚ /data/venvflax/lib/python3.8/site-packages/transformers/models/whisper/modeling_flax_whisper.py: โ”‚ โ”‚ 520 in __call__ โ”‚ โ”‚ โ”‚ โ”‚ 517 โ”‚ โ”‚ โ”‚ residual = hidden_states โ”‚ โ”‚ 518 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 519 โ”‚ โ”‚ โ”‚ hidden_states = self.encoder_attn_layer_norm(hidden_states) โ”‚ โ”‚ โฑ 520 โ”‚ โ”‚ โ”‚ hidden_states, cross_attn_weights = self.encoder_attn( โ”‚ โ”‚ 521 โ”‚ โ”‚ โ”‚ โ”‚ hidden_states=hidden_states, โ”‚ โ”‚ 522 โ”‚ โ”‚ โ”‚ โ”‚ key_value_states=encoder_hidden_states, โ”‚ โ”‚ 523 โ”‚ โ”‚ โ”‚ โ”‚ attention_mask=encoder_attention_mask, โ”‚ โ”‚ โ”‚ โ”‚ /data/venvflax/lib/python3.8/site-packages/transformers/models/whisper/modeling_flax_whisper.py: โ”‚ โ”‚ 256 in __call__ โ”‚ โ”‚ โ”‚ โ”‚ 253 โ”‚ โ”‚ elif self.causal: โ”‚ โ”‚ 254 โ”‚ โ”‚ โ”‚ attention_mask = causal_mask โ”‚ โ”‚ 255 โ”‚ โ”‚ elif attention_mask is not None: โ”‚ โ”‚ โฑ 256 โ”‚ โ”‚ โ”‚ attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) โ”‚ โ”‚ 257 โ”‚ โ”‚ โ”‚ โ”‚ 258 โ”‚ โ”‚ # During fast autoregressive decoding, we feed one position at a time, โ”‚ โ”‚ 259 โ”‚ โ”‚ # and cache the keys and values step by step. โ”‚ โ”‚ โ”‚ โ”‚ /data/venvflax/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:896 in expand_dims โ”‚ โ”‚ โ”‚ โ”‚ 893 axis = _ensure_index_tuple(axis) โ”‚ โ”‚ 894 if hasattr(a, "expand_dims"): โ”‚ โ”‚ 895 โ”‚ return a.expand_dims(axis) โ”‚ โ”‚ โฑ 896 return lax.expand_dims(a, axis) โ”‚ โ”‚ 897 โ”‚ โ”‚ 898 โ”‚ โ”‚ 899 @_wraps(np.swapaxes, lax_description=_ARRAY_VIEW_DOC) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ ValueError: axis -3 is out of bounds for array of dimension 2 ``` I'm not sure what's happening. So I thought maybe @sanchit-gandhi could provide some feedback :) <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I've been digging and the only difference I can find is that for some reason the parameters for calling `FlaxWhisperDecoderLayerCollection.__call__()` in `FlaxWhisperDecoder.__call__()` are different in this PR's model than in the original implementation. I tested this using a tiny model Original model ```python encoder_attention_mask=None deterministic=True output_hidden_states=False ``` This PR's model: ```python encoder_attention_mask=True deterministic=False output_hidden_states=True ``` The rest of params are the same: `hidden_states`, `attention_mask`, `encoder_hidden_states`, `init_cache`, `output_attentions` and `return_dict`. The problem is that while the first decoder layers loads fine, the second one gets an `attention_mask` value of `True` for some reason, making any tensor operation to fail.<|||||>All passing! The main issue was a missing `self.gradient_checkpointing` in the `FlaxWhisperPreTrainedModel.__init__()` function. Took me forever to debug it. I'll clean up the git history mess, but other than that I think it's finally ready :) <|||||>Closing in favor of #22954.
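For context, a toy illustration of `flax.linen.remat`, which this PR wires into the Whisper layers — the module below is a stand-in for illustration, not the actual `FlaxWhisperEncoderLayer`:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Block(nn.Module):
    @nn.compact
    def __call__(self, x):
        return nn.Dense(16)(x)

# remat (gradient checkpointing) makes Flax recompute this block's activations
# during the backward pass instead of storing them, trading compute for memory.
CheckpointedBlock = nn.remat(Block)

params = CheckpointedBlock().init(jax.random.PRNGKey(0), jnp.ones((1, 16)))
```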
transformers
22,896
closed
don't pass None kwargs to accelerate as it doesn't handle it nicely
# What does this PR do? Fixes an issue when using deepspeed where `self.xpu_backend` is None, and by passing it to accelerate, it doesn't handle it well. ``` File "/opt/conda/lib/python3.8/site-packages/transformers/training_args.py", line 1550, in _setup_devices self.distributed_state = PartialState(backend=self.xpu_backend) File "/opt/conda/lib/python3.8/site-packages/accelerate/state.py", line 117, in __init__ torch.distributed.init_process_group(backend="nccl", **kwargs) TypeError: init_process_group() got multiple values for keyword argument 'backend' ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
04-20-2023 15:43:01
04-20-2023 15:43:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @muellerzr <|||||>We've already fixed this in accelerate via https://github.com/huggingface/accelerate/pull/1342. (Now there are more deepspeed things failing, but we're looking into that). For now, as we're working on a very large migration in the trainer, please use the pip release of transformers for stability :) Or, install `accelerate` via github with `pip install git+https://github.com/huggingface/accelerate`<|||||>thanks!
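The fix boils down to dropping `None`-valued keyword arguments before forwarding them; a minimal sketch with illustrative names (not the actual `PartialState` signature):

```python
def build_init_kwargs(backend=None, timeout=None):
    # Keep only the keyword arguments that were actually set, so a downstream call
    # such as init_process_group() is not handed backend=None on top of its own
    # default and does not raise a "multiple values for keyword argument" TypeError.
    kwargs = {"backend": backend, "timeout": timeout}
    return {k: v for k, v in kwargs.items() if v is not None}

print(build_init_kwargs())                # {}
print(build_init_kwargs(backend="nccl"))  # {'backend': 'nccl'}
```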
transformers
22,895
closed
Pin flax & optax version
# What does this PR do? Failing on [main](https://app.circleci.com/pipelines/github/huggingface/transformers/62534/workflows/d270e074-306d-4a8f-9434-fcdd979fae1b/jobs/770753) because of a new release of [optax](https://github.com/deepmind/optax/releases/tag/v0.1.5). Pinning until compatible versions with jax resolved. Fixes # (issue)
04-20-2023 15:34:56
04-20-2023 15:34:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,894
closed
Fix `FillMaskPipelineTests`
# What does this PR do? For some BPE tokenizers, `</w>` is removed during decoding, so `token_str` won't be the same as in `targets`. We need to adjust the test logic.
04-20-2023 15:26:49
04-20-2023 15:26:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,893
closed
Update Swin MIM output class
# What does this PR do? - Replaces incorrectly named `logits` output of `SwinMaskedImageModelingOutput` and `SwinV2MaskedImageModelingOutput` classes with the `reconstruction` attribute - Sets `logits` as a property for backward compatibility and adds a deprecation warning ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
04-20-2023 15:11:33
04-20-2023 15:11:33
_The documentation is not available anymore as the PR was closed or merged._
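A schematic of the backward-compatibility pattern described above — the renamed field plus the old name kept as a deprecated property. This is a simplified stand-in, not the actual `SwinMaskedImageModelingOutput` definition:

```python
import warnings
from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class MaskedImageModelingOutput:
    reconstruction: Optional[torch.FloatTensor] = None

    @property
    def logits(self):
        warnings.warn(
            "logits attribute is deprecated; please use the reconstruction attribute instead.",
            FutureWarning,
        )
        return self.reconstruction
```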
transformers
22,892
closed
Hardcode GELU as the intermediate activation for ESM
One more issue revealed by the nucleotide transformer port! This time it's the activation function - ESM uses a hardcoded GELU, which the PyTorch port gets right, but the TF port used an intermediate block copied from BERT which reads `config.hidden_act`. This value was set to `gelu` for all of the original ESM checkpoints, so the bug was silent until we tried making some new checkpoints from scratch. This PR replaces `config.hidden_act` with a hardcoded `gelu`. All ESM tests (including slow / cross-tests) pass locally.
04-20-2023 14:59:08
04-20-2023 14:59:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just to understand a bit better, does this mean that the nucleotide model has a different activation set in its config used for other layers? <|||||>Actually, no! It also always expects gelu, which matches the original ESM (both the port to HF and the original repo at Meta). The issue here is that the TF version is reading `config.hidden_act`, but that isn't even set by default - the bug slipped in because whatever way we constructed the original ESM checkpoints, that value was always set in the configs, so the issue was silent until we tried to make new configs for nucleotide transformer and they suddenly broke in TF.<|||||>@Rocketknight1 Would a possible solution to this be to update the ESM configuration to have `hidden_act` as `"gelu"` by default? If I've understood correctly, the original ESM model configs have the `hidden_act` attribute. In which case, as a user, if I updated this I'd expect it to be propagated when constructing a model from the config. <|||||>@amyeroberts I'm not sure that's the best course here! `hidden_act` is actually a parameter on the base config that `EsmConfig` inherits from. As such, it's not included in the documentation for `EsmConfig` at all. The attribute just happened to be set (I think by the ESM team) when they created the ESM checkpoints, which masked the bug. I think the right solution is to just not read the attribute at all in the code for either framework. Also, I spotted one minor issue with the weight tying fix I made for ESM, and I'm sneaking a fix for it into this PR. (decoder should be a layer when it's untied to make sure weight crossloading works properly, not a bare weight matrix).<|||||>No probs, I think they helped!
transformers
22,891
closed
[`SAM`] Change to `facebook/sam-vit-base`
# What does this PR do? Changes the checkpoint name to `sam-vit-base` instead of `sam-vit-big` which was slightly confusing for users. The checkpoints are sorted with their size - `sam-vit-base`: 350MB - `sam-vit-large`: 1GB - `sam-vit-huge`: 2GB Which now makes more sense. The repo on the Hub have been updated accordingly cc @amyeroberts @ArthurZucker @sgugger
04-20-2023 11:56:28
04-20-2023 11:56:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,890
open
`prefix_allowed_tokens_fn` does not constrain when all allowed tokens have scores of `-inf`
### System Info transformer 4.25.1 python 3.8.16 ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When using `generate()` with `prefix_allowed_tokens_fn`, (more precisely, when using `PrefixConstrainedLogitsProcessor`), when all tokens returned by `prefix_allowed_tokens_fn` have scores of `-inf`, the model does not comply with the constraints and picks the token which is not on the allowed token list. ### Expected behavior Even if all allowed tokens have score of `-inf`, the model should pick tokens from allowed token list by `prefix_allowed_tokens_fn`. I think it can be solved by using some clamp function or adding epsilon value to this code. https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/generation/logits_process.py#L692-L698 This is my own code to solve it. However, it might cause other bugs. ```python def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: masked_score = torch.full_like(scores, -math.inf) for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])): for beam_id, sent in enumerate(beam_sent): allowed_idx = batch_id * self._num_beams + beam_id, self._prefix_allowed_tokens_fn(batch_id, sent) filtered_scores = torch.clamp(scores[allowed_idx], min=-10 ** 6) masked_score[allowed_idx] = filtered_scores return masked_score ``` Edit: The model works well on `torch.clamp()` with `min=-10 ** 6`, not `min=-10 ** 8`, when all allowed token's score is -inf. Too low score token in the sequence may have affected the decoding step. I updated the above code.
04-20-2023 11:33:17
04-20-2023 11:33:17
Hey @ksh108405 👋 Constrained generation has several issues at the moment, and I'm out of bandwidth. I'm adding this to the list of things to keep an eye on when revisiting constrained generation :)
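For context, a minimal end-to-end use of `prefix_allowed_tokens_fn` with `generate()`; the constraint below is a toy one (real use cases would return a task-specific allowed list at each step):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

allowed_ids = tokenizer(" Paris London", add_special_tokens=False).input_ids

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # Called at every decoding step; must return the list of allowed token ids.
    return allowed_ids

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=3,
    num_beams=2,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```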
transformers
22,889
closed
fix warning function call creating logger error (max_length and max_new_tokens)
# What does this PR do? PR #21347 introduced a bug in the warning we display, calling the wrong warn function. There is a bug open about this error: #22636 it's either `warnings.warn("msg", UserWarning,)` or `logger.warning("msg")` In this case we have `logger.warn` which is deprecated, and `logger.warn("msg", category)` doesn't exist and throws an error: ``` --- Logging error --- Traceback (most recent call last): File "/python-path/python3.9/logging/__init__.py", line 1083, in emit msg = self.format(record) File "/python-path/python3.9/logging/__init__.py", line 927, in format return fmt.format(record) File "/python-path/python3.9/logging/__init__.py", line 663, in format record.message = record.getMessage() File "/python-path/python3.9/logging/__init__.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante ?
04-20-2023 10:32:06
04-20-2023 10:32:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>@QuentinAmbard @gante , could you please tell how to fix this bug? I still see "logging error message".<|||||>You need to wait for the next release I guess, or apply the fix directly?<|||||>> You need to wait for the next release I guess, or apply the fix directly? Yep, or you can [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) <|||||>@QuentinAmbard okay, thanks for quicky response. I will better remove the logging from source code.<|||||>@amyeroberts how to install from source, if new fix isn't merged with main branch? <|||||>@IamExperimenting The fix is merged into the main branch. Installing from source means installing the version of the library currently on main. The instructions were in the link I shared above: https://huggingface.co/docs/transformers/installation#install-from-source<|||||>@amyeroberts thanks for the information, I installed from source but its throwing error ``` from transformers import pipeline ``` error: ``` RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name `PartialState` from `accelerate` ```<|||||>You need to upgrade your `accelerate` library: `pip install --upgrade accelerate`.<|||||>@amyeroberts @sgugger it worked after upgrading. However, it removed only `logging error` not the `warning message`.
transformers
22,888
closed
fix: GPTNeoX half inference error
norm_factor is still torch.float32 after using model.half So I changed it to register_buffer so I can change it to torch.float16 after using model.half This error does not occur in all cases, but it does happen occasionally. Thanks! Error Message: ``` File "/data2/sblee/anaconda3/lib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 206, in _attn attn_scores = torch.baddbmm( RuntimeError: expected scalar type Half but found Float ``` Error Code: ``` attn_scores = torch.baddbmm( attn_scores, query, key.transpose(1, 2), beta=1.0, alpha=(torch.tensor(1.0, dtype=self.norm_factor.dtype, device=self.norm_factor.device) / self.norm_factor), ) ``` before Code: ``` class GPTNeoXAttention(nn.Module): def __init__(self, config): super().__init__() self.num_attention_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.head_size = self.hidden_size // self.num_attention_heads self.rotary_ndims = int(self.head_size * config.rotary_pct) max_positions = config.max_position_embeddings self.register_buffer( "bias", torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( 1, 1, max_positions, max_positions ), ) self.register_buffer("masked_bias", torch.tensor(-1e9)) self.rotary_emb = RotaryEmbedding( self.rotary_ndims, config.max_position_embeddings, base=config.rotary_emb_base ) self.norm_factor = torch.sqrt(torch.tensor(self.head_size, dtype=torch.float32)).to(torch.get_default_dtype()) self.query_key_value = nn.Linear(config.hidden_size, 3 * config.hidden_size) self.dense = nn.Linear(config.hidden_size, config.hidden_size) ``` after Code: ``` class GPTNeoXAttention(nn.Module): def __init__(self, config): super().__init__() self.num_attention_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.head_size = self.hidden_size // self.num_attention_heads self.rotary_ndims = int(self.head_size * config.rotary_pct) max_positions = config.max_position_embeddings self.register_buffer( "bias", torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)).view( 1, 1, max_positions, max_positions ), ) self.register_buffer("masked_bias", torch.tensor(-1e9)) self.rotary_emb = RotaryEmbedding( self.rotary_ndims, config.max_position_embeddings, base=config.rotary_emb_base ) self.register_buffer("norm_factor", torch.sqrt(torch.tensor(self.head_size, dtype=torch.float32)).to(torch.get_default_dtype())) self.query_key_value = nn.Linear(config.hidden_size, 3 * config.hidden_size) self.dense = nn.Linear(config.hidden_size, config.hidden_size) ``` before: ``` model = GPTNeoXForCausalLM.from_pretrained(F_MODEL_PATH, config=model_config) model.half() model.to("cuda") model.gpt_neox.layers[0].attention.norm_factor.dtype output: torch.float32 ``` after: ``` model = GPTNeoXForCausalLM.from_pretrained(F_MODEL_PATH, config=model_config) model.half() model.to("cuda") model.gpt_neox.layers[0].attention.norm_factor.dtype output: torch.float16 ```
04-20-2023 10:03:02
04-20-2023 10:03:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi! @younesbelkada It works fine when executed in the way you described. However, what do you think about considering the case of using model.half? Thanks! before: ``` model = GPTNeoXForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16) model.gpt_neox.layers[0].attention.norm_factor.dtype output: torch.float16 ``` after: ``` model = GPTNeoXForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16) model.gpt_neox.layers[0].attention.norm_factor.dtype output: torch.float16 ````<|||||>The canonical way is indeed to use `torch_dtype=torch.float16` as it saves a lot of memory (otherwise you instantiate your model in float32 so take a lot of space, then convert it to half). But `model.half()` should work nethertheless. Using a buffer seems like a good solution, but make it non-persistent so it doesn't get inside the `state_dict` (with `persistent=False`).<|||||>Added a parameter "persistent=False" Thanks!<|||||>Of course. Thanks!<|||||>Failure of the FLax test is unrelated and already fixed on main, so merging!
transformers
22,887
closed
Fix SAM example in documentation
As per title
04-20-2023 09:18:39
04-20-2023 09:18:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,886
closed
[`SAM`] Correct arxiv link
# What does this PR do? This PR fixes the link of SAM paper with the correct arxiv link cc @ArthurZucker @amyeroberts @osanseviero
04-20-2023 08:58:45
04-20-2023 08:58:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
22,885
closed
KeyError: eval_loss when using Trainer (SpeechT5 fine-tuning)
### System Info current main branch of Transformers (4.29.0.dev0, 20 Apr 2023) ### Who can help? @hollance ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction We recently published a Colab notebook for fine-tuning SpeechT5 for TTS. https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ This notebook worked fine previously but now it gives an error in `trainer.py` because the `eval_loss` is not part of the metrics. This happens when saving the checkpoint. The progress bar in the notebook shows "No log" for the Validation Loss. I will look into this issue myself first and try to get a smaller reproducible case. My hunch is that something changed in Trainer in between the time I wrote the notebook and now (for example, it now requires Accelerate). ### Expected behavior The notebook should work as before.
04-20-2023 08:44:03
04-20-2023 08:44:03
Added a Colab that demonstrates the issue with a minimal amount of code: https://colab.research.google.com/drive/12AFpcCE96C22-IxRRJjDIo1s4wDsP0v_?usp=sharing I still had a copy of the SpeechT5 TTS fine-tuning changes on 4.28.dev and that works fine, so something that changed between 4.28 and 4.29 has broken this. Still investigating.<|||||>OK the issue seems to be that `stop_labels` is not present in the input (since we're not actually using them) and as a result, the evaluation loop thinks the model doesn't have labels (even though it does) and doesn't report the loss. I had initially removed `stop_labels` from `model.forward` when implementing the TTS fine-tuning logic, but had put it back at the last minute to keep backwards compatibility with the publicly released version of SpeechT5. That's why the fine-tuning Colab used to work but is now broken. The question now is: why does the Trainer believe `stop_labels` are labels? And how can I tell it to ignore them? <|||||>The workaround is to add the following when creating the training arguments: ```python training_args = Seq2SeqTrainingArguments( ... label_names=["labels"], ) ``` The Trainer looks at the signature of `model.forward()` and anything with `labels` in it is assumed to be labels, which in this case includes `stop_labels`. We'll remove this argument in a future version of Transformers. But until then you can override this by supplying your own `label_names` that does not include `stop_labels`.
transformers
22,884
closed
Change schedule CI time
# What does this PR do? **This PR changes the test CI to be scheduled 2 hours after the image build CI.** Despite #22859 changed to use `accelerate@main`, the last DeepSpeed CI docker image still has `accelerate==0.18.0`. This is because that image build takes ~1h30m to finish, but the test CI is scheduled (only) 1 hour after the image build CI. Although from the next run, the schedule test CI will start to use `accelerate@main`, there will be a gap - i.e. it will use the `accelerate@main` **one day before** the current `main`.
04-20-2023 08:22:34
04-20-2023 08:22:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22884). All of your documentation changes will be reflected on that endpoint.
transformers
22,883
closed
Add FlaxWhisperForAudioClassification model
null
04-20-2023 07:25:00
04-20-2023 07:25:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sanchit-gandhi I have opened new PR. <|||||>This actually broke a lot of tests on Flax Whisper, so reverting. Can you re-open the PR and rebase on main so we can see what went wrong?<|||||>@sanchit-gandhi Request you to open this PR. <|||||>Hey @raghavanone - unfortunately a PR can't be re-opened after it's been merged. The best thing to do is add commits to the branch and create a new pull request, copying all the details over and providing a link to the original pull request. See https://stackoverflow.com/questions/12674304/github-reopening-a-merged-pull-request for details.
transformers
22,882
closed
` Device to device copy is unsupported` RuntimeError
### System Info transformer: 4.20.1 platform: docker Ubuntu python: 3.8.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am using a pretrained GPT2 model for inference, and set the device to GPU first: ``` device = torch.device("cuda") model.to(device) ``` Then I got: ``` File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1048, in forward transformer_outputs = self.transformer( File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 891, in forward outputs = block( File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 391, in forward attn_outputs = self.attn( File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 332, in forward attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) File "/opt/conda/envs/test_environment/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 201, in _attn causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].to(torch.bool) RuntimeError: Device to device copy is unsupported ``` Why is this error emitted? Thank you very much. ### Expected behavior I couldn't find the same problem by googling, so I posted it here. I expect this error to be gone.
04-20-2023 06:48:42
04-20-2023 06:48:42
Hi @lms-mt, thanks for reporting this issue! So that we can best help you, could you share the following: * The running environment: run `transformers-cli env` in the terminal and copy-paste the output * A minimal code snippet to reproduce the error<|||||>cc @ArthurZucker <|||||>I am using main and I cannot reproduce this : ```python >>> from transformers import GPT2Model >>> import torch >>> model = GPT2Model.from_pretrained("gpt2") >>> device = torch.device("cuda") >>> model.to("cuda") ``` works as expected<|||||>@amyeroberts Sorry for the late reply. ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.13.4 - PyTorch version (GPU?): 2.0.0a0+gitc263bd4 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ``` 2. Sorry, I can't, because we are working on a custom torch backend, somewhat like CUDA, and it has to be kept private for legal reasons. @ArthurZucker Thanks for your reply. Actually, I want to know whether this message is emitted from Hugging Face or torch? If I have this clue, I may be able to resolve it myself.<|||||>@lms-mt Since the `.to` method works for our supported devices e.g. `"cuda"` or `"cpu"`, I suspect the issue is arising from the custom backend. As for where the error comes from, searching in both the [Hugging Face](https://github.com/search?q=org%3Ahuggingface++%22copy+is+unsupported%22&type=issues) and [PyTorch](https://github.com/search?q=org%3Apytorch+%22copy+is+unsupported%22&type=code) orgs returns no results. It's peculiar that it's raised on a line in the Hugging Face module. Is it possible that the custom backend is causing an early termination and raising the error? <|||||>> @lms-mt Since the `.to` method works for our supported devices e.g. `"cuda"` or `"cpu"`, I suspect the issue is arising from the custom backend. > > As for where the error comes from, searching in both the [Hugging Face](https://github.com/search?q=org%3Ahuggingface++%22copy+is+unsupported%22&type=issues) and [PyTorch](https://github.com/search?q=org%3Apytorch+%22copy+is+unsupported%22&type=code) orgs returns no results. It's peculiar that it's raised on a line in the Hugging Face module. Is it possible that the custom backend is causing an early termination and raising the error? @amyeroberts Really, thanks for your reply. From my searching and your advice, I think this error may be raised by the custom backend module. I will check it out. I will close this issue because it is not well formatted. Have a good day.
transformers
22,881
closed
Question about Bloom pretrain
### Feature request Hi, a question about Bloom pretraining. In the pretraining phase, I prepared a set of unlabeled text in a .txt file. Each line is a paper or a paragraph of a paper. Each line should be independent, so the next line or text is not relevant to the previous one. The run_clm.py script reads those texts line by line, concatenates all texts from the dataset, and generates blocks using the user-defined block_size param or a default value (which is 1024). I have a question about the concatenation. If each line (or text) in my .txt file describes a different thing (i.e., each text or paragraph is independent), then the concatenation will merge them all without an explicit 'end of text/end of paper' mark. How does the Bloom model predict the next token based on the previous context? How can the model predict the first token of a new paragraph by looking at the previous context (which describes something different)? I tried to make each block contain only one paragraph or text, but they do not have the same length and I get an error. If I use the concatenation mechanism, I feel like it is totally wrong. Can anyone help me figure this out? ### Motivation Trying to make each line of text in the .txt file an individual block. ### Your contribution I am not sure which approach is right
04-20-2023 06:32:41
04-20-2023 06:32:41
Hi, @ZeyuTeng96 thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
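For readers with the same question: a schematic of the concatenate-then-chunk step that `run_clm.py` performs, plus one common workaround for the missing boundary marker — appending the tokenizer's EOS token to each document before concatenation (if the tokenizer does not already add one). The checkpoint name and block size below are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
block_size = 8  # tiny for illustration; the script defaults to something like 1024

docs = ["first independent paragraph", "a second, unrelated paragraph"]

# Append EOS to every document so a boundary marker separates unrelated texts
# inside the concatenated stream.
ids = []
for doc in docs:
    ids += tokenizer(doc)["input_ids"] + [tokenizer.eos_token_id]

# Drop the remainder and split the long stream into fixed-size blocks.
total = (len(ids) // block_size) * block_size
blocks = [ids[i : i + block_size] for i in range(0, total, block_size)]
print(blocks)
```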
transformers
22,880
closed
tests: Fix flaky test for NLLB-MoE
# What does this PR do? Fixes #22464 (and added some light docs edits I happened to notice) From my comment in the issue: Looked into this and I think the flakiness is caused by the natural variability in the sparse MoE layers. Specifically, when they calculate which experts to use in the gating logic, they're computing probabilities imperfectly for two different sets of inputs: one with prior inputs concatenated with the past key values and one with just the past key values. The test usually passes because the magnitude of the difference is usually small. Notably, when the vocab size is increased this pass rate goes up (and vice versa), since the increased representational capacity can help the model make more accurate decisions about which experts to use for each input. For example, increasing the vocab size in the config from its current 99 to 999 increases the pass rate from ~80% to ~95%. I think this flakiness is inherent in the sparse layers, but if I understand right the point of the test is to check the decoder uses the past properly, so I edited the test to use dense layers and moved the rtol down to 1e-3 to be in line with the other models' version of this check. Wrote a loop to run the test 1000 times and it passed every time. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker, @amyeroberts
04-20-2023 03:22:50
04-20-2023 03:22:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ydshieh <|||||>Happy to!
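For reference, a sketch of the relaxed tolerance comparison the reworked test relies on (toy tensors; the real test compares decoder outputs computed with and without past key values):

```python
import torch

output_with_past = torch.randn(1, 3, 16)
output_without_past = output_with_past + 1e-4  # small numerical drift

# Passes as long as the two decodings agree to within the relaxed tolerance.
torch.testing.assert_close(output_with_past, output_without_past, rtol=1e-3, atol=1e-3)
```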
transformers
22,879
closed
[Examples/TensorFlow] minor refactoring to allow compatible datasets to work
This PR removes the hard-coded "wikitext" values from the scripts so that they can be used in conjunction with any compatible dataset. @Rocketknight1 FYI.
04-20-2023 02:32:01
04-20-2023 02:32:01
_The documentation is not available anymore as the PR was closed or merged._