Dataset columns: repo (string, 1 distinct value), number (int64, 1 to 25.3k), state (string, 2 values), title (string, 1 to 487 chars), body (string, 0 to 234k chars), created_at (string, 19 chars), closed_at (string, 19 chars), comments (string, 0 to 293k chars).
transformers
18,557
closed
Segformer output size
### System Info Thanks for this repo. Segformer's output size is `input_size/4`, as mentioned here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py#L780 However, this line of documentation is wrong: https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py#L850 By the way, what would be the easiest way to increase the output size, e.g. by adding upsampling layers at the end? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Read the code :) there is an inconsistency in the doc :) ### Expected behavior The documentation should be consistent.
08-10-2022 14:37:17
08-10-2022 14:37:17
cc @NielsRogge -- is the right logits shape `(batch_size, num_labels, height, width)` or `(batch_size, num_labels, height/4, width/4)`? @joihn depending on @NielsRogge's answer, would you like to open a PR to fix the documentation? :) The PyTorch model has the same comments, which may need to be fixed as well.<|||||>Hi, Yes it should be `(batch_size, num_labels, height/4, width/4)`.<|||||>PR merged
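Since the question above also asks how to get masks back at the input resolution, here is a minimal sketch of upsampling the `height/4, width/4` logits with `torch.nn.functional.interpolate`. It is shown with the PyTorch classes and an illustrative public checkpoint; per the answer above the TF model's logits have the same shape, so the same idea applies there with `tf.image.resize`.

```python
import requests
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Illustrative checkpoint; any Segformer semantic-segmentation checkpoint behaves the same way.
checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels, height/4, width/4)

# Upsample to the original image resolution before taking the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)  # (batch_size, height, width)
```

Interpolating the logits after the forward pass is usually preferred over bolting upsampling layers onto the model, since it needs no retraining.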
transformers
18,556
closed
[Title]: Fix the NER example for TensorFlow
[Detail]: `MODEL_MAPPING` should be changed to `TF_MODEL_MAPPING` on the TensorFlow platform. [To do]: None # What does this PR do? It fixes the problem that the NER example in the `tensorflow` directory fails to run. The error message is: (tensorflow) ➜ token-classification git:(main) ✗ python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner Traceback (most recent call last): File "/Users/qcc/OpenSource/transformers/examples/tensorflow/token-classification/run_ner.py", line 57, in <module> MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys()) AttributeError: 'NoneType' object has no attribute 'keys' Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-10-2022 13:22:51
08-10-2022 13:22:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18556). All of your documentation changes will be reflected on that endpoint.
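For context, a sketch of what the corrected lines in `examples/tensorflow/token-classification/run_ner.py` would look like. This is paraphrased from the error above rather than copied from the actual diff; the `MODEL_TYPES` line follows the usual pattern in the example scripts and is included here as an assumption.

```python
# MODEL_MAPPING is the PyTorch mapping and is not populated in a TensorFlow-only
# environment, which is why `MODEL_MAPPING.keys()` blows up with
# "'NoneType' object has no attribute 'keys'" in the traceback above.
from transformers import TF_MODEL_MAPPING

MODEL_CONFIG_CLASSES = list(TF_MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
```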
transformers
18,555
closed
TensorFlow MobileViT
This PR implements the MobileViT model in TensorFlow. ## Interesting points * The classification and segmentation models provided with MobileViT are fully compatible with TensorFlow Lite. Therefore, I have included sample code in the model documentation showing how to perform the TensorFlow Lite conversion (~4 lines of code). * TFLite versions of the smallest checkpoints for classification and semantic segmentation are 1 MB and 2 MB, respectively. I believe this will be quite beneficial to the TinyML community. ## TODOs - [x] Hosting of the TF checkpoints on the Hub. (Can I do it now? If so, I need resources that show how to do that.) - [x] Remove `from_pt` wherever needed. @amyeroberts @gante @sgugger up for review!
08-10-2022 12:58:55
08-10-2022 12:58:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Didn't realize that re-requesting a review from @gante would result in removing @amyeroberts and @sgugger from the reviewer list. Please know that it was completely unintentional. <|||||>@sayakpaul no worries :)<|||||>Thanks for another great model addition @sayakpaul ! <|||||>@sayakpaul assuming it is passing the slow tests, it is ready for the TF weights. The super complex instructions to do it are as follows: 1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`) 2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D 3. In the Hub PR, tag `@joaogante, @lysandre` <|||||>Super simple (complex?) question: What is the format of `foo/bar`?<|||||>The same as the model name on the hub, e.g. [this model](https://huggingface.co/apple/mobilevit-small/tree/main) would be `apple/mobilevit-small` P.S.: I edited the comment above with a 3rd step :D<|||||>The CLI might fail due to the conversion error being above the threshold -- let us know if that happens. There is a PR open with a flag to overwrite the error threshold.<|||||>> The CLI might fail due to the conversion error being above the threshold -- let us know if that happens. There is a PR open with a flag to overwrite the error threshold. Failing due to this. Need that flag. <|||||>@sayakpaul it is now merged (https://github.com/huggingface/transformers/pull/18752). You can use `--max-error` to change the limit. This flag should be used with care. What are the differences you're seeing?<|||||>@gante, I think I have a clue as to why the `5e-5` threshold is being crossed during cross-loading. MobileViT model has these two components: unfolding and folding. They interpolate the intermediate feature maps. I checked ([Colab Notebook](https://colab.research.google.com/gist/sayakpaul/be24f152d91d0f1cbe95d5cea9ae8b14/scratchpad.ipynb)) the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values. You'd notice that the outputs assert when `atol` is 1e-5, otherwise (higher `atol`) it fails. I suspect this inconsistency has a compounding effect and is the major reason the cross-loading fails with `5e-5`. I created PRs for adding the TF weights. Navigable from here: https://huggingface.co/apple. Cc: @amyeroberts <|||||>@gante @hollance merged my PRs for the TF weights of MobileViT (thanks!). https://github.com/huggingface/transformers/pull/18555/commits/82079a74268c8b633e61064058362a0e6e53294c removes the `from_pt` argument. Nothing seems to be remaining now. Up to you (or anyone having merging privileges) to take the reigns. <|||||>> [...] the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values. This might be due to the `align_corners` option. I once wrote a long blog post about this difference between PyTorch and TF. https://machinethink.net/blog/coreml-upsampling/ Not sure if that's the same issue but it seems likely.<|||||>> > [...] the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values. > > This might be due to the `align_corners` option. I once wrote a long blog post about this difference between PyTorch and TF. https://machinethink.net/blog/coreml-upsampling/ Not sure if that's the same issue but it seems likely. Very well! 
If we need to deal with the inconsistencies between `tf.image.resize` and `nn.functional.interpolate` I suggest we do that in a separate PR 'cause various vision models would benefit from that (ViT for example). <|||||>@gante WDYT?<|||||>@sayakpaul regarding the PR, all good on my end, but we still need approval from @sgugger :D As for the `tf.image.resize` -- yeah, it would be nice to standardize for all models. Would you be interested in working on it? In any case, I'd like to ask you to open an issue, so we don't forget to track it! <|||||>> As for the tf.image.resize -- yeah, it would be nice to standardize for all models. Would you be interested in working on it? In any case, I'd like to ask you to open an issue, so we don't forget to track it! On it, sir!<|||||>@amyeroberts @gante Please take note of the changes in https://github.com/huggingface/transformers/pull/18555/commits/32cfd30cee185a090a80a6604b850c639b04203b. Initially, when I tested the TFLite conversion it didn't require any spec for [SELECT operations](https://www.tensorflow.org/lite/guide/ops_select), but now the conversion fails without a spec for the SELECT ops. What is more surprising is that the TFLite interpreter is treating `tf.Conv2D` as a SELECT op. Hence I have raised https://github.com/tensorflow/tensorflow/issues/57550. <|||||>(retriggered failing job, seems like a spurious failure)<|||||>Yeah, probably nothing related to the PR? <|||||>The build doc job failure is not spurious. There seems to be a problem with an example block introduced by this PR.<|||||>Let me see if removing comments from the example block does the trick. Because when the job wasn't failing, the example block didn't have any comments. <|||||>No, it didn't help :( Any suggestions to try out? <|||||>> The build doc job failure is not spurious. There seems to be a problem with an example block introduced by this PR. My bad :D I read the failure from bottom to top, so I didn't notice the `mobilevit` errors
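For readers who want to try the TFLite export discussed in this thread, a hedged sketch that combines the documented conversion with the SELECT-ops flag mentioned in the later comments. The checkpoint name is one of the public MobileViT checkpoints; the exact recipe in the merged documentation may differ.

```python
import tensorflow as tf
from transformers import TFMobileViTForImageClassification

model = TFMobileViTForImageClassification.from_pretrained("apple/mobilevit-xx-small")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Standard size/latency optimizations for on-device inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Allow TF "select" ops, since the converter may map some ops outside the builtin set.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("mobilevit-xx-small.tflite", "wb") as f:
    f.write(tflite_model)
```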
transformers
18,554
closed
Illegal instruction: 4 error when importing TextClassificationPipeline
### System Info ``` Name: transformers Version: 4.22.0.dev0 Name: tensorflow Version: 2.5.0 Name: torch Version: 1.12.1 Python 3.9.12 ``` I keep getting the error 'Illegal instruction: 4' when trying to import TextClassificationPipeline from transformers. Does this have anything to do with the versions of the dependencies? It is quite confusing which versions of the packages are compatible with TextClassificationPipeline; I had the same error with TensorFlow 2.9. ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - ### Expected behavior -
08-10-2022 11:55:56
08-10-2022 11:55:56
Hi @ehsong, are you on a Mac M1? Do you mind running `transformers-cli env` and printing the output here? What code are you using to trigger the issue? I googled and found this: https://stackoverflow.com/questions/14268887/what-is-the-illegal-instruction-4-error-and-why-does-mmacosx-version-min-10 I can't tell you exactly what the issue is, but it seems to be the environment you're running in that's causing this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,553
closed
OWL-ViT outputs are offset for non-square images
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.10.43.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: NA ### Who can help? @alaradirik @sgugger @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the [code snippet](https://huggingface.co/google/owlvit-base-patch32) for OWL-ViT on a large Unsplash image ([https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c](https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c)) gives an incorrect result. The bounding boxes seem offset. When cropping the image, the result is actually correct. ```python import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c" image = Image.open(requests.get(url, stream=True).raw) texts = [["flag", "car", "person", "sidewalk", "bicycle"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process(outputs=outputs, target_sizes=target_sizes) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] ``` This is the result: note that the yellow flag is detected, but the bounding box is offset. ![image](https://user-images.githubusercontent.com/89590365/183857750-74f624b3-d852-46db-a1ba-9da04c02600f.png) ### Expected behavior The `post_process()` method should correctly rescale the bounding boxes to the original image size. See the Spaces demo (which uses cropping), which shows the flag detection at the right position. ![image](https://user-images.githubusercontent.com/89590365/183858925-200d1c58-c851-4577-8518-47c8b79c7d88.png)
08-10-2022 08:57:24
08-10-2022 08:57:24
I just saw that @alaradirik acknowledged this issue in the [Community tab of the Spaces demo](https://huggingface.co/spaces/adirik/OWL-ViT/discussions/1), but I'll keep this issue open, so it's easier for others to find.<|||||>I can also confirm that @cceyda's finding works for me, i.e. doing ```python image = Image.open(requests.get(url, stream=True).raw) input_image = image.resize((768, 768)) inputs = processor(text=texts, images=input_image, return_tensors="pt") ``` while all other code is kept the same. It's thus not a bug in the `post_process()` method. ![image](https://user-images.githubusercontent.com/89590365/183862229-10f48f5d-9847-42b8-a6b8-a74d5ef603bd.png) <|||||>Hi @segments-tobias, thanks for opening the PR! @cceyda's PR fixed the demo and I confirmed that the `post_process()` method works fine. The following code prints the bounding boxes correctly: ``` import cv2 import numpy as np import torch from urllib.request import urlopen from transformers import OwlViTProcessor, OwlViTForObjectDetection processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") # Download image url = "https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c" array = np.asarray(bytearray(urlopen(url).read()), dtype=np.uint8) image = cv2.cvtColor(cv2.imdecode(array, -1), cv2.COLOR_BGR2RGB) # Text queries texts = [["flag", "car", "person", "sidewalk", "bicycle"]] # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.shape[:2]]) img_input = cv2.resize(image, (768, 768), interpolation = cv2.INTER_AREA) inputs = processor(text=texts, images=img_input, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process(outputs=outputs, target_sizes=target_sizes) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] font = cv2.FONT_HERSHEY_SIMPLEX score_threshold = 0.05 for box, score, label in zip(boxes, scores, labels): box = [int(i) for i in box.tolist()] if score >= score_threshold: image = cv2.rectangle(image, box[:2], box[2:], (255,0,0), 5) if box[3] + 25 > 768: y = box[3] - 10 else: y = box[3] + 25 image = cv2.putText( image, text[label], (box[0], y), font, 1, (255,0,0), 2, cv2.LINE_AA ) ``` I think there is an issue in `OwlViTFeatureExtractor` as omitting the manual resizing line causes unexpected outputs. I'll double check this and open a fix PR shortly.<|||||>Great! Yes, would be great to be able to leave out the resizing line<|||||>Yes, the `OwlViTFeatureExtractor` is already supposed to be doing the resizing according to this line [here](https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/owlvit/feature_extraction_owlvit.py#L197) but it isn't working for some reason I haven't debugged.<|||||>@segments-tobias @cceyda thank you both for your input! The issue was due to defining the size as a single value instead of a tuple (768 instead of (768, 768)) in `OwlViTFeatureExtractor`. This led to the image(s) getting resized along only one dimension and getting cropped along the other dimension later on in the preprocessing pipeline. The configuration files are updated and the `OwlViTProcessor` can correctly resize the input images now.
I'll open another PR to update the default values in `OwlViTFeatureExtractor` but I'm closing this issue as it is fixed.
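Until one is on a release that includes the updated configuration files, the workaround from this thread is simply to make the model input square yourself while still post-processing against the original size. A condensed sketch of that pattern (it restates the snippets above; nothing here is a new API):

```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
texts = [["flag", "car", "person", "sidewalk", "bicycle"]]

# Work around the non-square resizing bug: feed a square 768x768 image to the model...
inputs = processor(text=texts, images=image.resize((768, 768)), return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# ...but rescale the predicted boxes back to the original (height, width).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
boxes, scores, labels = results[0]["boxes"], results[0]["scores"], results[0]["labels"]
```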
transformers
18,552
closed
wav2vec2 : No MSELoss implementation
### System Info `modeling_wav2vec2.py`, line 1822, only implements `CrossEntropyLoss()`: `if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))` However, the docstring says: labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if config.num_labels > 1 a classification loss is computed (Cross-Entropy). ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = AutoModelForAudioClassification.from_pretrained( "facebook/wav2vec2-base", num_labels=1) ### Expected behavior Calculate MSELoss when num_labels == 1.
08-10-2022 06:55:02
08-10-2022 06:55:02
Hey @LaurenceYozi Indeed! The MSE loss is not implemented. Would you like to open a PR to add this? You can refer to BERT for the loss function/labels logic: https://github.com/huggingface/transformers/blob/cfd623a859890c6d106610d3c688064eadc7bd61/src/transformers/models/bert/modeling_bert.py#L1578-L1598 You should be able to copy this almost one-for-one from BERT to Wav2Vec2!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
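To make the pointer above concrete, here is a sketch of the BERT-style loss selection transplanted into a helper that a Wav2Vec2 classification head's `forward` could call. The helper name is made up for illustration, and the real PR should follow whatever structure the maintainers prefer, but the `problem_type` logic is copied from the linked BERT block.

```python
import torch
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss


def classification_loss(config, logits, labels):
    """BERT-style loss selection: regression (MSE), single-label (CE) or multi-label (BCE)."""
    if config.problem_type is None:
        if config.num_labels == 1:
            config.problem_type = "regression"
        elif config.num_labels > 1 and labels.dtype in (torch.long, torch.int):
            config.problem_type = "single_label_classification"
        else:
            config.problem_type = "multi_label_classification"

    if config.problem_type == "regression":
        loss_fct = MSELoss()
        if config.num_labels == 1:
            return loss_fct(logits.squeeze(), labels.squeeze())
        return loss_fct(logits, labels)
    if config.problem_type == "single_label_classification":
        loss_fct = CrossEntropyLoss()
        return loss_fct(logits.view(-1, config.num_labels), labels.view(-1))
    loss_fct = BCEWithLogitsLoss()
    return loss_fct(logits, labels)
```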
transformers
18,551
closed
PEGASUS-X
# What does this PR do? Adds [PEGASUS-X](https://arxiv.org/abs/2208.04347) implementation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, @patil-suraj --- Note: The models are currently hosted on https://huggingface.co/zphang but should be transferred to the Google organization shortly.
08-10-2022 00:15:30
08-10-2022 00:15:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker <|||||>On it 🤗<|||||>I can follow up on the rest of the feedback this weekend / early next week: most of it looks manageable. One comment on `DimensionInfo`: I use it to capture all of the shape-related attributes I need for the various reshapes/padding: it felt cleaner/more manageable for me to keep it all in one data structure than to pass them around individually. I can expand the attributes to the full readable names as you mention above, but I think it's useful to keep the dataclass. Let me know what you think: I'm fine either way.<|||||>> I can follow up on the rest of the feedback this weekend / early next week: most of it looks manageable. > > One comment on `DimensionInfo`: I use it to capture all of the shape-related attributes I need for the various reshapes/padding: it felt cleaner/more manageable for me to keep it all in one data structure than to pass them around individually. I can expand the attributes to the full readable names as you mention above, but I think it's useful to keep the dataclass. Let me know what you think: I'm fine either way. Thanks for the quick comment! For me it's mostly the single uppercase letters that I would like to change. Ok for me to keep the class, even if we haven't done it before for models like LongT5, Longformer or BigBird. Think overall I'd prefer to not have the class at all, but ok for me to leave it if you feel stongly about it @zphang :-) Just it'd be super nice to write out the single upper-case letters<|||||>Let me know if there is anything else I need to address!<|||||>Let me ping Peter Liu on this. He should be able to pull and push to the Google org. I will update the paths in the PR when it is ready.<|||||>Thanks for making the change! Test failures seem unrelated :-) Merging!<|||||>Hi @zphang Thank you for adding this model! We have a few failing tests for this model, which could be found on [this CI job run page](https://github.com/huggingface/transformers/runs/8173676224?check_suite_focus=true). You can click [View raw logs] on the icon at the top-right corner. - One issue is the missing checkpoint `pegasus-x-base`: ```bash pegasus-x-base is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' ``` Do you know where is the correct checkpoint? - Another test failure is `test_seq_to_seq_generation`, where the model outputs ``` PEGASUSX PEGASUS PEGASUS PEGASUS-X PEGASUS PEGASUS-X PEGASUS PEGASUS-X PEGASUS PEGASUS PEGASUS ``` Could you check if you get the expected values `we investigate the performance` on your side, and/or (if possible) why this non-sense output occurs? - For the remaining failure `test_torchscript_output_attentions`, we will fix it on our side. Thank you in advance!<|||||>Here the PR to correct the naming: https://github.com/huggingface/transformers/pull/18896/files<|||||>Fix in #19025<|||||>Thanks for sharing this model @zphang! Do you intend to release the fine-tuned checkpoints? (pubmed-large, arxiv-large, govreport-large, etc)?<|||||>The FLAX weights of the fine-tuned models can be found here https://github.com/google-research/pegasus/tree/main/pegasus/flax#checkpoints And the FLAX to HF conversion script can be found here https://github.com/google-research/pegasus/blob/main/pegasus/flax/checkpoint_conversion/convert_from_flax_to_hf.py I'll try to convert the models over and upload them to HF hub this week.
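Once the converted checkpoints are on the Hub, usage should look like any other encoder-decoder summarization model. A sketch, assuming the base checkpoint ends up under the Google organization as `google/pegasus-x-base` (the final repo names may differ) and using the roughly 16k-token input length the model targets:

```python
from transformers import AutoTokenizer, PegasusXForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-base")
model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")

# Any long document; PEGASUS-X targets inputs far beyond the usual 1024 tokens.
long_document = "PEGASUS-X extends PEGASUS to long inputs with block-local attention. " * 500
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```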
transformers
18,550
closed
Update philosophy to include other preprocessing classes
This PR removes the emphasis on NLP and focuses more on `transformers` being designed for all modalities.
08-09-2022 23:36:28
08-09-2022 23:36:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,549
closed
fail to import import transformers.trainer due to libssl.so.10: cannot open shared object file: No such file or directory
### System Info Traceback (most recent call last): File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1002, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/trainer.py", line 66, in <module> from .data.data_collator import DataCollator, DataCollatorWithPadding, default_data_collator File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/data/__init__.py", line 19, in <module> from .data_collator import ( File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/data/data_collator.py", line 21, in <module> from ..models.bert import BertTokenizer, BertTokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module> from . import ( File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/mt5/__init__.py", line 40, in <module> from ..t5.tokenization_t5_fast import T5TokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 23, in <module> from ...tokenization_utils_fast import PreTrainedTokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 24, in <module> import tokenizers.pre_tokenizers as pre_tokenizers_fast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: libssl.so.10: cannot open shared object file: No such file or directory The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 3, in <module> from transformers import Trainer, TrainingArguments, EvalPrediction File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 992, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1004, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): libssl.so.10: cannot open shared object file: No such file or directory ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Expected behavior I installed transformers following the official online website. steps: I create a new env using conda. And install it using conda. it give this error. 
I also tried installing from pip; the same error appears. From the error message, it seems the tokenizers package may be the problem, but I am not sure how to solve it.
08-09-2022 21:02:31
08-09-2022 21:02:31
I am not exactly sure how I solved it, but basically I tried all combinations of pip and conda and somehow it works now.<|||||>I don't think this should be closed as I'm getting the same error on `continuumio/anaconda3` after `conda install -c huggingface transformers` but `pip install transformers` did work.<|||||>Got the same error with `conda install -c huggingface transformers`. And thank you @Utopiah, the pip install works.<|||||>I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`<|||||>> I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)` It worked when I did `pip uninstall tokenizers` and `pip install transformers`<|||||>Got the same error with conda install -c huggingface transformers. <|||||>I was able to solve this by uninstalling torch<|||||>> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)` > > It worked when I did `pip uninstall tokenizers` and `pip install transformers` Worked for me<|||||>> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)` > > It worked when I did `pip uninstall tokenizers` and `pip install transformers` These two commands worked for me with the error 'libssl.so.10: cannot open shared object file'.
transformers
18,548
closed
Update documentation build section
This PR updates the `build_doc` with the `build_pr_documentation` job and how to see where things went wrong if the job fails.
08-09-2022 20:56:49
08-09-2022 20:56:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,547
closed
Fix memory leak issue in `torch_fx` tests
# What does this PR do? ~~**Question**: On GPU VMs, we have to use `spawn`, see [here](https://github.com/huggingface/transformers/pull/18547#issuecomment-1210375716). However, it still hangs with `spawn` (I can't figure this out yet). Should we have 2 branches: one using a new process for CPU VMs (on CircleCI), and another one using the original approach (no new process) for GPU VMs, like on the scheduled CI?~~ **I might have a solution!** --> send the model to the child process on CPU and move it to the CUDA device there. ~I am going to try `torch.multiprocessing` first.~ Not working either. ---- Run the torch_fx tests in a spawned process to avoid the [memory issue](https://github.com/huggingface/transformers/issues/18525#issue-1331914135). - See [this comment](https://github.com/huggingface/transformers/pull/18547#issuecomment-1210260525) for the effect - The reason to use `JoinableQueue` instead of `Queue` for the outputs: https://discuss.pytorch.org/t/using-torch-tensor-over-multiprocessing-queue-process-fails/2847
08-09-2022 16:35:43
08-09-2022 16:35:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>- without new process - 2~3 minutes for 100 runs - 15 MB leak per run - with `fork` - 5 minutes for 100 runs - 1 MB leak per run - hangs if `MKL_NUM_THREADS` > 1 - with `spawn` - 30 minutes for 100 runs - 1 MB leak per run<|||||>When using the new-process approach, setting `ulimit -n 2048` is sometimes necessary (for example, when running the same test in a loop). Otherwise, we might get the following error: ```bash tests/models/bart/test_modeling_bart.py::BartModelTest::test_torch_fx Traceback (most recent call last): File "/usr/lib/python3.9/multiprocessing/queues.py", line 245, in _feed File "/usr/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 358, in reduce_storage RuntimeError: unable to open shared memory object </torch_46201_690006289_939> in read-write mode: Too many open files (24) ``` More details: ```bash > ??? tests/test_modeling_common.py:769: _ _ _ tests/test_modeling_common.py:866: in _create_and_check_torch_fx_tracing ??? /usr/lib/python3.9/multiprocessing/process.py:121: in start ??? /usr/lib/python3.9/multiprocessing/context.py:277: in _Popen ??? /usr/lib/python3.9/multiprocessing/popen_fork.py:19: in __init__ ??? _ _ _ self = <multiprocessing.popen_fork.Popen object at 0x7fa12a499820>, process_obj = <ForkProcess name='ForkProcess-10' parent=46201 initial> > ??? E OSError: [Errno 24] Too many open files /usr/lib/python3.9/multiprocessing/popen_fork.py:64: OSError ``` This seems to relate to torch multiprocessing: https://discuss.pytorch.org/t/runtimeerror-unable-to-open-shared-memory-object-depending-on-the-model/116090 Another related issue (not torch): https://github.com/lava-nc/lava/issues/71<|||||>With GPU, we have to use `spawn`, otherwise ``` Process ForkProcess-1: Traceback (most recent call last): File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/transformers/tests/test_modeling_common.py", line 143, in _run_torch_jit model, input_names, filtered_inputs = in_queue.get(timeout=30) File "/usr/lib/python3.8/multiprocessing/queues.py", line 116, in get return _ForkingPickler.loads(res) File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/reductions.py", line 112, in rebuild_cuda_tensor torch.cuda._lazy_init() File "/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py", line 207, in _lazy_init raise RuntimeError( RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ```<|||||>I think it's safe to only run those tests on CPU. Also, when running locally it takes ~ 1 min (although I agree my machine might be more powerful).<|||||>@michaelbenayoun I moved (almost) the whole testing logic to the child process. One more advantage here is that the model is created in the child process, so we don't need to pass it between processes. Now, running 100 times, we see only a `0.15 MB increase of memory usage` per run.<|||||>@michaelbenayoun You are right, some models override `_create_and_check_torch_fx_tracing`. This won't fail this PR however: those models will just run the `test_torch_fx*` tests in the current manner (i.e. not in the child process). I will take a look at whether those overrides are necessary. In any case, we can merge this PR as it is (if you are happy with it), and I will work on those models later.<|||||>I think it's okay now with the changes you've made!<|||||>> I think it's okay now with the changes you've made! Would love to have an approval from you, @michaelbenayoun. But no need to rush - as long as you're finally happy with the change and click the button.<|||||>ready for @sgugger and/or @LysandreJik to have a final check 🚀 <|||||>I will merge this afternoon, after adding a short comment in `_create_and_check_torch_fx_tracing` explaining why we need this change, with a link to #18525<|||||>Hi @michaelbenayoun, I just saw that I fixed a similar issue a few months ago https://github.com/huggingface/transformers/blob/fbf382c84da4506484a23e85bd8540da5192ff4e/tests/test_modeling_common.py#L719 (for `_create_and_check_torchscript`). I am going to change this PR to simply apply that fix. Is it OK for you?<|||||>Changed the PR to simply call `clear_torch_jit_class_registry`. The test failure is unrelated to this PR; merging now.
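For context, a minimal, self-contained sketch of the spawn-plus-queue pattern discussed in this thread. The helper names and payload are hypothetical; the real test logic lives in `_create_and_check_torch_fx_tracing`, and the merged version of this PR ultimately just calls `clear_torch_jit_class_registry` instead of spawning a child process.

```python
import multiprocessing
import traceback


def _run_in_child(in_queue, out_queue):
    # Everything heavy (model construction, tracing) happens inside the child,
    # so its memory is reclaimed when the process exits.
    try:
        payload = in_queue.get(timeout=30)
        result = {"ok": True, "echo": payload}  # placeholder for the actual tracing logic
    except Exception:
        result = {"ok": False, "error": traceback.format_exc()}
    out_queue.put(result)
    out_queue.join()  # JoinableQueue: block until the parent has consumed the result


if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")  # fork is faster but breaks with CUDA, as noted above
    in_queue, out_queue = ctx.Queue(), ctx.JoinableQueue()
    child = ctx.Process(target=_run_in_child, args=(in_queue, out_queue))
    child.start()
    in_queue.put({"model_name": "bert-base-uncased"})
    result = out_queue.get(timeout=300)
    out_queue.task_done()
    child.join()
    print(result)
```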
transformers
18,546
closed
TF: XLA-trainable DeBERTa v2
# What does this PR do? As discussed in https://github.com/huggingface/transformers/issues/18476 and https://github.com/huggingface/transformers/issues/18239, there are two problems while training DeBERTa v2 with TensorFlow: 1. `TFDebertaV2StableDropout` doesn't work at training time (actually, its logic is only triggered at training time, so it doesn't work at all :D) 2. TF complains about unknown shapes in `take_along_axis` (forward and backward passes, when the batch dim is `None`) This PR fixes both problems above :) Problem 1 gets a straightforward fix. The gradient propagation code didn't have the right gradient shapes -- this PR simplifies and fixes it by moving all functions inside the special dropout class (compare to the original PT implementation [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L173) -- also notice how much more elegant TF's code is ;)). Problem 2 is trickier. The exception gets fixed with the addition of a `shape_list`, but the code is super slow on TPU. This PR adds an if/else pair of branches, one that is efficient on TPU, the other on GPU :) _____________________________________________________ ⚠️ These exceptions were not caught because DeBERTa v2 and v3 rely on special config options -- e.g. https://huggingface.co/microsoft/deberta-v3-base/blob/main/config.json#L14 How can we ensure we properly test these configurations?
08-09-2022 15:57:34
08-09-2022 15:57:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante I think it's better to replace ```python flat_x = tf.reshape(x, (-1, x.shape[-1])) flat_indices = tf.reshape(indices, (-1, indices.shape[-1])) gathered = tf.gather(flat_x, flat_indices, batch_dims=1) gathered = tf.reshape(gathered, shape_list(indices)) ``` with ```python gathered = tf.gather(x,indices,batch_dims=2) ``` which gives the same numerical results and the same performance according to my tests https://github.com/huggingface/transformers/issues/18239#issuecomment-1193126061<|||||>@WissamAntoun thank you for pointing it out, I completely missed it in the original thread! 🙏 Will make the change EDIT: this change also makes it ~10% faster 👍
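For readers who want to sanity-check the claim that the two gather forms are numerically identical, here is a small self-contained comparison (the shapes are made up for the example; the real code operates on the relative-attention index tensors):

```python
import numpy as np
import tensorflow as tf

batch_heads, seq_len, span = 8, 16, 4
x = tf.random.normal((batch_heads, seq_len, seq_len))
indices = tf.random.uniform((batch_heads, seq_len, span), maxval=seq_len, dtype=tf.int32)

# Original approach: flatten the leading dimensions, gather, then restore the shape.
flat_x = tf.reshape(x, (-1, seq_len))
flat_indices = tf.reshape(indices, (-1, span))
gathered_reshape = tf.reshape(
    tf.gather(flat_x, flat_indices, batch_dims=1), (batch_heads, seq_len, span)
)

# Suggested simplification: a single gather with two batch dimensions.
gathered_direct = tf.gather(x, indices, batch_dims=2)

np.testing.assert_allclose(gathered_reshape.numpy(), gathered_direct.numpy())
```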
transformers
18,545
closed
Preserve hub-related kwargs in AutoModel.from_pretrained
# What does this PR do? As was reported in #18537, when using `AutoConfig` inside the `AutoModel.from_pretrained` method, some kwargs are deleted and not passed to the `from_pretrained` method of the model. This PR makes sure they are preserved for those calls. Fixes #18537
08-09-2022 15:11:20
08-09-2022 15:11:20
_The documentation is not available anymore as the PR was closed or merged._
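A rough sketch of the shape of the fix, heavily simplified and with made-up names (the real change lives in `_BaseAutoModelClass.from_pretrained` in `auto_factory.py` and also has to handle `trust_remote_code`): copy the hub-related kwargs before `AutoConfig.from_pretrained` consumes them, then pass them on to the model class.

```python
from transformers import AutoConfig

HUB_KWARG_NAMES = ("revision", "use_auth_token")


def auto_from_pretrained(model_mapping, name_or_path, **kwargs):
    # Copy these first: AutoConfig.from_pretrained consumes them, so they would
    # otherwise never reach the model's own from_pretrained call.
    hub_kwargs = {k: v for k, v in kwargs.items() if k in HUB_KWARG_NAMES}
    config, unused_kwargs = AutoConfig.from_pretrained(
        name_or_path, return_unused_kwargs=True, **kwargs
    )
    model_class = model_mapping[type(config)]
    # Merge the hub kwargs back so the weights come from the same revision / with the same token.
    return model_class.from_pretrained(
        name_or_path, config=config, **{**hub_kwargs, **unused_kwargs}
    )
```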
transformers
18,544
closed
german docs translation
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger I am not sure about using "du / ihr" or "sie": "du" is more personal, while "sie" is more formal.
08-09-2022 14:01:02
08-09-2022 14:01:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think it's preferable to use the more formal option, but in most cases, I'd prefer to reformulate the sentences to use the first person plural (wir) unless the sentence actually describes an action the user has to take. We're using the same style for the French translation, which also has two pronouns for "you".<|||||>Okay, then I will rewrite to "wir" and "sie". At the moment it's still mixed between "du" and "sie".<|||||>Let me know when you're done, and thanks a lot for diving into the German translation! (Sorry, I should have begun with that!)<|||||>Thank you, @flozi00, for starting the German translation! 🤗 We created a new issue to track German translations (#18564). @sgugger LGTM once the translation is done. <|||||>ready to review @sgugger
transformers
18,543
closed
Typo in configuration
Hey @NielsRogge I found an inconsistency between the documentation and the code for the GroupViT configuration. The default for `num_output_groups` is `[64, 8, 8]` (notice the last element in the list), while the documented default is `[64, 8, 0]`. It would be great if we could make the two consistent. https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/groupvit/configuration_groupvit.py#L158 https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/groupvit/configuration_groupvit.py#L204
08-09-2022 11:09:18
08-09-2022 11:09:18
Hi, thanks for spotting, feel free to fix it in #18020 <|||||>Will do!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,542
closed
[Test] Fix redirected links issue
# What does this PR do? This PR tries to address the issue of loading a model when the original link is redirected. This happened for BLOOM models, where the repo IDs were changed but the code did not take redirected links into account. I am not sure how to properly test that this does not break anything, so I am putting this up as a test PR; feel free to ignore it. Now loading BLOOM models with the old naming works: ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "bigscience/bloom-350m" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` This could probably also be done at the `huggingface_hub` level, but I am not sure. Related to #18531
08-09-2022 10:19:20
08-09-2022 10:19:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>This should be fixed upstream (there's an open issue IIRC)<|||||>Okay, I see! I think you are referring to this issue: https://github.com/huggingface/transformers/issues/17582 posting it here for visibility! I can't see any PR related to this issue for now, maybe it is hidden in another PR? EDIT: it will be fixed once `transformers` uses `huggingface_hub` behind the scenes for loading the models<|||||>It should be fixed on the Hugging Face Hub side at this stage (the issue incorrectly reported that it works for `huggingface_hub` tools, but it does not); there is nothing left to do in Transformers.<|||||>note that in the meantime you can always opt to re-rename your repos if it's a big issue
transformers
18,541
closed
Minor update of `run_call_with_unpacked_inputs`
# What does this PR do? Use `type(self).__name__` instead of `str(self).lower()`. This is a follow-up of [this comment](https://github.com/huggingface/transformers/pull/18097#discussion_r926907848) by @gante.
08-09-2022 09:46:24
08-09-2022 09:46:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for making the change 👍
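For readers wondering why the change matters, a tiny plain-Python illustration of the difference between the two expressions (the class name is borrowed just for the example; no transformers import is needed):

```python
class TFBertMainLayer:
    pass


layer = TFBertMainLayer()

# Includes the module path and the object's memory address, e.g.
# "<__main__.tfbertmainlayer object at 0x7f...>", so substring checks are brittle.
print(str(layer).lower())

# Stable, readable class name: "TFBertMainLayer".
print(type(layer).__name__)
```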
transformers
18,540
closed
BART - Fix attention mask device issue on copied models
# What does this PR do? This PR fixes a small issue when combining `device_map=auto` and OPT. When running the script below (tested it on my VM + Google Colab) (`pip install accelerate && pip install transformers`) ``` from transformers import AutoModelForCausalLM, AutoTokenizer MAX_NEW_TOKENS = 128 model_name = "facebook/opt-2.7b" text = "Hello my name is" tokenizer = AutoTokenizer.from_pretrained(model_name) input_ids = tokenizer(text, return_tensors="pt").input_ids model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto') generated_ids = model.generate(input_ids, max_length=MAX_NEW_TOKENS) print(model.hf_device_map) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` We are getting: ``` 8 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) 533 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) 534 combined_attention_mask = ( --> 535 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask 536 ) 537 RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ``` This is because `_expand_mask` creates the mask on the cpu whereas `combined_attention_mask` is always created on the same device as `inputs_embeds`. This PR fixes this issue Thanks @ArthurZucker ! cc @sgugger All OPT slow tests are passing with this fix!
08-09-2022 08:35:55
08-09-2022 08:35:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>I guess the reason we did not see it yet for other models using the same attention mask pre-processing function is that those models do not support `device_map=auto` yet (tried it with PegasusForCausalLM only) <|||||>BART slow tests are passing! Merging now
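The gist of the fix, as a hedged sketch of the patched `_prepare_decoder_attention_mask` body (mirroring the traceback above; the exact merged diff may differ slightly): move the expanded mask onto the same device as the causal mask before adding them.

```python
# Inside _prepare_decoder_attention_mask (sketch, not the literal merged code):
if attention_mask is not None:
    # _expand_mask builds its output wherever `attention_mask` lives (the CPU in the
    # device_map="auto" example above), while the causal mask follows inputs_embeds.device,
    # so align the devices before combining.
    expanded_attn_mask = _expand_mask(
        attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
    ).to(inputs_embeds.device)
    combined_attention_mask = (
        expanded_attn_mask
        if combined_attention_mask is None
        else expanded_attn_mask + combined_attention_mask
    )
```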
transformers
18,539
closed
Thoughts on updating package metadata
### Feature request Switch to modern Python packaging standards. ### Motivation The Python packaging ecosystem has standardized on the interface for build backends ([PEP 517](https://peps.python.org/pep-0517/)/[PEP 660](https://peps.python.org/pep-0660/)) and the format for metadata declaration ([PEP 621](https://peps.python.org/pep-0621/)/[PEP 631](https://peps.python.org/pep-0631/)). As a result, the execution of `setup.py` files is now [deprecated](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html). So, I'm spending my free time updating important projects so that they are modernized and set an example for others 😄 ### Your contribution I'll open a PR to show what that would look like.
08-09-2022 06:38:14
08-09-2022 06:38:14
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,538
closed
AttributeError: 'LayoutLMForTokenClassification' object has no attribute 'config'
### System Info Adding image embeddings to layoutLM makes the model unconvertable After following the - https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb I wanted to convert the .pt model to onnx. The issue is that the changes made in the notebook do not allow for the model conversion to work. New model - --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- import torch.nn as nn from transformers.models.layoutlm import LayoutLMModel, LayoutLMConfig from transformers.modeling_outputs import TokenClassifierOutput import torchvision from torchvision.ops import RoIAlign class LayoutLMForTokenClassification(nn.Module): def __init__(self, output_size=(3,3), spatial_scale=14/224, sampling_ratio=2 ): super().__init__() # LayoutLM base model + token classifier self.num_labels = len(label2idx) self.layoutlm = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased", num_labels=self.num_labels) self.dropout = nn.Dropout(self.layoutlm.config.hidden_dropout_prob) self.classifier = nn.Linear(self.layoutlm.config.hidden_size, self.num_labels) # backbone + roi-align + projection layer model = torchvision.models.resnet101(pretrained=True) self.backbone = nn.Sequential(*(list(model.children())[:-3])) self.roi_align = RoIAlign(output_size, spatial_scale=spatial_scale, sampling_ratio=sampling_ratio) self.projection = nn.Linear(in_features=1024*3*3, out_features=self.layoutlm.config.hidden_size) def forward( self, input_ids, bbox, attention_mask, token_type_ids, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, resized_images=None, # shape (N, C, H, W), with H = W = 224 resized_and_aligned_bounding_boxes=None, # single torch tensor that also contains the batch index for every bbox at image size 224 output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - 1]``. 
""" return_dict = return_dict if return_dict is not None else self.layoutlm.config.use_return_dict # first, forward pass on LayoutLM outputs = self.layoutlm( input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] # next, send resized images of shape (batch_size, 3, 224, 224) through backbone to get feature maps of images # shape (batch_size, 1024, 14, 14) feature_maps = self.backbone(resized_images) # next, use roi align to get feature maps of individual (resized and aligned) bounding boxes # shape (batch_size*seq_len, 1024, 3, 3) device = input_ids.device resized_bounding_boxes_list = [] for i in resized_and_aligned_bounding_boxes: resized_bounding_boxes_list.append(i.float().to(device)) feat_maps_bboxes = self.roi_align(input=feature_maps, # we pass in a list of tensors # We have also added -0.5 for the first two coordinates and +0.5 for the last two coordinates, # see https://stackoverflow.com/questions/60060016/why-does-roi-align-not-seem-to-work-in-pytorch rois=resized_bounding_boxes_list ) # next, reshape + project to same dimension as LayoutLM. batch_size = input_ids.shape[0] seq_len = input_ids.shape[1] feat_maps_bboxes = feat_maps_bboxes.view(batch_size, seq_len, -1) # Shape (batch_size, seq_len, 1024*3*3) projected_feat_maps_bboxes = self.projection(feat_maps_bboxes) # Shape (batch_size, seq_len, hidden_size) # add those to the sequence_output - shape (batch_size, seq_len, hidden_size) sequence_output += projected_feat_maps_bboxes sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) loss = None if labels is not None: loss_fct = nn.CrossEntropyLoss() if attention_mask is not None: active_loss = attention_mask.view(-1) == 1 active_logits = logits.view(-1, self.num_labels)[active_loss] active_labels = labels.view(-1)[active_loss] loss = loss_fct(active_logits, active_labels) else: loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ERROR ------------------------------------------------------------------------------------------------------------------------------------ --------------------------------------------------------------------------------------------------------------------------------------------- 3 from transformers.onnx import export 4 def save_onnx(save_path): 5 onnx_config = LayoutLMOnnxConfig(model.config) 6 export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path),opset=11) [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name) 1206 return modules[name] 1207 raise AttributeError("'{}' object has no attribute '{}'".format( 1208 type(self).__name__, name)) 1209 1210 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: AttributeError: 'LayoutLMForTokenClassification' object has no attribute 'config' Please help.@NielsRogge ### Who can help? 
@NielsRogge @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Step 1 . Run this notebook - https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb?authuser=4#scrollTo=Vr4sG80hu6rC Step 2 - Run the model conversion code - from pathlib import Path from transformers.models.layoutlm import LayoutLMOnnxConfig from transformers.onnx import export def save_onnx(save_path): onnx_config = LayoutLMOnnxConfig(model.config) export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path),opset=11) print("Save model as ONNX") save_onnx('/content/data/model/model.onnx') I have also tried this method, but the output is blank.------------------------------------------------------- def save_onnx(save_path): configuration = LayoutLMConfig() onnx_config = LayoutLMOnnxConfig(configuration) export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path),opset=11) Please let me know if you will need anything else. ### Expected behavior The converted onnx model is produced in the instructed directory
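For illustration only (this is not from the thread): `transformers.onnx.export` looks up `model.config`, which a plain `nn.Module` wrapper does not expose, so one hypothetical way past the `AttributeError` above is to forward the wrapped LayoutLM config. Whether the extra image inputs of this custom head can then be described by `LayoutLMOnnxConfig` is a separate question.

```python
import torch.nn as nn
from transformers import LayoutLMModel

class LayoutLMWithConfig(nn.Module):
    """Hypothetical sketch: a wrapper that exposes `config` for the ONNX exporter."""

    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.layoutlm = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")
        self.config = self.layoutlm.config  # the attribute the exporter was failing to find
        self.classifier = nn.Linear(self.config.hidden_size, num_labels)
```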
08-09-2022 05:07:33
08-09-2022 05:07:33
Hi, This question seems better suited for our [forum](https://discuss.huggingface.co/). Would you be able to post your question there? Thanks!<|||||>ok sir
transformers
18,537
closed
AutoModel(s) do not respect the `revision` flag while loading custom models
### System Info - `transformers` version: 4.21.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.10.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?:no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForImageClassification m = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision="ed94a7c6247d8aedce4647f00f20de6875b5b292" ) # It will print: # Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. ``` I stepped through the code and observed that `AutoConfig.from_pretrained` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L423) swallows the `revision` from `kwargs`, meaning that later on line [433](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L433) it's no longer there. I believe the same issue applies to `use_auth_token`. ### Expected behavior I think the revision should propagate to both the configuration and model.
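Until the fix lands, one possible (untested) workaround sketch is to resolve the config at the pinned revision yourself and pass it in, so the `revision` kwarg is still available when the weights are loaded:

```python
from transformers import AutoConfig, AutoModelForImageClassification

revision = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
# Load the config at the pinned revision explicitly, then hand it to the model loader
# together with the revision so neither call has to guess.
config = AutoConfig.from_pretrained(
    "sgugger/custom-resnet50d", trust_remote_code=True, revision=revision
)
model = AutoModelForImageClassification.from_pretrained(
    "sgugger/custom-resnet50d", config=config, trust_remote_code=True, revision=revision
)
```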
08-09-2022 00:49:37
08-09-2022 00:49:37
cc @sgugger <|||||>Thanks for flagging! The PR linked above should solve this.<|||||>Appreciate the quick turnaround :)
transformers
18,536
closed
Propose file change
I am looking to start contributing to OSS on GitHub and trying it out first with some simple grammar fixes.
08-08-2022 23:21:12
08-08-2022 23:21:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18536). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,535
closed
Update Metrics in docs with Evaluate
This PR updates the fine-tuning tutorial to use Evaluate instead of Metrics 🙂
08-08-2022 23:01:00
08-08-2022 23:01:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,534
closed
Use commit hash to look in cache instead of calling head
# What does this PR do? This PR tries to limit the calls to requests.head made for cached models every time we try to load them. Currently on the main branch, a call to the following objects results in the following number of underlying calls to the API: - AutoConfig: 1 (phew) - AutoModel: 2 (model + config) - AutoTokenizer: 9 (multiple tokenizer files and multiple calls to config) - pipeline: 13 (all of the above + one extra call to config) - a sharded model: number of shards + 2 This is a bit excessive, so this PR reduces these calls as much as it can by using the commit hash of the first file downloaded: if it is the same as something we have in the cache, then all files in that subfolder with the same commit hash are up to date. As you can see in the tests, it does not completely succeed, because we can't detect with this reasoning whether a file does not exist in the repo: if it's not in the cache, it could be because it simply has not been downloaded yet. Still, it reduces the number of calls seen above to: - AutoConfig: 1 - AutoModel: 1 - AutoTokenizer: between 2 and 4 depending on the tokenizer - pipeline: between 2 and 4 depending on the tokenizer - a sharded model: 2
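For illustration, a rough sketch of the check described above — not the actual implementation, and the folder layout (`snapshots/<commit-hash>` per revision) is an assumption about the hub-style cache:

```python
import os

# Rough sketch only: once one file has been resolved and its commit hash is known,
# other files cached under the same snapshot folder can be reused without another
# HEAD request.
def is_cached_for_commit(cache_dir: str, repo_folder: str, commit_hash: str, filename: str) -> bool:
    snapshot_dir = os.path.join(cache_dir, repo_folder, "snapshots", commit_hash)
    return os.path.isfile(os.path.join(snapshot_dir, filename))
```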
08-08-2022 21:13:39
08-08-2022 21:13:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,533
closed
Add ConvNeXt Mask R-CNN
# What does this PR do? This PR is an initial draft for implementing the classic Mask R-CNN framework with ConvNeXt as backbone. The framework is implemented in a single script, with the exception of 3 files (for now): * assign_result.py * losses.py * mask_target.py As we have a one model, one file policy, I'm reimplementing ConvNeXT leveraging Copied from statements. So `ConvNextMaskRCNNModel` is almost identical to `ConvNextModel`. This way, the backbone used for object detection stays independent from the original one. In this case for instance, extra layernorms are added after each stage. There's a dependency on torchvision, which is used for NMS (non-maximum suppression, a postprocessing algorithm used by both the RPN head and the RoI head). To do: - [x] update NumPy logic to pure PyTorch (i.e. channels first everywhere) - see branch `add_convnext_maskrcnn_torch_shapes` - [ ] update outputs of model to have channels first (no NumPy)
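For readers unfamiliar with the torchvision dependency mentioned above, the two operators in question can be exercised on dummy tensors like this (illustrative only; shapes and thresholds are arbitrary):

```python
import torch
from torchvision.ops import nms, roi_align

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
keep = nms(boxes, scores, iou_threshold=0.5)  # indices of boxes surviving NMS

feature_map = torch.randn(1, 256, 32, 32)           # (N, C, H, W)
rois = torch.tensor([[0.0, 0.0, 0.0, 16.0, 16.0]])  # (batch_index, x1, y1, x2, y2)
pooled = roi_align(feature_map, rois, output_size=(7, 7), spatial_scale=1.0, sampling_ratio=2)
print(keep.shape, pooled.shape)
```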
08-08-2022 17:26:22
08-08-2022 17:26:22
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,532
closed
Update perf_train_gpu_one.mdx
Fixes doc newlines (which were causing markdown parser errors). Preview renders correctly: <img width="500" alt="Screenshot 2022-08-08 at 18 35 08" src="https://user-images.githubusercontent.com/11827707/183468189-0be58ab5-b1fa-4a98-a4f4-ae4751960933.png">
08-08-2022 16:06:19
08-08-2022 16:06:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,531
closed
Update BLOOM parameter counts
Update parameter counts of BLOOM models. The original counts were incorrect & have already been updated on the hub. I can't add reviewers, but @younesbelkada @thomasw21 may want to review. Script for counting:
```python
from transformers import AutoModelForCausalLM

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

count_parameters(AutoModelForCausalLM.from_pretrained("bigscience/bloom-350m"))
```
🌸🤗
08-08-2022 16:01:10
08-08-2022 16:01:10
Hi @Muennighoff ! Thanks for the fix, just FI the original model sizes were taken from: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml/smaller_models And I am afraid changing model names can lead to some breaking changes (thinking especially of all the Spaces that are using these models) I think maybe it's safer to rename the models as they were and discuss how we can fix that here <|||||>I think it's fine as old links still work ``` New: Automatic Redirection All links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. To do so, you can use the following command: git remote set-url origin {NEW_URL} ``` <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok if this is the case sounds good to me! 💪 Thanks for the fix!<|||||>Note that the spaces will probably still break; As e.g. `AutoTokenizer.from_pretrained("bigscience/bloom-350m")` no longer works<|||||>Wait I think you might have broken old links. ``` Traceback (most recent call last): File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py", line 619, in _get_config_dict resolved_config_file = cached_path( File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/utils/hub.py", line 285, in cached_path output_path = get_from_cache( File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/utils/hub.py", line 509, in get_from_cache raise OSError( OSError: Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/models/auto/auto_factory.py", line 423, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/models/auto/configuration_auto.py", line 731, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py", line 557, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py", line 659, in _get_config_dict raise EnvironmentError( OSError: Can't load config for 'bigscience/bloom-350m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bigscience/bloom-350m' is the correct path to a directory containing a config.json file ``` I'm using `transformers=4.21.0`<|||||>Yes I can confirm this breaks loading the model using `pipeline` and tokenizers as well (using transformers=4.21.0 and Google Colab). 
``` from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline MAX_NEW_TOKENS = 128 model_name = "bigscience/bloom-350m" text = "Hello my name is" pipe = pipeline(task="text-generation", model=model_name) ``` ``` OSError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in _get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 655 except EnvironmentError: 656 raise EnvironmentError( --> 657 f"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from " 658 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. " 659 f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " OSError: Can't load config for 'bigscience/bloom-350m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bigscience/bloom-350m' is the correct path to a directory containing a config.json file ``` Does not work also for models ``` model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") ``` Could you point us on how you got: ``` New: Automatic Redirection All links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. To do so, you can use the following command: git remote set-url origin {NEW_URL} ``` We can probably fix it through a PR <|||||>> I think it's fine as old links still work > > ``` > New: Automatic Redirection > All links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. To do so, you can use the following command: git remote set-url origin {NEW_URL} > ``` This just means that the old URLs still work, i.e. https://huggingface.co/bigscience/bloom-350m (It's from the Settings screen on the Hub). The model names need to be updated (which is not a bug I think).<|||||>I'd say this is a breaking change. @sgugger does the `from_pretrained` method not take in account redirection?<|||||>I addressed a potential fix in: https://github.com/huggingface/transformers/pull/18542 now I can load BLOOM models with old links but I am not sure if this breaks anything else (maybe let's wait for a review and the results of the CI tests there)<|||||>`huggingface_hub` does not take into account redirections in its download methods. The issue was given low priority from what I understand, you can bug folks internally to show it's a bit important :-)<|||||>Let's merge this? I think the damage is done & reverting now would just cause more damage. I will communicate such a change more extensively next time, sorry for the inconveniences caused.
transformers
18,530
closed
[New Model] Donut: Document Understanding Transformer
### Model description Donut, Document Understanding Transformer, is a new method of document understanding that utilizes an OCR-free end-to-end Transformer model. Donut does not require off-the-shelf OCR engines/APIs, yet it shows state-of-the-art performance on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing). In addition, we present SynthDoG, a Synthetic Document Generator that helps the model pre-training to be flexible across various languages and domains. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code @clovaai : https://github.com/clovaai/donut Weights: - https://huggingface.co/naver-clova-ix/donut-base - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v1-2560 - https://huggingface.co/naver-clova-ix/donut-base-finetuned-zhtrainticket - https://huggingface.co/naver-clova-ix/donut-base-finetuned-rvlcdip - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v1 - https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2
08-08-2022 15:33:10
08-08-2022 15:33:10
See #18488 <|||||>Cool to see you working there, thank you very much =D
transformers
18,529
closed
Fix ORTTrainer failure on DeBERTa(base/v2/sew_d) fp16 training
# What does this PR do? __Context__ It was reported in optimum https://github.com/huggingface/optimum/issues/305 that training DeBERTa with optimum.onnxruntime.ORTTrainer is broken. After investigation, the breakage comes from two causes: * At that time `XDropOut` didn't have a symbolic function. It has since been implemented by @garymm in https://github.com/huggingface/transformers/pull/17502 and merged into the main branch of transformers. * The implementation of DeBERTa has some NumPy/math operations that led to an incorrect export. This will be fixed in https://github.com/huggingface/transformers/pull/18272. However, with those two fixes the fp32 training will work, but mixed-precision training will still fail due to mismatched input dtypes for some `Matmul` nodes. In https://github.com/huggingface/transformers/pull/18272, some `sqrt` results are cast to `fp32`, and they need to be re-cast to fp16 before the `Matmul` ops; this PR is supposed to add that re-cast. Fixes https://github.com/huggingface/optimum/issues/305 ## Who can review? @LysandreJik @patrickvonplaten @lewtun
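A hedged illustration of the dtype issue described above (tensor names and shapes are made up; this is not the actual DeBERTa code): the scale is computed in fp32, then cast back to the query dtype before the matmul so a mixed-precision graph does not feed a `MatMul` with two different dtypes.

```python
import torch

dtype = torch.float32  # stands in for torch.float16 under AMP / ORT mixed precision
query_layer = torch.randn(1, 8, 16, dtype=dtype)
key_layer = torch.randn(1, 8, 16, dtype=dtype)

# Compute the scale in fp32, then cast it back to the query dtype before the matmul.
scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float32))
attention_scores = torch.matmul(
    query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)
)
```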
08-08-2022 15:10:04
08-08-2022 15:10:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18529). All of your documentation changes will be reflected on that endpoint.<|||||>Closing, as it turned out to be too messy even after rebasing.
transformers
18,528
closed
[New Model] LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding
### Model description Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure. Code and model are publicly available at [this https URL](https://github.com/jpWang/LiLT). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code @jpWang : https://github.com/jpWang/LiLT Weights (Author @ManuelFay ): - https://huggingface.co/manu/lilt-camembert-dit-base-hf - https://huggingface.co/manu/lilt-camembert-base - https://huggingface.co/manu/lilt-camembert-dit-base - https://huggingface.co/manu/lilt-infoxlm-base
08-08-2022 15:02:56
08-08-2022 15:02:56
Hi, thanks for your great effort. Contact me if any problem encountered :)<|||||>Closing as it has been added in #19450
transformers
18,527
closed
unpin resampy
# What does this PR do? unpin resampy
08-08-2022 14:10:19
08-08-2022 14:10:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>Looks good! The test running time is also normal.
transformers
18,526
closed
Specify en in doc-builder README example
# What does this PR do? Corrects a small typo in the docs README Fixes #18508 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger
08-08-2022 13:59:53
08-08-2022 13:59:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,525
open
[Summary] Regarding memory issue in tests
### Description This is a short summary of the memory issue in our tests ### The following tests definitely have memory issues - PyTorch (increase ~`15 MB` each call): - test_torch_fx - test_torch_fx_output_loss - TensorFlow: - test_xla_fit - test_xla_generate_fast (increase ~`100 MB` each call) - test_xla_generate_slow - test_xla_mode - test_onnx_runtime_optimize (increase ~`8 MB` each call) - test_dataset_conversion (increase ~`0.2 M`B each call) - **Flax**: - **Almost all test methods have memory issue!** - [The CircleCI job run page](https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources) demonstrates this issue too ### Some tests are also suspicious, but need more investigations. - For example, the test `test_graph_mode` have the following memory *difference* in consecutive runs (in KB): ``` [936.0, 520.0, 260.0, 520.0, 0.0, 0.0, 260.0, 520.0, 0.0, 0.0, 260.0, 0.0, 0.0, 0.0, 260.0, 260.0, 260.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ``` (not always increase, but it continues to happen) - For `test_saved_model_creation_extended` (in KB): ``` [144436.0, -104552.0, 1280.0, -103908.0, -1536.0, 177868.0, -33572.0, 20240.0, 170852.0, -51704.0, -8448.0, 59904.0, -48128.0, 2440.0, 34856.0, 3068.0, -3420.0, -36864.0, -6756.0, 36136.0, -2048.0, -17400.0, -4608.0, -25896.0, 4096.0, 1024.0, 22344.0, 25784.0, -256.0] ``` (sometimes some amount of memory is released, but still leaks in the long run?) ### Pytest itself will accumulate some memory usage as tests continue to run. This is just my hypothesis: sometimes I see an increase of a few KB after a sequence of runs without leak. ### Possible actions to take - (It's probably worth it to fix this issue for a few tests mentioned above to gain some experience): - In this case, we can only focus on `non-slow` tests - **[Not to go]** There is a `pytest` plugin `pytest-forked` to run each test in a forked subprocess. But it doesn't work well with TensorFlow and Flax (some tests will hang forever). I will provide some details in the comments. - We can try to run the tests per model in each CircleCI job steps. However, the output on job run pages will be a bit noisy, but we can have an extra step to print the test failures in a cleaner way.
08-08-2022 13:56:20
08-08-2022 13:56:20
**TensorFlow hangs if a TF model is forked** This will hangs ```python import tensorflow as tf from transformers import TFDistilBertModel, DistilBertConfig import multiprocessing config = DistilBertConfig() config.n_layers = 1 config.n_heads = 2 config.dim = 4 config.hidden_dim = 4 model = TFDistilBertModel(config) def func(i): print(f"func with arg {i}: start") inputs = tf.ones(shape=(2, 3), dtype=tf.int32) outputs = model(inputs) print(f"func with arg {i}: done") return outputs print("start") with multiprocessing.Pool(processes=1) as pool: r = pool.map(func=func, iterable=range(16)) print("all done") print(len(r)) ```<|||||>**Strange hanging with TensorFlow Probability** Running the test with `--forked` ``` python3 -m pytest --forked -n 2 --max-worker-restart=0 --dist=loadfile -s --make-reports=tests_tf tests/models/auto/test_modeling_tf_auto.py | tee tests_output.txt ``` with `tensorflow-probability` installed will hang. After uninstalling `tensorflow-probability`, the tests finish quickly. (I am not sure what happens with `tensorflow-probability` here though) ---- Actually, running the following also hangs: ``` python3 -m pytest --forked -v test_tf.py ``` with `test_tf.py` being ``` from transformers import TFAutoModelWithLMHead #import tensorflow_probability as tfp from transformers.models.tapas.modeling_tf_tapas import TF_TAPAS_PRETRAINED_MODEL_ARCHIVE_LIST def test_foo(): model = TFAutoModelWithLMHead.from_pretrained("julien-c/dummy-unknown") ```<|||||>**--forked hang with Flax tests** Running the following test with `--forked` will hang ```python python3 -m pytest --forked -v test_flax.py ``` with `test_flax.py` being ```python def test_flax_foo(): from transformers import FlaxDistilBertModel, DistilBertConfig import numpy as np config = DistilBertConfig() config.n_layers = 1 config.n_heads = 2 config.dim = 4 config.hidden_dim = 4 model = FlaxDistilBertModel(config) ```<|||||>cc @LysandreJik for reading :-)<|||||>To ease the debugging process, the code snippet below is a self-contained script for running `FlaxBart`. The results looks like (`mem_FlaxBartForConditionalGeneration.json`, the memory usage in `MB`) ```python [ 157772.0, 823724.0, 850768.0, 878004.0, 905340.0, 933288.0, 959816.0, 986800.0, 1013596.0, 1041560.0, 1067088.0, 1095960.0, 1121640.0, 1149596.0, 1175144.0, 1203396.0, 1228764.0, 1256536.0, 1282528.0, 1309668.0, 1337724.0, 1362584.0, 1390300.0, 1417172.0, 1443084.0, 1471568.0, 1494896.0, 1500424.0, 1512176.0, 1519920.0, 1529484.0 ] ``` Here is the code snippet to run `test_beam_search_generate`. 
(This removes all `unittest` elements, and running without pytest) ```python run_flax_bart.py import copy import json import numpy as np import os import psutil import random import jax.numpy as jnp from jax import jit from transformers import BartConfig, FlaxBartModel, FlaxBartForConditionalGeneration, FlaxBartForSequenceClassification, FlaxBartForQuestionAnswering def ids_tensor(shape, vocab_size, rng=None): """Creates a random int32 tensor of the shape within the vocab size.""" if rng is None: rng = random.Random() total_dims = 1 for dim in shape: total_dims *= dim values = [] for _ in range(total_dims): values.append(rng.randint(0, vocab_size - 1)) output = np.array(values, dtype=jnp.int32).reshape(shape) return output def random_attention_mask(shape, rng=None): attn_mask = ids_tensor(shape, vocab_size=2, rng=rng) # make sure that at least one token is attended to for each batch attn_mask[:, -1] = 1 return attn_mask def shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray: """ Shift input ids one token to the right. """ shifted_input_ids = np.zeros_like(input_ids) shifted_input_ids[:, 1:] = input_ids[:, :-1] shifted_input_ids[:, 0] = decoder_start_token_id shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids) return shifted_input_ids def prepare_bart_inputs_dict( config, input_ids, decoder_input_ids=None, attention_mask=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, ): if attention_mask is None: attention_mask = np.where(input_ids != config.pad_token_id, 1, 0) if decoder_attention_mask is None: decoder_attention_mask = np.where(decoder_input_ids != config.pad_token_id, 1, 0) if head_mask is None: head_mask = np.ones((config.encoder_layers, config.encoder_attention_heads)) if decoder_head_mask is None: decoder_head_mask = np.ones((config.decoder_layers, config.decoder_attention_heads)) if cross_attn_head_mask is None: cross_attn_head_mask = np.ones((config.decoder_layers, config.decoder_attention_heads)) return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, } class FlaxBartModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_labels=False, vocab_size=99, hidden_size=16, num_hidden_layers=2, num_attention_heads=4, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=32, eos_token_id=2, pad_token_id=1, bos_token_id=0, initializer_range=0.02, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.eos_token_id = eos_token_id self.pad_token_id = pad_token_id self.bos_token_id = bos_token_id self.initializer_range = initializer_range def prepare_config_and_inputs(self): input_ids = np.clip(ids_tensor([self.batch_size, self.seq_length - 1], self.vocab_size), 3, self.vocab_size) input_ids = np.concatenate((input_ids, 2 * np.ones((self.batch_size, 1), dtype=np.int64)), -1) 
decoder_input_ids = shift_tokens_right(input_ids, 1, 2) config = BartConfig( vocab_size=self.vocab_size, d_model=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, eos_token_id=self.eos_token_id, bos_token_id=self.bos_token_id, pad_token_id=self.pad_token_id, initializer_range=self.initializer_range, use_cache=False, ) inputs_dict = prepare_bart_inputs_dict(config, input_ids, decoder_input_ids) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict class FlaxBartModelTest: is_encoder_decoder = True def __init__(self, model_class): self.model_tester = FlaxBartModelTester(self) self.model_class = model_class def _prepare_for_class(self, inputs_dict, model_class): inputs_dict = copy.deepcopy(inputs_dict) # hack for now until we have AutoModel classes if "ForMultipleChoice" in model_class.__name__: inputs_dict = { k: jnp.broadcast_to(v[:, None], (v.shape[0], self.model_tester.num_choices, v.shape[-1])) if isinstance(v, (jnp.ndarray, np.ndarray)) else v for k, v in inputs_dict.items() } return inputs_dict def _get_input_ids_and_config(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() # cut to half length & take max batch_size 3 max_batch_size = 2 sequence_length = inputs["input_ids"].shape[-1] // 2 input_ids = inputs["input_ids"][:max_batch_size, :sequence_length] attention_mask = jnp.ones_like(input_ids) attention_mask = attention_mask[:max_batch_size, :sequence_length] # generate max 5 tokens max_length = input_ids.shape[-1] + 5 if config.eos_token_id is not None and config.pad_token_id is None: # hack to allow generate for models such as GPT2 as is done in `generate()` config.pad_token_id = config.eos_token_id return config, input_ids, attention_mask, max_length def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model_inputs = self._prepare_for_class(inputs_dict, model_class) outputs = model(**model_inputs) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, self.model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, self.model_class) def test_beam_search_generate(self): config, input_ids, _, max_length = self._get_input_ids_and_config() config.do_sample = False config.max_length = max_length config.num_beams = 2 model = self.model_class(config) generation_outputs = model.generate(input_ids).sequences jit_generate = jit(model.generate) jit_generation_outputs = jit_generate(input_ids).sequences if __name__ == "__main__": all_model_classes = ( ( # FlaxBartModel, FlaxBartForConditionalGeneration, # FlaxBartForSequenceClassification, # FlaxBartForQuestionAnswering, ) ) for model_class in all_model_classes: test = FlaxBartModelTest(model_class) all_rss = [] p = psutil.Process(os.getpid()) m = p.memory_full_info() rss = m.rss / 1024 all_rss.append(rss) for i in range(30): # This 
is fine # test.test_hidden_states_output() # Mem. leak test.test_beam_search_generate() m = p.memory_full_info() rss = m.rss / 1024 all_rss.append(rss) fn = f"mem_{model_class.__name__}.json" with open(fn, "w") as fp: json.dump(all_rss, fp, ensure_ascii=False, indent=4) ```<|||||>Thanks for summarizing all the info, @ydshieh!<|||||>To debug `test_torch_fx` more easily: with `n_iter = 500`: - with new process: + 60 MB - without new process: + 1700 MB - without `scripted(**filtered_inputs)`: + 400 MB - without `scripted(**filtered_inputs)` and `torch.jit.script(traced_model)`: + 30 MB ```python3 import copy import torch import tempfile import os import json import pickle import psutil import multiprocessing from transformers.utils.fx import symbolic_trace from transformers import BartConfig, BartModel torch_device = "cpu" model_class = BartModel config_dict = { "activation_dropout": 0.0, "activation_function": "gelu", "attention_dropout": 0.1, "bos_token_id": 0, "classifier_dropout": 0.0, "d_model": 16, "decoder_attention_heads": 4, "decoder_ffn_dim": 4, "decoder_layerdrop": 0.0, "decoder_layers": 2, "decoder_start_token_id": 2, "dropout": 0.1, "encoder_attention_heads": 4, "encoder_ffn_dim": 4, "encoder_layerdrop": 0.0, "encoder_layers": 2, "eos_token_id": 2, "forced_eos_token_id": None, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2" }, "init_std": 0.02, "is_encoder_decoder": True, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2 }, "max_position_embeddings": 20, "model_type": "bart", "num_hidden_layers": 2, "pad_token_id": 1, "scale_embedding": False, "transformers_version": "4.22.0.dev0", "use_cache": True, "vocab_size": 99 } config = BartConfig(**config_dict) inputs = { 'input_ids': torch.tensor([ [22, 30, 84, 13, 46, 95, 2], [74, 91, 58, 38, 3, 48, 2], [43, 32, 21, 60, 12, 42, 2], [20, 24, 75, 46, 62, 55, 2], [59, 91, 36, 57, 40, 36, 2], [23, 24, 33, 70, 13, 93, 2], [15, 4, 11, 45, 5, 87, 2], [78, 76, 67, 38, 3, 46, 2], [ 3, 31, 35, 85, 81, 46, 2], [47, 45, 97, 80, 75, 91, 2], [92, 49, 42, 65, 74, 98, 2], [67, 37, 84, 88, 55, 57, 2], [24, 53, 44, 36, 45, 24, 2], ], dtype=torch.int32), 'decoder_input_ids': torch.tensor([ [50, 56, 84, 91, 16, 49, 54], [ 2, 71, 62, 39, 27, 4, 93], [73, 45, 61, 63, 35, 25, 7], [27, 33, 23, 86, 13, 49, 32], [74, 36, 46, 83, 18, 40, 22], [45, 69, 41, 3, 29, 56, 49], [ 3, 38, 8, 52, 17, 55, 15], [63, 79, 42, 64, 62, 39, 40], [28, 59, 69, 14, 77, 45, 36], [56, 55, 82, 35, 66, 51, 19], [18, 96, 43, 34, 16, 69, 94], [68, 65, 52, 17, 77, 78, 54], [68, 57, 74, 42, 60, 13, 91] ]), 'attention_mask': torch.tensor([ [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True] ], dtype=torch.bool), 'decoder_attention_mask': torch.tensor([ [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, 
True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True], [True, True, True, True, True, True, True] ], dtype=torch.bool), 'head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]]), 'decoder_head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]]), 'cross_attn_head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]]) } def _config_zero_init(config): configs_no_init = copy.deepcopy(config) for key in configs_no_init.__dict__.keys(): if "_range" in key or "_std" in key or "initializer_factor" in key or "layer_scale" in key: setattr(configs_no_init, key, 1e-10) return configs_no_init def _run_torch_jit(in_queue, out_queue): model, input_names, filtered_inputs = in_queue.get() traced_model = symbolic_trace(model, input_names) # blocked if forked with torch.no_grad(): traced_output = traced_model(**filtered_inputs) # Test that the model can be TorchScripted scripted = torch.jit.script(traced_model) with torch.no_grad(): scripted_output = scripted(**filtered_inputs) out_queue.put((traced_model, scripted_output)) out_queue.join() def create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=100, with_new_proc=False): configs_no_init = _config_zero_init(config) # To be sure we have no Nan configs_no_init.return_dict = False model = model_class(config=configs_no_init) model.to(torch_device) model.eval() model.config.use_cache = False input_names = [ "attention_mask", "decoder_attention_mask", "decoder_input_ids", "input_features", "input_ids", "input_values", ] filtered_inputs = {k: v for (k, v) in inputs.items() if k in input_names} input_names = list(filtered_inputs.keys()) model_output = model(**filtered_inputs) all_rss = [] p = psutil.Process(os.getpid()) m = p.memory_full_info() rss = m.rss / 1024 all_rss.append(rss) for i in range(n_iter): print(f"idx: {i} - start") if not with_new_proc: traced_model = symbolic_trace(model, input_names) with torch.no_grad(): traced_output = traced_model(**filtered_inputs) # Test that the model can be TorchScripted scripted = torch.jit.script(traced_model) with torch.no_grad(): scripted_output = scripted(**filtered_inputs) else: ctx = multiprocessing.get_context('spawn') in_queue = ctx.Queue() out_queue = ctx.JoinableQueue() in_queue.put((model, input_names, filtered_inputs)) process = ctx.Process(target=_run_torch_jit, args=(in_queue, out_queue)) process.start() traced_model, scripted_output = out_queue.get() out_queue.task_done() process.join() print(f"idx: {i} - end") print("=" * 40) m = p.memory_full_info() rss = m.rss / 1024 all_rss.append(rss) fn = f"torch_jit_script_mem_with_new_proc={with_new_proc}.json" with open(fn, "w") as fp: json.dump(all_rss, fp, ensure_ascii=False, indent=4) if __name__ == "__main__": create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=500, with_new_proc=True) create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=500, with_new_proc=False) ```<|||||>@patil-suraj @sanchit-gandhi @patrickvonplaten We have memory leak issue in some Flax tests. Basically, I observed this happens for `test_beam_search_generate`, `test_beam_search_generate_attn_mask` and `test_beam_search_generate_logits_warper`, but there might be more. Each call to them increase memory usage by 10~30 MB. 
The CircleCI job run page also shows memory issue in Flax testing (https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources) To reproduce, see [here](https://github.com/huggingface/transformers/issues/18525#issuecomment-1209063895) for `test_beam_search_generate`. Not very urgent, but we will have trouble once models are added. Could you have a look, please? Let me know if you need more information.<|||||>Hey @ydshieh, I'm a bit under water at the moment - I'll put the issue on my TODO-list, but I can't promise to find time to look into it very soon. This link: https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources doesn't seem to show anything useful to me. Also just to understand better, are the flax tests running on GPU or CPU?
transformers
18,524
closed
Add EntityPairClassification Pipeline, AutoClass & LUKE ONNX Support
# What does this PR do? This PR started out in adding support for Luke in ONNX. To not break existing AutoPatterns in [FeaturesManager](src/transformers/onnx/features.py), [AutoModelForEntityPairClassification](src/transformers/models/auto/modeling_auto.py) has also been added. Additionally, a pipeline for [EntityPairClassification](src/transformers/pipelines/entity_pair_classification.py) has been added to make the task more supported overall by the library. Note: A previous PR (https://github.com/huggingface/transformers/pull/16562) has been closed / not merged for LUKE ONNX support. I believe this PR addresses the remaining comments in that one. All ONNX tests pass - happy to implement any additional comments for the Pipeline / Autoclass. I have only implemented one of the additional Tasks `EntityPairClassification` - if this has been done to the appropriate standard, I can also implement it for the other two remaining Luke Heads which are not currently supported `Span Classification` & `Entity Classification` @NielsRogge - Worked on the original LUKE implementation @lewtun & @michaelbenayoun - Reviewed the previous PR ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? <img width="991" alt="Screenshot 2022-08-08 at 13 27 00" src="https://user-images.githubusercontent.com/42403093/183430054-26ee3d97-c9b3-43c8-b844-cde031b263e0.png">
08-08-2022 13:39:51
08-08-2022 13:39:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18524). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @NielsRogge - what next steps would you suggest? Happy to make updates to the PR<|||||>I don't think it makes sense to create an auto-map just for this model, and the pipeline can be done as a [custom pipeline with code on the Hub](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub). If/when there are more models associated to this task, we can revisit this approach of course.<|||||>Thanks @NielsRogge , @lewtun & @sgugger ! I'll update the PR by reverting the autoclass creation and bypass the AutoModel Constructors in `test_onnx_v2.py` & use the LukeForXxx classes in `features.py` directly. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @kayvane1, feel free to revive this PR :)
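A rough sketch of the custom-pipeline route suggested above (the class and task names are illustrative, and batching/error handling are omitted):

```python
from transformers import LukeForEntityPairClassification, Pipeline
from transformers.pipelines import PIPELINE_REGISTRY

class EntityPairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        # Expects {"text": ..., "entity_spans": [(start, end), (start, end)]}
        return self.tokenizer(inputs["text"], entity_spans=inputs["entity_spans"], return_tensors="pt")

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        label_id = model_outputs.logits.argmax(-1).item()
        return {"label": self.model.config.id2label[label_id]}

PIPELINE_REGISTRY.register_pipeline(
    "entity-pair-classification",
    pipeline_class=EntityPairClassificationPipeline,
    pt_model=LukeForEntityPairClassification,
)
```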
transformers
18,523
closed
[VideoMAE] Add model to doc tests
# What does this PR do? This PR fixes the fact that VideoMAE supports the doc tests but wasn't actually being run. cc @ydshieh, could you point me to where I need to add `pip install decord` in the setup of the machine that runs the doc tests?
08-08-2022 13:31:08
08-08-2022 13:31:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @NielsRogge , this is the file to add it to: ``` docker/transformers-all-latest-gpu/Dockerfile ```<|||||>(The image will only be built tonight) If you want to build it now and even run the doctest to make sure it works, let me know
transformers
18,522
closed
New cache fixes: add safeguard before looking in folders
# What does this PR do? This PR adds a few fixes to the new cache functions, mainly so that `os.listdir` is not called on a folder that does not exist. Fixes #18517
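A tiny illustration of the kind of guard this adds (not the exact patch; the path is only an example):

```python
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/transformers")  # example path only
cached_files = []
# Only scan the cache folder if it actually exists, instead of letting os.listdir raise.
if os.path.isdir(cache_dir):
    for filename in os.listdir(cache_dir):
        cached_files.append(filename)
```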
08-08-2022 13:29:36
08-08-2022 13:29:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,521
closed
update fsdp docs
# What does this PR do? 1. Updates the FSDP doc to reflect the recently integrated features.
08-08-2022 13:11:36
08-08-2022 13:11:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,520
closed
Image transforms library
# What does this PR do? This is the first of a series of PRs to replace feature extractors with image processors for vision models. Create a new module `image_transforms.py` that will contain functions for transforming images e.g. `resize`. The functions are designed to: * Accept numpy arrays. * Return numpy arrays (except for e.g. `to_pil_image`) * Provide logic such that the new image processors produce the same outputs as feature extractors when called directly. Subsequent PRs: * Image Processor Mixin: https://github.com/amyeroberts/transformers/pull/25 * GLPNImageProcessor: https://github.com/amyeroberts/transformers/pull/23 * GLPNFeatureExtractor -> GLPNImageProcessor alias https://github.com/amyeroberts/transformers/pull/24 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
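As a sketch of the numpy-in/numpy-out convention described above (illustrative only — this is not the function that ships in `image_transforms.py`):

```python
import numpy as np
from PIL import Image

def resize(image: np.ndarray, size: tuple) -> np.ndarray:
    """Illustrative sketch: resize an HWC uint8 array and return a numpy array."""
    height, width = size
    pil_image = Image.fromarray(image)
    resized = pil_image.resize((width, height), resample=Image.BILINEAR)
    return np.array(resized)

dummy = (np.random.rand(48, 64, 3) * 255).astype(np.uint8)
print(resize(dummy, (224, 224)).shape)  # (224, 224, 3)
```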
08-08-2022 11:27:35
08-08-2022 11:27:35
@sgugger @NielsRogge @alaradirik @LysandreJik Adding you all for a first-pass review for the draft ImageProcessor work. This PR is failing because it's not safely importing e.g. `PIL` if it's not available, but the core logic shouldn't change. I'll add you to the follow up PRs too. Note: `ImageProcessor` has only been implemented for the GLPN model so far. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@alaradirik @sgugger I've now merged in the stacked PRs above this one. This PR has the transforms library and the image processor for GLPN. Thanks for all of you reviews so far! This should be ready for a final review to make sure all the pieces work together before merging. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@alaradirik @NielsRogge Could you (re-)review? <|||||>> I just have a question regarding multi-modal models such as CLIP and OWL-ViT. These models have both feature extractors and processors, which call their respective tokenizer and feature extractor. Wouldn't creating XXModelProcessor aliases for their feature extractors create issues? @alaradirik I believe this should be OK, as the feature extractors are being mapped to `XxxImageProcessor` rather than `XxxProcessor`, so there's no clash of names. Not sure if this answers your question or I've missed the consequence you're asking about.
transformers
18,519
closed
Add seed setting to image classification example
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds seed setting in the image classification example. Without it, runs are not reproducible because the seed is not set before model initialization (one can easily checks this behavior by running the command given in the README twice). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
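The change itself is small; in spirit it is just the following (the seed value shown is hypothetical):

```python
from transformers import set_seed

# Seed everything before the model is instantiated so that weight initialization,
# and therefore the whole run, is reproducible. In the example script this would be
# set_seed(training_args.seed).
set_seed(42)
```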
08-08-2022 10:37:54
08-08-2022 10:37:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,518
closed
onnx run error at translation model
### System Info - `transformers` version: 4.17.0 - Platform: Linux-5.4.0-122-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1、convert model to onnx: python3 -m transformers.onnx --model opus-mt-en-zh --atol=2e-04 --feature=seq2seq-lm opus-mt-en-zh-onnx-301 tips: Validating ONNX model... -[✓] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[✓] (2, 8, 65001) matches (2, 8, 65001) -[✓] all values close (atol: 0.0002) All good, model saved at: opus-mt-en-zh-onnx-301/model.onnx 2、translation: ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from onnxruntime import InferenceSession tokenizer=AutoTokenizer.from_pretrained("opus-mt-en-zh") session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx") inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt") outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` tips: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xieyouxi/anaconda3/envs/HuggingFace-torch-gpu/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 196, in run raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) ValueError: Model requires 4 inputs. Input Feed contains 2 ``` ### Expected behavior Unable to translate from en to zh, Am I using the wrong interface?
08-08-2022 10:34:00
08-08-2022 10:34:00
Hey @regisss @JingyaHuang @michaelbenayoun, do you have ideas about what might be happening there? I never used onnxruntime's `InferenceSession`.<|||||>@xyx361100238 The error message says that the model requires 4 inputs but you are providing only 2 of them. Either you need to provide the missing inputs, or you need to modify the `OnnxConfig` associated to your model to specify only 2 inputs. The architecture of *opus-mt-en-zh* seems to be *MarianMTModel*. According to what I see in the `OnnxConfig` [here](https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/marian/configuration_marian.py#L176), the 4 expected inputs are: - `input_ids`, - `attention_mask`, - `decoder_input_ids`, - `decoder_attention_mask`. So I think you are only providing `input_ids` and `attention_mask` here. To generate the missing inputs, you can take a look at [how dummy inputs used for exporting the model are generated](https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/marian/configuration_marian.py#L233).<|||||>Thanks for your reply! I'm sorry,still understand to generate decoder_input_ids & decoder_attention_mask,could you please give a example, or onnxruntime example with marian model!<|||||>So you also need to provide the inputs for the decoder side, something along the lines: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from onnxruntime import InferenceSession tokenizer=AutoTokenizer.from_pretrained("opus-mt-en-zh") session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx") inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt") inputs["decoder_input_ids"] = torch.tensor([tokenizer.bos_token_id], dtype=torch.long) inputs["decoder_attention_mask"] = torch.tensor([1], dtype=torch.long) outputs = session.run(output_names=["last_hidden_state"], input_feed=inputs) ``` What is true is that it would be easier if the `decoder_attention_mask` was automatically generated, but we you currently need to provide it manually.<|||||>got error: ![image](https://user-images.githubusercontent.com/19569322/183870045-562624c5-ee9c-400f-95a8-cafcf969cdc9.png) <|||||>Basically, if you want to your ONNX model to predict the next token, provide the start of sentence token as first token, maybe you do not have `tokenizer.bos_token_id` but you know the value? Or maybe you do not have a start of sentence token? How are you running things on the `transformers` side?<|||||>![image](https://user-images.githubusercontent.com/19569322/183874977-e7dbe217-861d-4a94-b05b-abbc58f188ad.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Has this problem been solved?<|||||>Not Yet!<|||||>From the [marian tokenizer](https://github.com/huggingface/transformers/blob/e342ac7e0390d157e8c14a8a7c5cf2e715341cee/src/transformers/models/marian/tokenization_marian.py#L146), the bos_token_id is not initialized. Instead it recommends using the decoder_start_token_id from the config. For this model, the decoder_start_token_id is [65000](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/blob/main/config.json#L24). 
Example:

```python
import numpy as np
from transformers import AutoTokenizer
from onnxruntime import InferenceSession

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx")
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
inputs["decoder_input_ids"] = np.array([[65000]])
inputs["decoder_attention_mask"] = np.array([[1]])
outputs = session.run(None, input_feed=dict(inputs))
```<|||||>Alternatively, I found that the optimum library makes working with seq2seq models in ONNX much easier.

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_path = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = ORTModelForSeq2SeqLM.from_pretrained(model_path, from_transformers=True)
onnx_translation = pipeline("translation", model=model, tokenizer=tokenizer)
pred = onnx_translation("Hello")
```
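For completeness, here is a minimal greedy-decoding sketch that ties the thread together. It assumes the export produced above (a single `model.onnx` with the `logits` output and the four inputs discussed), uses `65000` as the decoder start token per the linked config, and caps the output length arbitrarily; it is not the official way to run generation on an ONNX export (the `optimum` pipeline shown above is the supported route):

```python
# Sketch only: naive greedy decoding against the seq2seq-lm ONNX export from this thread.
# No past-key-value caching, so the full decoder is re-run at every step (slow but simple).
import numpy as np
from transformers import AutoTokenizer
from onnxruntime import InferenceSession

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx")  # path from the export above

enc = tokenizer("Hello", return_tensors="np")
decoder_ids = np.array([[65000]], dtype=np.int64)  # decoder_start_token_id from the linked config

for _ in range(64):  # arbitrary cap on output length
    logits = session.run(
        ["logits"],
        {
            "input_ids": enc["input_ids"].astype(np.int64),
            "attention_mask": enc["attention_mask"].astype(np.int64),
            "decoder_input_ids": decoder_ids,
            "decoder_attention_mask": np.ones_like(decoder_ids),
        },
    )[0]
    next_token = int(logits[0, -1].argmax())
    decoder_ids = np.concatenate([decoder_ids, np.array([[next_token]], dtype=np.int64)], axis=1)
    if next_token == tokenizer.eos_token_id:
        break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```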
transformers
18,517
closed
layoutlmv3 processor
### System Info ```shell The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. There was a problem when trying to move your cache: File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1551, in <module> move_cache() File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1491, in move_cache cached_files = get_all_cached_files(cache_dir=cache_dir) File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1397, in get_all_cached_files for file in os.listdir(cache_dir): ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. There was a problem when trying to move your cache: File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1551, in <module> move_cache() File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1491, in move_cache cached_files = get_all_cached_files(cache_dir=cache_dir) File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1397, in get_all_cached_files for file in os.listdir(cache_dir): ### Expected behavior ```shell install the processor ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
08-08-2022 10:23:00
08-08-2022 10:23:00
cc @sgugger <|||||>Is that the entire stacktrace, @founou-rihab ?
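As a heavily hedged side note (not from the thread): the traceback above is cut off, but it fails inside `os.listdir(cache_dir)`, so one guess is that the old cache directory does not exist on that machine. Re-running the one-time migration manually, after making sure the directory is there, might look like the sketch below; the path is the usual default and may differ if `TRANSFORMERS_CACHE` is set.

```python
import os
from transformers.utils import move_cache  # the helper named in the message above

old_cache = os.path.expanduser("~/.cache/huggingface/transformers")  # assumed default location
os.makedirs(old_cache, exist_ok=True)     # guard against the listdir failure in the traceback
move_cache()                              # the message above says the migration can be resumed this way
```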
transformers
18,516
closed
[DX fix] Fixing QA pipeline streaming a dataset.
# What does this PR do? Linked to https://github.com/huggingface/transformers/issues/18510 Enabling nicer code. The dataset example of the docs : https://huggingface.co/docs/transformers/pipeline_tutorial#audio-pipeline Wouldn't work as nicely on QA because of `QuestionAnsweringArgumentHandler`. This handler is legacy and would iterate over the whole dataset effectively killing all properties of the pipeline. This restores nice properties when using `Dataset` or `Generator` since those are meant to be consumed lazily. It means that neither `Dataset` nor `Generator` can contain odd input shapes like List of questions and single context, or lists of questions and list of contexts, but in general that should be OK since it is not advertised as working anywhere. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
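To illustrate the intended usage this change enables (a sketch only; the dataset name and batch size are placeholders), a `Dataset` or generator of `{question, context}` dicts can now be consumed lazily by the pipeline:

```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("squad", split="validation")  # placeholder dataset
qa = pipeline("question-answering", batch_size=8)

def examples():
    for item in dataset:
        yield {"question": item["question"], "context": item["context"]}

# Answers stream out one by one instead of the whole dataset being pre-processed up front.
for answer in qa(examples()):
    print(answer["answer"])
```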
08-08-2022 09:52:20
08-08-2022 09:52:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,515
closed
Adds CLIP to models exportable with ONNX
This isn't currently working, getting an error while validating the model - ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64)) ``` Environment: Pytorch: 1.13.0.dev20220806 onnxruntime: 1.12.0 Would love some guidance here! @ChainYo @patrickvonplaten
08-08-2022 08:46:23
08-08-2022 08:46:23
Hi, @unography. Could you give us a more detailed traceback, please? It's hard to say without the script command and the full traceback.<|||||>> Hi, @unography. Could you give us a more detailed traceback, please? > > It's hard to say without the script command and the full traceback. this is the full traceback - ``` (transformers) ➜ transformers git:(main) python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/ vocab_file vocab.json merges_file merges.txt tokenizer_file tokenizer.json added_tokens_file added_tokens.json special_tokens_map_file special_tokens_map.json tokenizer_config_file tokenizer_config.json Using framework PyTorch: 1.13.0.dev20220806 /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask.fill_(torch.tensor(torch.finfo(dtype).min)) /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results. warnings.warn( Validating ONNX model... 
Traceback (most recent call last): File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module> main() File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 405, in validate_model_outputs onnx_outputs = session.run(onnx_named_outputs, onnx_inputs) File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64)) ```<|||||>@unography You need to pass the ONNX inputs in the same order as they are registered in the `forward` method of the model. We can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/models/clip/modeling_clip.py#L982) that `pixel_values` comes before `attention_mask`, so in the ONNX config you must return: ``` OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("pixel_values", {0: "batch"}), ("attention_mask", {0: "batch", 1: "sequence"}), ] ) ``` Note that it is an `OrderedDict` so the order matters :) To explain a bit the error message, what was happening is that it expected the second input to be `int64` since that is how you defined it in the ONNX config. But it actually got a float tensor because `pixel_values` is passed before `attention_mask` in the `forward` method.<|||||>> @unography You need to pass the ONNX inputs in the same order as they are registered in the `forward` method of the model. We can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/models/clip/modeling_clip.py#L982) that `pixel_values` comes before `attention_mask`, so in the ONNX config you must return: > > ``` > OrderedDict( > [ > ("input_ids", {0: "batch", 1: "sequence"}), > ("pixel_values", {0: "batch"}), > ("attention_mask", {0: "batch", 1: "sequence"}), > ] > ) > ``` > > Note that it is an `OrderedDict` so the order matters :) > > To explain a bit the error message, what was happening is that it expected the second input to be `int64` since that is how you defined it in the ONNX config. But it actually got a float tensor because `pixel_values` is passed before `attention_mask` in the `forward` method. ah yes, understood. 
able to resolve this issue, getting an error on the output values now ``` (transformers) ➜ transformers git:(main) python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/ vocab_file vocab.json merges_file merges.txt tokenizer_file tokenizer.json added_tokens_file added_tokens.json special_tokens_map_file special_tokens_map.json tokenizer_config_file tokenizer_config.json Using framework PyTorch: 1.13.0.dev20220806 /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask.fill_(torch.tensor(torch.finfo(dtype).min)) /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results. warnings.warn( Validating ONNX model... 
Traceback (most recent call last): File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module> main() File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 405, in validate_model_outputs onnx_outputs = session.run(onnx_named_outputs, onnx_inputs) File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64)) (transformers) ➜ transformers git:(clip_onnx) ✗ xx (transformers) ➜ transformers git:(clip_onnx) ✗ python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/ vocab_file vocab.json merges_file merges.txt tokenizer_file tokenizer.json added_tokens_file added_tokens.json special_tokens_map_file special_tokens_map.json tokenizer_config_file tokenizer_config.json Using framework PyTorch: 1.13.0.dev20220806 /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask.fill_(torch.tensor(torch.finfo(dtype).min)) /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results. warnings.warn( Validating ONNX model... -[x] ONNX model output names {'last_hidden_state'} do not match reference model {'text_embeds', 'logits_per_image', 'text_model_output', 'logits_per_text', 'image_embeds', 'vision_model_output'} Traceback (most recent call last): File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module> main() File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 414, in validate_model_outputs raise ValueError( ValueError: Outputs doesn't match between reference model and ONNX exported model: {'last_hidden_state'} ``` I'm guessing in the onnx config I have to define a separate function for outputs as well?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. So you can override this to specify the outputs you want. Not sure though which outputs we would like to have here. Maybe `text_embeds` and `image_embeds` since this is basically feature extraction?<|||||>> @unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. So you can override this to specify the outputs you want. > > Not sure though which outputs we would like to have here. Maybe `text_embeds` and `image_embeds` since this is basically feature extraction? 
so if I define in the onnx config, say `text_embeds` and `image_embeds`, but the model is actually returning more outputs, like `vision_model_output`, will these extra outputs create any conflict or will it get handled by the onnxconfig automatically?<|||||>> > @unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. So you can override this to specify the outputs you want. > > Not sure though which outputs we would like to have here. Maybe `text_embeds` and `image_embeds` since this is basically feature extraction? > > so if I define in the onnx config, say `text_embeds` and `image_embeds`, but the model is actually returning more outputs, like `vision_model_output`, will these extra outputs create any conflict or will it get handled by the onnxconfig automatically? Not sure about this. I think it should work the same as inputs, i.e. the order will matter and the inputs that are not specified in the config will just be skipped.<|||||>@regisss sure, i'll try it out, thank you so much for your help!<|||||>how do I make the test cases pass? Is it only formatting issues or something else?<|||||>@regisss I made the changes, apart from the code formatting. How do I format my code correctly? And do I need to run `make fix-copies` ?<|||||>> @regisss I made the changes, apart from the code formatting. How do I format my code correctly? And do I need to run `make fix-copies` ? I don't think `make fix-copies` is necessary anymore because you already updated the doc.<|||||>@unography Not sure why `modeling_groupvit.py` is still in the changes. 
Also, can you make sure that the test `pytest tests/onnx/test_onnx_v2.py -v -k "clip"` pass?<|||||>@regisss i reverted changes to groupvit, and when I'm running the test (on Colab) the tests are being skipped - ``` pytest tests/onnx/test_onnx_v2.py -v -k "clip" ``` ``` ============================= test session starts ============================== platform linux -- Python 3.7.13, pytest-3.6.4, py-1.11.0, pluggy-0.7.1 -- /usr/bin/python3 cachedir: .pytest_cache rootdir: /content/transformers, inifile: setup.cfg plugins: typeguard-2.7.1 collected 398 items / 396 deselected tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [ 50%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [100%] ``` is there some issue with my changes that its skipping these tests?<|||||>> @regisss i reverted changes to groupvit, and when I'm running the test (on Colab) the tests are being skipped - > > ``` > pytest tests/onnx/test_onnx_v2.py -v -k "clip" > ``` > > ``` > ============================= test session starts ============================== > platform linux -- Python 3.7.13, pytest-3.6.4, py-1.11.0, pluggy-0.7.1 -- /usr/bin/python3 > cachedir: .pytest_cache > rootdir: /content/transformers, inifile: setup.cfg > plugins: typeguard-2.7.1 > collected 398 items / 396 deselected > > tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [ 50%] > tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [100%] > ``` > > is there some issue with my changes that its skipping these tests? @unography My bad I forgot the environment variable in the command I gave you, sorry. Here it is: ```bash RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k "clip" ``` Some tests are skipped by default because they can take some time to complete, which is why we need to specify this env variable when running them.<|||||>@regisss for some reason on google colab the tests are still being skipped, so I'm not able to test on GPU this is on my local machine - ``` (transformers) ➜ transformers git:(clip_onnx) RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k "clip" =========================================================================================== test session starts =========================================================================================== platform darwin -- Python 3.8.12, pytest-7.1.2, pluggy-1.0.0 -- /Users/dhruv/Documents/code/transformers/.venv/bin/python cachedir: .pytest_cache rootdir: /Users/dhruv/Documents/code/transformers, configfile: setup.cfg collected 398 items / 396 deselected / 2 selected tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default PASSED [ 50%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default PASSED [100%] ============================================================================================ warnings summary ============================================================================================= tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/image_utils.py:223: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BILINEAR instead. def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/feature_extraction_clip.py:67: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. resample=Image.BICUBIC, tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:681: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask.fill_(torch.tensor(torch.finfo(dtype).min)) tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if attention_mask.size() != (bsz, 1, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results. warnings.warn( -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ============================================================================= 2 passed, 396 deselected, 14 warnings in 53.51s ============================================================================= ```<|||||>@unography I'm going to checkout your branch and run the test on GPU.<|||||>@regisss do let me know if there are any further changes needed!<|||||>> @regisss do let me know if there are any further changes needed! @unography I had to change two small things in `modeling_clip.py` to make the tests pass: - replace `similarity.T` by `similarity.t()` - replace `logits_per_text.T` by `logits_per_text.t()` It seems ONNX does not like `.T`, I got the issue mentioned [here](https://github.com/pytorch/pytorch/issues/51183). Have you encountered the same issue?<|||||>> > @regisss do let me know if there are any further changes needed! > > @unography I had to change two small things in `modeling_clip.py` to make the tests pass: > > * replace `similarity.T` by `similarity.t()` > * replace `logits_per_text.T` by `logits_per_text.t()` > > It seems ONNX does not like `.T`, I got the issue mentioned [here](https://github.com/pytorch/pytorch/issues/51183). Have you encountered the same issue? Oh I think this got fixed in pytorch's latest release. I'll verify this once, and test on older versions of Pytorch as well, and make the change and push<|||||>@regisss `.T` is working for me while using pytorch's nightly release, but it fails on pytorch `1.12.1`, the stable version. I've made the change to make it `.t()`, this is working in the nightly release version as well<|||||>Thanks @unography, it looks good to me! Looking at the failed tests, it seems you need to run `make fix-copies` one more time. Could you do it please?<|||||>@regisss ah, my mistake. pushed. there are now additional changes to owlvit, groupvit and vision_text_dual_encoder, I'm assuming we copy over code to these files from the actual CLIP model?<|||||>> @regisss ah, my mistake. pushed. there are now additional changes to owlvit, groupvit and vision_text_dual_encoder, I'm assuming we copy over code to these files from the actual CLIP model? Yes that is what happens. Actually you removed those changes because they were among all the formatting changes that you got the first time you ran black, and I told you to remove them, sorry. I did not pay attention to those. It looks good to me @unography :)<|||||>Gently pinging @sgugger for approval<|||||>@sgugger sure, removed the comment and pushed, but some tests are failing right now, I can't understand why.<|||||>This is a flaky test, don't worry. Thanks again for your contribution!<|||||>Congrats @unography for this PR!<|||||>Thanks @regisss for all the help!<|||||>Huge contribution! 
That's awesome!<|||||>Awesome work @unography - are you planning to add support for GroupViT and OWL-ViT as well?<|||||>@NielsRogge sure, if they're open I'll add a draft PR for them and get started!
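Condensing the thread, the config that emerges looks roughly like the sketch below. It only shows the inputs/outputs discussed above; the class that was actually merged likely also overrides other pieces (such as dummy-input generation), and the dynamic-axis names are simplified.

```python
from collections import OrderedDict
from transformers.onnx import OnnxConfig


class CLIPOnnxConfig(OnnxConfig):
    @property
    def inputs(self):
        # Order must follow CLIPModel.forward: input_ids, pixel_values, attention_mask
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("pixel_values", {0: "batch"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )

    @property
    def outputs(self):
        # The default task expects last_hidden_state, which CLIP does not return,
        # so the outputs are overridden with the similarity logits and embeddings.
        return OrderedDict(
            [
                ("logits_per_image", {0: "batch"}),
                ("logits_per_text", {0: "batch"}),
                ("text_embeds", {0: "batch"}),
                ("image_embeds", {0: "batch"}),
            ]
        )
```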
transformers
18,514
closed
unpin torch to use 1.12.1
# What does this PR do? unpin torch to use 1.12.1
08-08-2022 08:07:52
08-08-2022 08:07:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>Let's maybe add a `!=1.12.0` as well. cc @sgugger, who also has a PR open here: https://github.com/huggingface/transformers/pull/17925<|||||>Here are the tests to fix before we can use 1.12.1 - tests/models/tapas/test_modeling_tf_tapas.py -k "TFTapasModelTest and test_pt_tf_model_equivalence" - tests/onnx/test_onnx_v2.py -k "StableDropoutTestCase and test_training" - tests/pipelines/test_pipelines_table_question_answering.py -k "TQAPipelineTests and test_slow_tokenizer_sqa_pt" - tests/pipelines/test_pipelines_table_question_answering.py -k "TQAPipelineTests and test_small_model_pt" - tests/models/tapas/test_modeling_tapas.py::TapasUtilitiesTest::<|||||>Most tapas tests will likely work given @sgugger's PR above, as it's probably linked to the version of the torch-scatter dependency<|||||>Yes! Thanks, @LysandreJik <|||||>(guess I can close this PR, and just merge #17925) I will check `StableDropoutTestCase and test_training` though.<|||||>Can you push necessary changes directly on #17925 (I'm too lazy to check this PR contains the same fixes as this one 😅 ) The branch is `enable_pt12`.<|||||>Close this and work on #17925 instead
transformers
18,513
closed
VisionEncoderDecoderModel gradient checkpointing
### Feature request

Would love to be able to use gradient checkpointing on VisionEncoderDecoder model.

```
>>> model.gradient_checkpointing_enable()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1418, in gradient_checkpointing_enable
    raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")
ValueError: VisionEncoderDecoderModel does not support gradient checkpointing.
```

### Motivation

Gradient checkpointing always helps increase the accessibility of larger models - HuggingFace is awesome!!!

### Your contribution

Happy to take a stab at this if someone can point me to a previous example of this working with an EncoderDecoder model.
08-07-2022 17:23:50
08-07-2022 17:23:50
@NielsRogge, have you seen such examples? :)<|||||>Here's a PR that added gradient checkpointing to T5: https://github.com/huggingface/transformers/pull/11353/files<|||||>Fixed per #18697
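For context, the pattern such PRs follow is roughly the one below: the composite model declares support and forwards the flag to the wrapped encoder and decoder, which already implement checkpointing. This is a simplified sketch with an illustrative class name, not the actual code merged in #18697 or the linked T5 PR.

```python
# Simplified sketch of the usual gradient-checkpointing plumbing for a composite model.
class CompositeEncoderDecoderSketch:
    supports_gradient_checkpointing = True

    def __init__(self, encoder, decoder):
        self.encoder = encoder
        self.decoder = decoder

    def _set_gradient_checkpointing(self, module, value=False):
        # PreTrainedModel.gradient_checkpointing_enable() ends up calling this hook;
        # here we simply delegate to the wrapped submodels.
        self.encoder._set_gradient_checkpointing(module, value=value)
        self.decoder._set_gradient_checkpointing(module, value=value)
```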
transformers
18,512
closed
Add Spanish translation of converting_tensorflow_models.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish translation for `converting_tensorflow_models.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the updated `_toctree.yml` file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? **Task assignment [here](https://github.com/huggingface/transformers/pull/18415#issuecomment-1203391039)**. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-07-2022 17:05:17
08-07-2022 17:05:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hola @omarespejel. This PR is ready for review. Can you help me here? Thx a lot!<|||||>@donelianc muchas gracias for the translation! I added a few comments as a review 🚀.<|||||>@omarespejel suggested changes done 😀<|||||>Muchas gracias @donelianc! Thanks for the translation! @sgugger LGTM :)
transformers
18,511
closed
FSDP - TypeError: load_state_dict() got an unexpected keyword argument 'strict'
### System Info ``` - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid - Python version: 3.7.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behaviour: 1. Clone transformers - `git clone https://github.com/huggingface/transformers.git` 2. move to transformers folder - `cd transformers` 3. Install from source - `pip install .` 4. Move to image-classification example - `cd examples/pytorch/image-classification` 5. Train the model using fsdp ``` torchrun --nproc_per_node=4 run_image_classification.py --dataset_name beans --output_dir ./beans_outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 5 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --save_strategy epoch --load_best_model_at_end True --save_total_limit 3 --seed 1337 --fsdp "full_shard auto_wrap" ``` ### Expected behavior Model should get finetuned and saved successfully. However, the following error is produced ``` [INFO|trainer.py:1949] 2022-08-07 08:35:00,771 >> Loading best model from ./beans_outputs/checkpoint-165 (score: 0.19044387340545654). 
Traceback (most recent call last): File "run_image_classification.py", line 384, in <module> main() File "run_image_classification.py", line 358, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model load_result = model.load_state_dict(state_dict, strict=False) TypeError: load_state_dict() got an unexpected keyword argument 'strict' Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "run_image_classification.py", line 384, in <module> File "run_image_classification.py", line 384, in <module> File "run_image_classification.py", line 384, in <module> main()main() File "run_image_classification.py", line 358, in main File "run_image_classification.py", line 358, in main main() File "run_image_classification.py", line 358, in main train_result = trainer.train(resume_from_checkpoint=checkpoint)train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train ignore_keys_for_eval=ignore_keys_for_eval,ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop self._load_best_model()self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model load_result = model.load_state_dict(state_dict, strict=False)load_result = model.load_state_dict(state_dict, strict=False) TypeErrorTypeError: : load_state_dict() got an unexpected keyword argument 'strict'load_state_dict() got an unexpected keyword argument 'strict' load_result = model.load_state_dict(state_dict, strict=False) TypeError: load_state_dict() got an unexpected keyword argument 'strict' ``` Full example log - [fsdp_error.txt](https://github.com/huggingface/transformers/files/9276468/fsdp_error.txt) Torch environment details: ``` PyTorch version: 1.12.0+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.22.4 Libc version: glibc-2.10 Python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] (64-bit 
runtime) Python platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB GPU 1: Tesla V100-SXM2-16GB GPU 2: Tesla V100-SXM2-16GB GPU 3: Tesla V100-SXM2-16GB Nvidia driver version: 510.47.03 cuDNN version: Probably one of the following: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mlflow-torchserve==0.2.0 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.21.6 [pip3] numpydoc==1.1.0 [pip3] pytorch-kfp-components==0.1.0 [pip3] pytorch-lightning==1.6.5 [pip3] pytorch-ranger==0.1.1 [pip3] torch==1.12.0 [pip3] torch-model-archiver==0.6.0 [pip3] torch-optimizer==0.1.0 [pip3] torch-workflow-archiver==0.2.4b20220511 [pip3] torchdata==0.4.0 [pip3] torchmetrics==0.7.3 [pip3] torchserve==0.6.0 [pip3] torchtext==0.13.0 [pip3] torchvision==0.13.0 [conda] blas 1.0 mkl [conda] mkl 2020.2 256 [conda] mkl-service 2.3.0 py37he8ac12f_0 [conda] mkl_fft 1.2.1 py37h54f3939_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] mlflow-torchserve 0.2.0 pypi_0 pypi [conda] numpy 1.21.6 pypi_0 pypi [conda] numpydoc 1.1.0 pyhd3eb1b0_1 [conda] pytorch-kfp-components 0.1.0 pypi_0 pypi [conda] pytorch-lightning 1.6.5 pypi_0 pypi [conda] pytorch-ranger 0.1.1 pypi_0 pypi [conda] torch 1.12.0 pypi_0 pypi [conda] torch-model-archiver 0.6.0 pypi_0 pypi [conda] torch-optimizer 0.1.0 pypi_0 pypi [conda] torch-workflow-archiver 0.2.4b20220511 pypi_0 pypi [conda] torchdata 0.4.0 pypi_0 pypi [conda] torchmetrics 0.7.3 pypi_0 pypi [conda] torchserve 0.6.0 pypi_0 pypi [conda] torchtext 0.13.0 pypi_0 pypi [conda] torchvision 0.13.0 pypi_0 pypi ``` the issue seems to be appearing after [this commit ](https://gist.github.com/shrinath-suresh/d613b48791d7fc49b859508ec8676ba1).
08-07-2022 08:47:33
08-07-2022 08:47:33
Hello @shrinath-suresh , this issue has to be fixed from PyTorch side. The issue raised with PyTorch has been linked above.<|||||>Also, when using `auto_wrap` please specify either `--fsdp_transformer_layer_cls_to_wrap <value>` or `--fsdp_min_num_params <number>` as part of cmd arguments. This is what enables sharding of parameters, gradients and optimizer state across GPUs so that peak memory usage is further decreased drastically and you get the most out of using FSDP. For more details, please refer https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html and https://pytorch.org/docs/1.12/fsdp.html?highlight=fsdp#module-torch.distributed.fsdp. 🤗 Trainer FSDP integration doc is being updated to reflect the recent updates in this PR https://github.com/huggingface/transformers/pull/18521. Please refer it for more details.<|||||>Thanks for raising this issue! I responded in PT: https://github.com/pytorch/pytorch/issues/82963. Although, not sure if HF uses nightlies/latest PT or a stable version. If we can't get pytorch updated in HF to include the fix, could we work around this by changing ``` model.load_state_dict(state_dict, strict=False) ``` to ``` model.load_state_dict(state_dict, False) ```<|||||>@rohan-varma Thank you very much. I applied the fix as given in the screenshot and compiled from source. The model is gettting saved in the fsdp mode. Attached image and logs for the same ![image](https://user-images.githubusercontent.com/63862647/184059491-94326735-b031-44dd-800e-660f5687c9b2.png) [vit_fsdp_with_fix.txt](https://github.com/huggingface/transformers/files/9305853/vit_fsdp_with_fix.txt) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This should be fixed in PyTorch nightly now: https://github.com/pytorch/pytorch/pull/83309
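Summarizing the workaround above for anyone patching a local copy while waiting on the PyTorch fix: in `Trainer._load_best_model`, pass `strict` positionally so the FSDP wrapper's `load_state_dict` override in torch 1.12 (which lacks the keyword) still accepts it. Line numbers in your install will drift; this is the one-line change only.

```python
# In transformers/trainer.py, Trainer._load_best_model:
load_result = model.load_state_dict(state_dict, False)  # instead of strict=False
```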
transformers
18,510
closed
Tqdm not working with question-answering pipeline
### System Info Running inside a notebook on Google Colab - `transformers` version: 4.21.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Narsil, @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the code below:

```python
from tqdm.auto import tqdm
from datasets import load_dataset  # missing from the original snippet
from transformers import pipeline

dataset = load_dataset('cuad', split='test')
dataset = dataset.remove_columns(['id', 'title', 'answers'])
nlp = pipeline("question-answering", device=0, batch_size=64)
for answer in tqdm(nlp(dataset)):
    print(answer)
```

### Expected behavior Expected to see a progress bar with the time remaining and the it/s, but nothing is displayed.
08-07-2022 06:34:55
08-07-2022 06:34:55
Hi there,

Try this:

```python
from tqdm.auto import tqdm
from transformers import pipeline
from datasets import load_dataset

dataset = load_dataset('cuad', split='test')
dataset = dataset.remove_columns(['id', 'title', 'answers'])
batch_size = 16
nlp = pipeline("question-answering", device=0)

results = []
for i in tqdm(range(0, len(dataset), batch_size)):
    results.extend(
        nlp(
            context=dataset[i:i+batch_size]["context"],
            question=dataset[i:i+batch_size]["question"]
        )
    )
```<|||||>@nbroad1881 Thanks a bunch that worked! <|||||>@Alex-apostolo , The answer from @nbroad1881 will work, however it will not batch anything because the pipeline was not set with batching.

```python
from tqdm.auto import tqdm
from transformers import pipeline
from datasets import load_dataset

dataset = load_dataset('cuad', split='test')
dataset = dataset.remove_columns(['id', 'title', 'answers'])
batch_size = 16
nlp = pipeline("question-answering", device=0, batch_size=batch_size)  # <--- small change

results = []
for i in tqdm(range(0, len(dataset), batch_size)):
    results.extend(
        nlp(
            context=dataset[i:i+batch_size]["context"],
            question=dataset[i:i+batch_size]["question"]
        )
    )
```

This should work more as you intend. Keep in mind that batching will occur on chunks of text, not on the entire question/context. That's a feature since you have more control over the memory + sequence_length of what the model sees. So while you are sending 16 question+context pairs at a time you might get any number of forward calls depending on the chunking of those pairs (only 1 forward call if the pair is small enough). `max_seq_len`, `doc_stride` and `max_question_len` might have to be adjusted for your dataset+model pair. (There are defaults used for squad, but it might impact the actual score for your use case.)

Actually the problem is not really the pipeline; in general it should work with `tqdm`. The problem is the legacy support for many args that's actually looking at the whole dataset to create `SquadExample` out of it. I am going to look at solutions for this, since consuming the entire dataset before feeding it to the pipeline (+ consuming memory) is not really intended.

```python
from tqdm.auto import tqdm
from transformers import pipeline
from datasets import load_dataset

dataset = load_dataset("cuad", split="test")
dataset = dataset.remove_columns(["id", "title", "answers"])
pipe = pipeline("question-answering", device=0, batch_size=1, framework="pt")

def data(dataset):
    for item in dataset:
        yield {"question": item["question"], "context": item["context"]}

results = []
for out in tqdm(pipe(data(dataset)), total=len(dataset)):
    pass  # print(out)
```

Here is an example that should be working + desirable (it actually works, but is *not* an iterator like intended).
transformers
18,509
closed
How to use run_glue.py with tensorboard?
### System Info I'm writing a folder path in --logging_dir but nothing is written there. I've tried with --logging_dir foldername --logging_dir pathtofolder but nothing works @LysandreJik @sgugger ### Who can help? @sgugger @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction --logging_dir foldername --logging_dir pathtofolder --logging_strategy steps \ --logging_first_step True \ --logging_steps 5 \ ### Expected behavior Nothing is saved in the folder
08-07-2022 02:36:38
08-07-2022 02:36:38
Try using `--report_to tensorboard`
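For anyone landing here, a minimal sketch of how those logging flags map onto `TrainingArguments` when driving training programmatically (directory names below are placeholders):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="test-glue",            # placeholder output directory
    report_to=["tensorboard"],         # without this, no TensorBoard event files are written
    logging_dir="test-glue/tb-logs",   # placeholder; where the event files end up
    logging_strategy="steps",
    logging_first_step=True,
    logging_steps=5,
)
print(args.report_to, args.logging_dir)
```

Pointing `tensorboard --logdir` at the `logging_dir` path should then show the logged curves.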
transformers
18,508
closed
Small typo in docs/README.md
### System Info Working on main (commit 9129fd0377e4d46cb2d0ea28dc1eb91a15f65b77). The suggested command: ``` doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build ``` fails with: ``` FileNotFoundError: [Errno 2] No such file or directory: 'docs/source/_toctree.yml' ``` I think is because you have to specify a language (e.g. `en`) while building the docs, e.g. ``` doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build ``` I'm happy to contribute a fix. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following command: ``` doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build ``` ### Expected behavior The command should _not_ error, and instead should generate the expected MDX files.
08-06-2022 23:09:21
08-06-2022 23:09:21
Hey @ankrgyl! It seems that is correct! Would you like to open a PR to fix the problem? Happy to guide you through it if you need pointers; please ping @sgugger and myself on the PR. Thanks!<|||||>Hi, `CONTRIBUTING.md` also requires a language such as "en" to be specified in the doc-builder. @ankrgyl @LysandreJik <|||||>> Hi, `CONTRIBUTING.md` also requires a language such as "en" to be specified in the doc-builder. @ankrgyl @LysandreJik This typo has been fixed, sorry I missed the latest version of `CONTRIBUTING.md`.
transformers
18,507
closed
https://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/models/funnel/modeling_funnel.py#L1004
https://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/models/funnel/modeling_funnel.py#L1004
08-06-2022 18:17:57
08-06-2022 18:17:57
This does not look like an issue :) Closing. Feel free to reopen with the actual question.
transformers
18,506
closed
bert-large-uncased gives `(1024) must match the size of tensor b (512) at non-singleton dimension 1` error
### System Info Python : python3.6 "transformers_version": "4.18.0" ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to use the bert-large-uncased for long sequence ending, but it's giving the error: Code: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) It's giving the following error : ~/.local/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 218 if self.position_embedding_type == "absolute": 219 position_embeddings = self.position_embeddings(position_ids) --> 220 embeddings += position_embeddings 221 embeddings = self.LayerNorm(embeddings) 222 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1 I also tried to change the default size of the positional embedding: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") model.config.max_position_embeddings = 1024 text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) But still the error is persistent, How to use large model for 1024 length sequences? ### Expected behavior Expecting the output of 1024 given the sequence length of 1024
08-06-2022 11:54:38
08-06-2022 11:54:38
Hi @monk1337 The loaded model has a maximum sequence length of 512 tokens. If you use: `model = BertModel.from_pretrained("bert-large-uncased", max_position_embeddings=1024)` the model won't be loaded because the loaded checkpoint also relies on 512 tokens (wrong tensor size). If you set `model.config.max_position_embeddings = 1024` after loading, this has no effect because the model is already loaded with 512 tokens. Some models have a `model.resize_position_embeddings(1024)` method (e.g. Pegasus), but that is not the case for BERT. You have to:
* load the model
* set `model.config.max_position_embeddings = 1024`
* manually resize both the `model.embeddings.position_ids` and `model.embeddings.position_embeddings.weight.data` tensors

Note that the way you resize `model.embeddings.position_embeddings.weight.data` can have a significant effect on the quality of predictions, as you add new untrained parameters and vanilla attention has poor extrapolation capabilities. If you don't mind switching to an efficient attention mechanism, you can use my [repo](https://github.com/ccdv-ai/convert_checkpoint_to_lsg) to convert your model and process long sequences while preserving the quality of its predictions.<|||||>@ccdv-ai That's helpful; I was checking the repo; excellent work! It would be great if you could provide a simple working classification example on Colab.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
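For readers who want to try the manual resize outlined above, here is a minimal sketch. The duplicate-copy initialisation of the extra rows is an assumption (smarter schemes are possible), and the new positions remain untrained:

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-large-uncased")
old_len = model.config.max_position_embeddings  # 512
new_len = 1024

old_emb = model.embeddings.position_embeddings
new_emb = torch.nn.Embedding(new_len, old_emb.embedding_dim)
with torch.no_grad():
    new_emb.weight[:old_len] = old_emb.weight   # keep the trained positions
    new_emb.weight[old_len:] = old_emb.weight   # naive init for the new positions

model.embeddings.position_embeddings = new_emb
model.embeddings.position_ids = torch.arange(new_len).expand((1, -1))
model.config.max_position_embeddings = new_len
```

Depending on the transformers version, other buffers (e.g. `token_type_ids`) may also need resizing, and prediction quality on the added positions is not guaranteed.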
transformers
18,505
closed
Suppress `reset_parameters` of `torch.nn.Linear,Conv2d...` inside `no_init_weights`
### Feature request

`torch.nn.Linear,Conv2d...` will call `self.reset_parameters()` inside their `__init__`. I'd like to make `reset_parameters` be a no-op inside `no_init_weights` context manager.

### Motivation

`no_init_weights` is used in `from_pretrained` to speed up loading large models. However, torch-built-in modules like `torch.nn.Linear` are heavily used in models of `transformers`, while its weights initialization cannot be disabled by `no_init_weights`. And in the doc string of `no_init_weights`, it should "globally disable weight initialization".

### Your contribution

possible implementation

```python
class SupportsResetParameters(Protocol):
    def reset_parameters(self): ...


@contextmanager
def no_init(module_classes: Iterable[Type[SupportsResetParameters]]):
    saved = {m: vars(m).get('reset_parameters') for m in module_classes}

    def no_op(_):
        pass

    for m in saved:
        m.reset_parameters = no_op  # Iterable can only be safely iterated through once
    try:
        yield
    finally:
        for m, init in saved.items():
            del m.reset_parameters
            if init is not None:
                m.reset_parameters = init


TORCH_BUILT_IN_MODULES = [nn.Linear, nn.Conv2d, ...]


@contextmanager
def no_init_weights():
    """
    Context manager to globally disable weight initialization to speed up loading large models.
    """
    global _init_weights
    saved = _init_weights
    _init_weights = False
    try:
        with no_init(TORCH_BUILT_IN_MODULES):
            yield
    finally:
        _init_weights = saved
```
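As an illustration of the idea (not part of the original proposal), a self-contained sketch that patches `reset_parameters` on `nn.Linear` only:

```python
import time
from contextlib import contextmanager

import torch.nn as nn


@contextmanager
def skip_reset_parameters(*module_classes):
    # Temporarily replace reset_parameters with a no-op on the given classes.
    saved = {cls: cls.reset_parameters for cls in module_classes}
    try:
        for cls in module_classes:
            cls.reset_parameters = lambda self: None
        yield
    finally:
        for cls, original in saved.items():
            cls.reset_parameters = original


start = time.perf_counter()
with skip_reset_parameters(nn.Linear):
    layers = [nn.Linear(4096, 4096) for _ in range(8)]  # weights stay uninitialized
print(f"built {len(layers)} layers without init in {time.perf_counter() - start:.3f}s")
```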
08-06-2022 11:29:23
08-06-2022 11:29:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,504
closed
Restore _init_weights value in no_init_weights
Guard against potential nested use, and match the intuitive expectation for context managers. In addition, users might modify the private `no_init_weights` as well. @patrickvonplaten @stas00
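For illustration, a minimal standalone sketch of the save-and-restore pattern this PR applies (names simplified):

```python
from contextlib import contextmanager

_init_weights = True


@contextmanager
def no_init_weights():
    global _init_weights
    previous = _init_weights   # remember whatever the value was before entering...
    _init_weights = False
    try:
        yield
    finally:
        _init_weights = previous  # ...and restore it, so nested use behaves as expected


with no_init_weights():
    with no_init_weights():
        assert _init_weights is False
assert _init_weights is True  # restored, even after nesting
```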
08-06-2022 11:14:49
08-06-2022 11:14:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger has also been contributing extensively to that part of the code and might have something to say!<|||||>Thanks a lot for iterating on this!
transformers
18,503
open
Add Mask2Former
### Model description Mask2Former is a single architecture for panoptic, instance and semantic segmentation. **Mask2Former Paper Abstract**: Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2112.01527 Github repo (and weights): https://github.com/facebookresearch/Mask2Former
08-06-2022 10:27:50
08-06-2022 10:27:50
@NielsRogge I'd like to work on adding this model if no one is working on it yet?<|||||>cc'ing @alaradirik, yes we're planning to add this model. If you're interested in it, feel free to get started with a draft PR. Note that we already have [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer) implemented, and I've heard Mask2Former only adds minor modifications. Could you give me your email address, such that we can add you on Slack for easier communication?<|||||>Thanks @NielsRogge that would be great! You can use this email ([email protected]) to add me on Slack! I'll get started on a draft PR. But, I may need some guidance as this is my first time contributing to transformers. I'll get started by understanding the [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer) implementation.<|||||>Hi @NielsRogge just a gentle reminder to add me on slack :)<|||||>Hi, I've pinged someone to add you.<|||||>Hello. Any updates about the Mask2Former integration? Thanks<|||||>Hi @ArthurOuaknine I'm working on it. Currently there is an open PR on my transformers fork. Will try to close this in next couple of days. <|||||>Thanks for your work, it will definitely help :)
transformers
18,502
closed
The documentation of LongT5 conflicts with its example code regarding the prefix
### System Info All. ### Who can help? @patrickvonplaten ### Reproduction See https://huggingface.co/docs/transformers/main/en/model_doc/longt5 ### Expected behavior In the above document, it says `Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective inspired by the pre-training of [PegasusForConditionalGeneration].`. But in the example code of `LongT5ForConditionalGeneration`, there is a prefix of `summarize: `. I am confused about how to use LongT5 for different downstream tasks. Could you please help? Thanks.
08-06-2022 04:00:49
08-06-2022 04:00:49
@stancld @patil-suraj Could you please help to solve this issue and tell me how to set up and use specific downstream tasks with LongT5? Thanks.<|||||>Hi @GabrielLin, with the LongT5 model no prefix should be added to the input sentence. The doc example seems not to be accurate.<|||||>Hi @stancld. Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different downstream tasks and help to fix the example code? I have no idea.<|||||>> Hi @stancld. Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different downstream tasks and help to fix the example code? I have no idea. Hi, the example should have already been fixed by @patrickvonplaten. Fine-tuning on different downstream tasks should be pretty standard. There's no prefix, so you can use the same techniques as for models like BART, GPT-2, etc. :] However, the final performance is questionable as, AFAIK, only summarization and Q&A have been investigated so far.<|||||>> > Hi @stancld. Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different downstream tasks and help to fix the example code? I have no idea. > > Hi, the example should have already been fixed by @patrickvonplaten. Fine-tuning on different downstream tasks should be pretty standard. There's no prefix, so you can use the same techniques as for models like BART, GPT-2, etc. :] However, the final performance is questionable as, AFAIK, only summarization and Q&A have been investigated so far. Thank you @stancld. Thank you @patrickvonplaten. I have one more question. With a prefix, different downstream tasks can be fine-tuned into the same model. Now, without the prefix, should we use a separate model for each downstream task? Thanks.<|||||>Hey @GabrielLin That depends on how different the use cases are and what your limitations are exactly. In general, I'd say yes, you should use different fine-tuned models for different tasks<|||||>@patrickvonplaten Got it. Thanks. This issue has been fixed and closed.
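For reference, a minimal sketch of summarization with LongT5 and no task prefix (the checkpoint name is just an example):

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

document = "A very long article to summarize ..."  # note: no "summarize: " prefix is prepended
inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```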
transformers
18,501
closed
Wav2Vec 2.0 model output logits related audio pad?
### System Info ubuntu 18.04 python 3.6, 3.9 transformers 1.18.0 ### Who can help? @patrickvonplaten, @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. The dev and test datasets are shuffled. 2. The eval/predict loop does not use smart batching for the model input (group_by_length is not supported there). 3. The WER computed with a large batch size is then higher than with a small batch size. 4. If I sort the dev and test datasets by audio length, compute_metric is faster and the WER is not affected by the batch size. ### Expected behavior This is my real test case. Sorted case (batch: metric_result) 8: {'test_wer': 0.2266084739113378, 'test_cer': 0.08425300357677845} 4: {'test_wer': 0.22646135739505688, 'test_cer': 0.08419186206474887} 2: {'test_wer': 0.2264123185562966, 'test_cer': 0.08417657668674146} 1: {'test_wer': 0.22646135739505688, 'test_cer': 0.08419186206474887} Unsorted case 8: 35% 4: not tested 2: 25% 1: {'eval_wer': 0.22646135739505688, 'eval_cer': 0.08419186206474887} Maybe the CNN layers or group normalization are affected by the padded data...? (Both configs raised this issue.) When training, the input uses group_by_length=True, so training is fine I think. But the eval/test sampler is just a sequential sampler, so the eval/predict WER results are somewhat weird.
08-06-2022 00:18:03
08-06-2022 00:18:03
cc @sanchit-gandhi as well<|||||>@patrickvonplaten plz help!<|||||>Hey @YooSungHyun! I too have experienced differences in eval WER results by changing my padding strategy. In this case, I changed how I bucketed my inputs from bins of 2s to 1.5s, and got a 0.5% WER improvement when training on LibriSpeech 100h and evaluating on validation.clean. It looks like your example is much more severe! Theoretically speaking, padding should not impact the training or evaluation results: the attention mask ensures that padded inputs/labels are not attended to and sets them to a large negative number in the attention scores, so group norm and self-attention operations should be unaffected. However, practically there might be small differences due to numerical precision, especially if the amount of padding is excessive. If padding is having such a large effect on your evaluation results, it might be worthwhile injecting some custom behaviour into the `Trainer`. What you can do is override the `_get_eval_sampler` method to return the `LengthGroupedSampler` instead of the sequential sampler: ```python from typing import Optional import datasets import torch from datasets import Dataset from torch.utils.data import SequentialSampler from transformers import Trainer, is_datasets_available from transformers.trainer_pt_utils import LengthGroupedSampler from packaging import version class CustomTrainer(Trainer): def _get_eval_sampler(self, eval_dataset: Dataset) -> Optional[torch.utils.data.Sampler]: if self.args.group_by_length: # Build the sampler. Adapted from _get_train_sampler generator = None if version.parse(torch.__version__) >= version.parse("1.6"): generator = torch.Generator() generator.manual_seed(self.args.data_seed) if is_datasets_available() and isinstance(self.eval_dataset, datasets.Dataset): lengths = ( eval_dataset[self.args.length_column_name] if self.args.length_column_name in self.eval_dataset.column_names else None ) else: lengths = None model_input_name = self.tokenizer.model_input_names[0] if self.tokenizer is not None else None return LengthGroupedSampler( self.args.eval_batch_size, dataset=eval_dataset, lengths=lengths, model_input_name=model_input_name, generator=generator, ) else: return SequentialSampler(eval_dataset) trainer = CustomTrainer(model=model, ...) ``` Let me know how you get on<|||||>hi, bro! @sanchit-gandhi ! Lol! you make custom trainer???? 😂 Awesome!! But, I have another very easy way solution....kkk!!! eval and predict loop used SequentialSampler right? so! i only sorted my datasets. look like this, <img width="560" alt="image" src="https://user-images.githubusercontent.com/34292279/184590907-609f5227-783a-4679-9d9b-fd87658cce1c.png"> If training, group_by_length working. don't sort! If eval & predict, group_by_length not working, so sorting and input SequentialSampler -> it works looks like LengthGroupedSampler So, i don`t have to override anymore!! 😎 and, anyway, i think that problem is caused layer normalization. not attention. attention is innocent! wav2vec 2.0 pre-training have to select group or layer norm. and i debugging already it. using pad & not using pad(batch 1)'s normalize output is different and in case of very long sequence text and very short text (2 batchs), short text's attention output(context vector) is looks like all pad so, model predict empty text ''. so WER metric is high. that is problem🦄<|||||>Hey @YooSungHyun! Nice, the `.sort()` trick you used is neat! 
As you said, this is fine for the dev/test datasets where we don't require shuffling, and so a deterministic sorting strategy is entirely valid. There is indeed strange behaviour in the original Wav2Vec2 base checkpoint caused by a bug in the computation of the layer-norm layers: https://github.com/huggingface/transformers/blob/84beb8a49bf137a88d1b29ab3a85ba0a3cd097d5/src/transformers/models/wav2vec2/configuration_wav2vec2.py#L98 This was copied one-to-one from the original fairseq implementation! You could try using a checkpoint that uses the 'stable' layer-norm implementation, i.e. one of the large checkpoints: https://huggingface.co/facebook/wav2vec2-large-lv60/blob/main/config.json#L42<|||||>Thanks @sanchit-gandhi, I already use `do_stable_layer_norm` and the problem is raised there too, so I have to sort the eval/test set...😂 Also, wav2vec2-conformer does not support that param! Do you agree the padding issue comes from the layer norm?<|||||>Sure, if you're using Wav2Vec2Conformer then the only configuration is the correct layer-norm implementation. It's hard to know where the issue lies without a reproducible example, could you maybe provide a short code-snippet that I could run to see how you're padding the data? Thanks!<|||||>@sanchit-gandhi Thanks for the reply. I checked wav2vec2-conformer and it already behaves like do_stable_layer_norm. In my case, I just pretrained a base model and fine-tuned it with Wav2Vec2ForCTC (do_stable_layer_norm is True, group_by_length is True). When I run the predict loop (for model evaluation/testing): first case: eval set shuffled, eval batch size 2, WER is high; second case: eval set sorted, eval batch size 2, WER is lower than the first case; third case: eval set sorted, eval batch size 1, WER is the lowest; fourth case: eval set sorted, eval batch size 1, WER is the same as the third case. So I think batching and shuffling affect the WER, which is why I believe 'padded data affects layer normalization'. do_stable_layer_norm does not help in this situation, I think. The source I used is https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py but the dataset is Korean audio and Korean text.<|||||>Okay interesting - could you check the losses for the four cases - are they the same or do they differ? If they are the same it's a tokenizer issue with padding. Otherwise likely a modelling issue!<|||||>@sanchit-gandhi I'm very busy now, so I will reply to this comment as soon as possible!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
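For anyone landing on this thread, a toy sketch of the sorting workaround discussed above (the column name and the commented-out `trainer` call are assumptions based on `run_speech_recognition_ctc.py`):

```python
from datasets import Dataset

# Stand-in for an ASR eval set: sorting by audio length means the default
# SequentialSampler used by evaluate/predict sees length-grouped batches,
# which minimises per-batch padding.
eval_dataset = Dataset.from_dict(
    {"input_length": [48000, 8000, 160000, 16000], "text": ["a", "b", "c", "d"]}
)
eval_dataset = eval_dataset.sort("input_length")
print(eval_dataset["input_length"])  # [8000, 16000, 48000, 160000]
# predictions = trainer.predict(eval_dataset)  # hypothetical Trainer from the CTC example
```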
transformers
18,500
closed
Update to use interlibrary links instead of Markdown
This PR updates Markdown links to other HF libraries with the doc-builder's interlibrary links instead.
08-05-2022 21:32:12
08-05-2022 21:32:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,499
closed
Update feature extractor methods to enable type cast before normalize
# What does this PR do? At the moment, the return type of our feature extractors isn't always as expected or sometimes fails if a `do_xxx` config flag is set to `False`. This PR introduces the necessary changes to the `ImageFeatureExtractionMixin` methods such that we can modify the feature extractor calls to fix this. This is an alternative solution to setting `return_tensors="np"` as default. Each vision model using `ImageFeatureExtractionMixin` has a separate PR adding their necessary modifications and tests. - [ ] [beit](https://github.com/amyeroberts/transformers/pull/12) - [ ] [clip](https://github.com/amyeroberts/transformers/pull/22) - [ ] [convnext](https://github.com/amyeroberts/transformers/pull/13) - [ ] [deit](https://github.com/amyeroberts/transformers/pull/14) - [ ] [detr](https://github.com/amyeroberts/transformers/pull/1) - [ ] [dpt](https://github.com/amyeroberts/transformers/pull/15) - [ ] [flava](https://github.com/amyeroberts/transformers/pull/17) - [ ] [glpn](https://github.com/amyeroberts/transformers/pull/18) - [ ] [imagegpt](https://github.com/amyeroberts/transformers/pull/2) - [ ] [layoutlmv2](https://github.com/amyeroberts/transformers/pull/19) - [ ] [layoutlmv3](https://github.com/amyeroberts/transformers/pull/20) - [ ] [levit](https://github.com/amyeroberts/transformers/pull/3) - [ ] [maskformer](https://github.com/amyeroberts/transformers/pull/4) - [ ] [mobilevit](https://github.com/amyeroberts/transformers/pull/21) - [ ] [owlvit](https://github.com/amyeroberts/transformers/pull/5) - [ ] [perceiver](https://github.com/amyeroberts/transformers/pull/6) - [ ] [poolformer](https://github.com/amyeroberts/transformers/pull/7) - [ ] [segformer](https://github.com/amyeroberts/transformers/pull/8) - [ ] [vilt](https://github.com/amyeroberts/transformers/pull/10) - [ ] [vit](https://github.com/amyeroberts/transformers/pull/16) - [ ] [yolos](https://github.com/amyeroberts/transformers/pull/11) - [ ] [videomae](https://github.com/amyeroberts/transformers/pull/9) ## Details At the moment, if `do_normalize=False`, `do_resize=True` and `return_tensors=None` then the output tensors will be a list of `PIL.Image.Image` objects if even if the inputs are numpy arrays. If `do_normalize=False` and `return_tensors` is specified (`"pt"`, `"np"`, `"tf"`, `"jax"`) an exception is raised. The main reasons for this are: * `BatchFeature` can't convert `PIL.Image.Image` to the requested tensors. * The necessary conversion of `PIL.Image.Image` -> `np.ndarray` happens within the `normalize` method and the output of `resize` is `PIL.Image.Image`. In order to have the type of the returned `pixel_values` reflect `return_tensors` we need to: * Convert `PIL.Image.Image` objects to numpy arrays before passing to `BatchFeature` * Be able to optionally rescale the inputs in the `normalize` method. If the input to `normalize` is a `PIL.Image.Image` it is converted to a numpy array using `to_numpy_array` which rescales to between [0, 1]. If `do_resize=False` then this rescaling won't happen if the inputs are numpy arrays. The optional flags enable us to preserve the same default behaviour for the `resize` and `normalize` methods whilst modifying the internal logic of the feature extractor call. 
## Checks The model PRs are all cherry picked (file diffs) of `type-cast-before-normalize` The following was run to check the outputs: ``` from dataclasses import dataclass import requests import numpy as np from PIL import Image import pygit2 from transformers import AutoFeatureExtractor @dataclass class FeatureExtractorConfig: model_name: str checkpoint: str return_type: str = "np" feat_name: str = "pixel_values" IMAGE_FEATURE_EXTRACTOR_CONFIGS = [ FeatureExtractorConfig(model_name="clip", checkpoint="openai/clip-vit-base-patch32"), FeatureExtractorConfig(model_name="convnext", checkpoint="facebook/convnext-tiny-224"), FeatureExtractorConfig(model_name="deit", checkpoint="facebook/deit-base-distilled-patch16-224"), FeatureExtractorConfig(model_name="detr", checkpoint="facebook/detr-resnet-50"), FeatureExtractorConfig(model_name="dpt", checkpoint="Intel/dpt-large"), FeatureExtractorConfig(model_name="flava", checkpoint="facebook/flava-full"), FeatureExtractorConfig(model_name="glpn", checkpoint="vinvino02/glpn-kitti"), FeatureExtractorConfig(model_name="imagegpt", checkpoint="openai/imagegpt-small", feat_name='input_ids'), FeatureExtractorConfig(model_name="layoutlmv2", checkpoint="microsoft/layoutlmv2-base-uncased"), FeatureExtractorConfig(model_name="layoutlmv3", checkpoint="microsoft/layoutlmv3-base"), FeatureExtractorConfig(model_name="levit", checkpoint="facebook/levit-128S"), FeatureExtractorConfig(model_name="maskformer", checkpoint="facebook/maskformer-swin-base-ade", return_type="pt"), FeatureExtractorConfig(model_name="mobilevit", checkpoint="apple/mobilevit-small"), FeatureExtractorConfig(model_name="owlvit", checkpoint="google/owlvit-base-patch32"), FeatureExtractorConfig(model_name="perceiver", checkpoint="deepmind/vision-perceiver-fourier"), FeatureExtractorConfig(model_name="poolformer", checkpoint="sail/poolformer_s12"), FeatureExtractorConfig(model_name="segformer", checkpoint="nvidia/mit-b0"), FeatureExtractorConfig(model_name="vilt", checkpoint="dandelin/vilt-b32-mlm"), FeatureExtractorConfig(model_name="vit", checkpoint="google/vit-base-patch16-224-in21k"), FeatureExtractorConfig(model_name="yolos", checkpoint="hustvl/yolos-small"), ] VIDEO_FEATURE_EXTRACTOR_CONFIGS = [ FeatureExtractorConfig(model_name="videomae", checkpoint="MCG-NJU/videomae-base"), ] url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) def produce_pixel_value_outputs(): BRANCH = pygit2.Repository('.').head.shorthand def get_processed_outputs(inputs, model_checkpoint, feat_name): feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint) outputs = feature_extractor(inputs, return_tensors=fe_config.return_type)[feat_name] return outputs for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS: print(fe_config.model_name, fe_config.checkpoint) outputs = get_processed_outputs(image, fe_config.checkpoint, fe_config.feat_name) np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs) for fe_config in VIDEO_FEATURE_EXTRACTOR_CONFIGS: print(fe_config.model_name, fe_config.checkpoint) outputs = get_processed_outputs([[image, image]], fe_config.checkpoint, fe_config.feat_name) np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs) branch_main = "main" branch_feature = "type-cast-before-normalize" repo = pygit2.Repository('.git') print("\nChecking out main") branch = repo.lookup_branch('main') ref = repo.lookup_reference(branch.name) repo.checkout(ref) 
produce_pixel_value_outputs() print("\nChecking out type-cast-before-normalize") branch = repo.lookup_branch('type-cast-before-normalize') ref = repo.lookup_reference(branch.name) repo.checkout(ref) produce_pixel_value_outputs() for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS + VIDEO_FEATURE_EXTRACTOR_CONFIGS: model_name = fe_config.model_name try: output_1 = np.load(f"{model_name}_{branch_main}_pixel_values.npy") output_2 = np.load(f"{model_name}_{branch_feature.replace('-', '_')}_pixel_values.npy") max_diff = np.amax(np.abs(output_1 - output_2)) print(f"{model_name}: {max_diff:.5f}") except Exception as e: print(f"{model_name} failed check with {e}") ``` Output: ``` clip: 0.00000 convnext: 0.00000 deit: 0.00000 detr: 0.00000 dpt: 0.00000 flava: 0.00000 glpn: 0.00000 imagegpt: 0.00000 layoutlmv2: 0.00000 layoutlmv3: 0.00000 levit: 0.00000 maskformer: 0.00000 mobilevit: 0.00000 owlvit: 0.00000 perceiver: 0.00000 poolformer: 0.00000 segformer: 0.00000 vilt: 0.00000 vit: 0.00000 yolos: 0.00000 videomae: 0.00000 ``` ## Fixes https://github.com/huggingface/transformers/issues/17714 https://github.com/huggingface/transformers/issues/15055 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? (in model PRs)
08-05-2022 21:18:13
08-05-2022 21:18:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Looks good to me! If the changes per model are small enough, it would probably be best to change them all in the same PR, rather than doing individual ones. @sgugger Yep, I completely agree. The changes all together aren't that small, but almost exactly the same across models. Once this is merged in, I'll open a PR for the VideoMAE refactor (https://github.com/amyeroberts/transformers/pull/9/files) as this covers all the changes. Once approved, I'll merge in the other models to the branch, as for re-review of the total PR and then merge all together.
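For context, a small sketch of the behaviour these changes target (the checkpoint name is just an example): with normalisation disabled, the returned `pixel_values` should still honour `return_tensors` instead of staying as `PIL` images or raising.

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained(
    "google/vit-base-patch16-224-in21k", do_normalize=False
)
image = np.random.randint(0, 256, (3, 64, 64), dtype=np.uint8)

outputs = feature_extractor(image, return_tensors="np")
print(type(outputs["pixel_values"]), outputs["pixel_values"].shape)
```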
transformers
18,498
closed
Add example of multimodal usage to pipeline tutorial
As suggested by @NielsRogge, this PR adds a small example of how to use a multimodal pipeline (VQA) in the pipeline tutorial. 🙂
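For context, a minimal sketch of the kind of multimodal usage the added example covers (checkpoint and image path are placeholders):

```python
from transformers import pipeline

vqa = pipeline(task="visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
preds = vqa(image="path/to/image.jpg", question="What is in the picture?")
print(preds)  # e.g. a list of {"answer": ..., "score": ...} dicts
```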
08-05-2022 20:51:31
08-05-2022 20:51:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,497
closed
Clean up hub
# What does this PR do? This PR removes all uses of the old utils in Transformers to migrate to the new version using `cached_file`, and remove said utils from the Transformers library. It is slightly breaking since we remove objects (in particular `cached_path` is in the main init albeit not documented), but those are all internal tools, so it's okay in my opinion. Only the research examples are left as before.
08-05-2022 20:24:20
08-05-2022 20:24:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>I still had an error mentioning `ImportError: cannot import name 'cached_path' from 'transformers.utils' ` @ `tf.__version__ == '2.11.0'`. Is this related? What should i do?<|||||>You should stop using that function, as it has been removed from `transformers.utils` in the most recent versions.
transformers
18,496
closed
TF to ONNX export fails with large models
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-187-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run `python -m transformers.onnx --model=gpt2-large --framework=tf onnx/` See error like below: ``` Traceback (most recent call last): File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 221, in from_trackable frozen_graph = from_function(concrete_func, inputs, outputs, large_model) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 280, in from_function raise e File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 273, in from_function frozen_func = convert_variables_to_constants_v2(func, lower_control_flow=False, aggressive_inlining=True) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1156, in convert_variables_to_constants_v2 converted_input_indices) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1082, in _construct_concrete_function new_output_names) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 660, in function_from_graph_def wrapped_import = wrap_function(_imports_graph_def, []) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 631, in wrap_function collections={}), File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 1143, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 87, in __call__ return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 93, in wrapped return fn(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 654, in _imports_graph_def importer.import_graph_def(graph_def, name="") File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 552, in new_func return func(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 412, in import_graph_def producer_op_list=producer_op_list) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal with c_api_util.tf_buffer(graph_def.SerializeToString()) as serialized: ValueError: Message 
tensorflow.GraphDef exceeds maximum protobuf size of 2GB: 3096993336 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/craig/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/craig/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 94, in main args.output, File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/convert.py", line 338, in export return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/convert.py", line 265, in export_tensorflow onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/convert.py", line 493, in from_keras tf_loader.from_trackable(model, concrete_func, input_names, output_names, large_model) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 224, in from_trackable raise ValueError(err_large_model) ValueError: model exceeds maximum protobuf size of 2GB. Try setting large_model. ``` ### Expected behavior Export should still be successful for large TF models. `tf2onnx` expects `large_model` to be passed in should the protobuf exceed 2 GB. Not sure if `tf2onnx` behavior will be changed, but maybe `transformers` can account for this before using `tf2onnx`?
08-05-2022 18:20:41
08-05-2022 18:20:41
cc @JingyaHuang @michaelbenayoun <|||||>If there are no onnx-level solutions, it may be due to TF1 code (embeddings) in our models -- see https://github.com/tensorflow/tensorflow/issues/45041 Rewriting embeddings into TF2 code is in our to do list, which may fix this issue.<|||||>TF2ONNX offers the [support for exporting large ONNX](https://github.com/onnx/tensorflow-onnx/blob/v1.12.1/tf2onnx/convert.py#L427) tensors with external files, however by adding the flag to the ONNX exporter of transformers, it doesn't work correctly for the moment: ``` File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/transformers/onnx/convert.py", line 338, in export return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer) File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/transformers/onnx/convert.py", line 265, in export_tensorflow onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset, large_model=True) File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/convert.py", line 495, in from_keras model_proto, external_tensor_storage = _convert_common( File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/convert.py", line 165, in _convert_common g = process_tf_graph(tf_graph, const_node_values=const_node_values, File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values, File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 499, in graphs_from_tf utils.check_io(input_names, output_names, output_shapes.keys()) File "/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/utils.py", line 316, in check_io raise ValueError("Inputs/Outputs Not Found") ValueError: Inputs/Outputs Not Found ``` Further investigation needs to be done from the TensorFlow side. And I will be happy to help with a PR to enable this in transformers' onnx tf exporter once we are sure that the large proto export features work correctly.<|||||>> If there are no onnx-level solutions, it may be due to TF1 code (embeddings) in our models -- see [tensorflow/tensorflow#45041](https://github.com/tensorflow/tensorflow/issues/45041) > > Rewriting embeddings into TF2 code is in our to do list, which may fix this issue. Didn't know that, ok, it seems that it is not just a problem from the limit of protobuf size then.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
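For reference, this is roughly the tf2onnx interface the `large_model` flag belongs to (a sketch only; as noted above, wiring it through the transformers exporter currently fails with `Inputs/Outputs Not Found` for these graphs):

```python
import tensorflow as tf
import tf2onnx
from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")  # smaller stand-in for gpt2-large
input_signature = [
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),
    tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
]
# large_model=True makes tf2onnx write the weights as external data (a zip of the
# ONNX graph plus tensors), which is what the 2GB protobuf error message hints at.
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature, opset=13, large_model=True, output_path="model.onnx"
)
```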
transformers
18,495
closed
TF to ONNX export fails with CLI using example from docs
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-187-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Save a TF transformers model (from example at https://huggingface.co/docs/transformers/serialization) ``` from transformers import AutoTokenizer, TFAutoModelForSequenceClassification # Load tokenizer and TensorFlow weights from the Hub tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") # Save to disk tokenizer.save_pretrained("local-tf-checkpoint") tf_model.save_pretrained("local-tf-checkpoint") ``` 2. Use CLI to export to ONNX to see failure: `python -m transformers.onnx --model=local-tf-checkpoint onnx/` 3. Use `--framework` to use successfully: `python -m transformers.onnx --model=local-tf-checkpoint --framework=tf onnx/` ### Expected behavior Once the model directory has been provided, the export should know that a TF model is being used. There should be no dependency on PyTorch (there is also no PyTorch in this environment). Instead, I get this error: `RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found.` Either `transformers` should be updated or the docs at https://huggingface.co/docs/transformers/serialization should be updated to say that `--framework=tf` for TensorFlow models is required.
08-05-2022 17:25:28
08-05-2022 17:25:28
Hmmm that's interesting, indeed! The docs should be updated, but it would be nice to also support this out of the box. Would you like to try your hand at a PR? cc @lewtun @michaelbenayoun @JingyaHuang for knowledge<|||||>Sure, I can try making a PR for it! Will be doing so from my personal account, @rachthree.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Because PR https://github.com/huggingface/transformers/pull/18615 has been merged, I'm considering this closed.
transformers
18,494
closed
`pipeline` support for `device="mps"` (or any other string)
No tests yet given we don't officially support `torch==1.12` yet (and M1-based CI is still a work in progress)
08-05-2022 16:28:33
08-05-2022 16:28:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @julien-c . am getting a new error My guess this is on the pytorch op. > NotImplementedError: The operator 'aten::unique_consecutive' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS. Here is the traceback > preds = pipe('i am feeling awesome', ['positive', 'negative']) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py", line 182, in __call__ > return super().__call__(sequences, **kwargs) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1071, in __call__ > return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1093, in run_single > model_outputs = self.forward(model_inputs, **forward_params) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 987, in forward > model_outputs = self._forward(model_inputs, **forward_params) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py", line 201, in _forward > outputs = self.model(**model_inputs) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl > return forward_call(*input, **kwargs) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py", line 1516, in forward > if len(torch.unique_consecutive(eos_mask.sum(1))) > 1: > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/_jit_internal.py", line 447, in fn > return if_false(*args, **kwargs) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/_jit_internal.py", line 447, in fn > return if_false(*args, **kwargs) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/functional.py", line 913, in _consecutive_return_output > output, _, _ = _unique_consecutive_impl(input, return_inverse, return_counts, dim) > File "/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/functional.py", line 830, in _unique_consecutive_impl > output, inverse_indices, counts = _VF.unique_consecutive( # type: ignore[attr-defined]<|||||>Yes @AsaKal, please comment on https://github.com/pytorch/pytorch/issues/77764<|||||>⚠️ You might have to `os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"` because many operations are still not implemented also cc @pacman100 for visibility
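For anyone trying this out, a minimal sketch combining the new string device with the fallback variable mentioned above (the model is whatever the task defaults to, or pass your own):

```python
import os

# Set before torch initialises MPS so unsupported ops fall back to the CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

from transformers import pipeline

pipe = pipeline("text-classification", device="mps")
print(pipe("Runs on Apple Silicon, with CPU fallback for missing ops."))
```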
transformers
18,493
closed
Typo reported by Joel Grus on TWTR
null
08-05-2022 16:26:38
08-05-2022 16:26:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,492
closed
Move cache folder to huggingface/hub for consistency with hf_hub
# What does this PR do? This PR relocates the cache to just `~/.cache/huggingface/` when no environment variable has been set. Users having pulled between #18348 and now will need to move their cache manually.
08-05-2022 16:16:58
08-05-2022 16:16:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,491
closed
Update the original mapping in _LazyConfigMapping to fix AutoTokenizer registration
# What does this PR do? Currently when we want to register a new config+tokenizer+model, per [the instructions](https://huggingface.co/docs/transformers/model_doc/auto), it seems we should do the following: ``` from transformers import AutoConfig, AutoModel AutoConfig.register("new-model", NewModelConfig) AutoTokenizer.register(NewModelConfig, TokenizerSlow, TokenizerFast) AutoModel.register(NewModelConfig, NewModel) AutoTokenizer.from_pretrained("xxx") # <--- error `Unrecognized configuration class <xxx> to build an AutoTokenizer.` ``` However, there is one potential bug in the current AutoTokenizer registration code: - In https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/tokenization_auto.py#L605, `AutoTokenizer` will `config_class_to_model_type` to determine whether the corresponding config is registered in the input config. - The `config_class_to_model_type` function checks the `CONFIG_MAPPING_NAMES ` to find the newly register config class. https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L438 - However, according to https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L781 , after registering a config, the `CONFIG_MAPPING ` only updates the `_extra_content ` but not the original mapping or `CONFIG_MAPPING_NAMES` in this case https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L492 . That is to say, the `config_class_to_model_type` cannot find the newly registered config in this case, and will throw an error `Unrecognized configuration class <xxx> to build an AutoTokenizer.` A temporary local hot fix can be: ``` from transformers import AutoConfig, AutoModel from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES AutoConfig.register("new-model", NewModelConfig) CONFIG_MAPPING_NAMES["new-model"] = NewModelConfig.__name__ AutoTokenizer.register(NewModelConfig, TokenizerSlow, TokenizerFast) AutoModel.register(NewModelConfig, NewModel) ``` But thought it would be better to fix it upstream. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @n1t0, @LysandreJik
08-05-2022 16:13:59
08-05-2022 16:13:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18491). All of your documentation changes will be reflected on that endpoint.<|||||>cc @sgugger <|||||>Do you have a full example of the error you are reporting I could run? I am unable to reproduce it. Something like the [test](https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/tests/models/auto/test_tokenization_auto.py#L234) of this feature we could investigate more.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,490
closed
`transformers-cli login` => `huggingface-cli login`
null
08-05-2022 15:31:11
08-05-2022 15:31:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failures are due to `ERROR tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py` trying to be run in a non-torch setup and without the appropriate decorator setup; Fine to ignore the failure for me.
transformers
18,489
closed
Just re-reading the whole doc every couple of months 😬
null
08-05-2022 15:12:51
08-05-2022 15:12:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,488
closed
Add Donut
# What does this PR do? This PR adds Donut to the library. Donut is to LayoutLM what T5 is to BERT. :D The model is implemented as an instance of our existing `VisionEncoderDecoderModel`. See also https://github.com/clovaai/donut/issues/10#issue-1324734927 To do: - [x] move repos to appropriate organization in - [x] update niels to appropriate organization in `test_modeling_vision_encoder_decoder.py`
08-05-2022 13:21:27
08-05-2022 13:21:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>I have implemented a new `DonutSwinModel`, which copies everything from `SwinModel` except the final layer norm. I've added it in a file called `modeling_donut_swin.py` (and implemented a corresponding `DonutSwinConfig` in `configuration_donut_swin.py`). I went with `modeling_donut_swin.py` (and `configuration_donut_swin.py`) in the "donut" folder rather than `modeling_donut.py` (and `configuration_donut.py`) since it only implements the model and configuration of the encoder part (Swin Transformer). For the decoder, BART is leveraged. Let me know if this is ok.<|||||>Hi @NielsRogge, do you plan on supporting the `Document Parsing` modality?
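For readers, a short sketch of how the merged model is used through `VisionEncoderDecoderModel` (checkpoint, prompt format and image path follow the Donut DocVQA examples and should be treated as illustrative):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.open("document.png").convert("RGB")  # placeholder path
task_prompt = "<s_docvqa><s_question>What is the total amount?</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=128)
print(processor.batch_decode(outputs)[0])
```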
transformers
18,487
closed
Fix pipeline tests
# What does this PR do? Changes in #18392 broke tests in the pipelines, this PR fixes them.
08-05-2022 12:57:01
08-05-2022 12:57:01
> Why weren't these run on the original PR? Because the common test files used only have the tester for each pipeline, not tests of their own, so they are not triggered by the `tests_fetcher`. Will fix in this PR as well.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
18,486
closed
Change BartLearnedPositionalEmbedding's forward method signature to support Opacus training
As outlined in #18425, this PR changes the signature of `BartLearnedPositionalEmbedding`'s forward method signature to take the `input_ids` tensor (and not just its shape). This is needed to enable private training of BART via DP-SGD in Opacus. PR welcomed by @sgugger in linked issue. Fixes #18425.
08-05-2022 12:55:45
08-05-2022 12:55:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>You will also need to apply the same changes to all the models that are touched by the change in embedding (mBART, plBARt etc) to have the tests passing.<|||||>@sgugger I am iterating through. 👍 thanks for the heads up, though!<|||||>Ah, this also looks like it's breaking the conversion to `torch.fx`. Let's see if @michaelbenayoun can think of an easy solution to that.<|||||>Thanks, I was about to ask.. any thoughts? I am not well-versed in torch's symbolic tracer (or FX generally). I'm happy to do the work if you can point me somewhere useful 😄 <|||||>Current offending line is line 985 in `src/transformers/utils/fx.py` (`HFTracer.trace()`): `self.graph = super().trace(root, concrete_args=concrete_args)` where `root` is ``` PLBartModel( (shared): Embedding(99, 16, padding_idx=1) (encoder): PLBartEncoder( (embed_tokens): Embedding(99, 16, padding_idx=1) (embed_positions): PLBartLearnedPositionalEmbedding(102, 16) (layers): ModuleList( (0): PLBartEncoderLayer( (self_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) (v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (activation_fn): GELUActivation() (fc1): Linear(in_features=16, out_features=4, bias=True) (fc2): Linear(in_features=4, out_features=16, bias=True) (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) (1): PLBartEncoderLayer( (self_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) (v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (activation_fn): GELUActivation() (fc1): Linear(in_features=16, out_features=4, bias=True) (fc2): Linear(in_features=4, out_features=16, bias=True) (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) ) (layernorm_embedding): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) (decoder): PLBartDecoder( (embed_tokens): Embedding(99, 16, padding_idx=1) (embed_positions): PLBartLearnedPositionalEmbedding(102, 16) (layers): ModuleList( (0): PLBartDecoderLayer( (self_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) (v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (activation_fn): GELUActivation() (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (encoder_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) (v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (encoder_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=16, out_features=4, bias=True) (fc2): Linear(in_features=4, out_features=16, bias=True) (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) (1): PLBartDecoderLayer( (self_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) 
(v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (activation_fn): GELUActivation() (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (encoder_attn): PLBartAttention( (k_proj): Linear(in_features=16, out_features=16, bias=True) (v_proj): Linear(in_features=16, out_features=16, bias=True) (q_proj): Linear(in_features=16, out_features=16, bias=True) (out_proj): Linear(in_features=16, out_features=16, bias=True) ) (encoder_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=16, out_features=4, bias=True) (fc2): Linear(in_features=4, out_features=16, bias=True) (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) ) (layernorm_embedding): LayerNorm((16,), eps=1e-05, elementwise_affine=True) ) ) ``` and `concrete_args` is: ``` {'head_mask': None, 'decoder_head_mask': None, 'cross_attn_head_mask': None, 'encoder_outputs': None, 'past_key_values': None, 'inputs_embeds': None, 'decoder_inputs_embeds': None, 'use_cache': None, 'output_attentions': None, 'output_hidden_states': None, 'return_dict': None} ```<|||||>I will check on Monday and come back to you, it should be easily fixable I think.<|||||>Hey @michaelbenayoun, let me know if you have any thoughts to resolve the tracer issue :)<|||||>@sgugger all resolved now. Would you mind giving the PR another look?<|||||>Thanks a lot for working on this!<|||||>No problem, thank you for your support 👍
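For context on the tracer discussion above, here is a minimal sketch of how symbolic tracing is typically invoked on a seq2seq model, assuming the `symbolic_trace(model, input_names=...)` entry point from `transformers.utils.fx`; the checkpoint is only an illustrative stand-in, and any input not listed in `input_names` ends up in the `concrete_args` dict shown in the trace call above.

```python
# Minimal tracing sketch, assuming transformers.utils.fx.symbolic_trace;
# the checkpoint is an illustrative stand-in, not the PLBart test model above.
from transformers import AutoModelForSeq2SeqLM
from transformers.utils.fx import symbolic_trace

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Inputs not listed here keep their default values and are collected into
# `concrete_args` before torch.fx tracing starts.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask", "decoder_input_ids"])
print(type(traced))  # a torch.fx.GraphModule wrapping the original model
```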
transformers
18,485
closed
Remove py.typed
The experiment was fun (not). As we are seeing a rise in issues from users complaining that type checkers are not happy with our annotations, which we have chosen to keep simple for the sake of documentation, this PR removes the `py.typed` file that tells type checkers Transformers is properly typed. It is not, and it won't be in the near future, because statically type-checking things in Python requires sacrificing too much in terms of code clarity (at least in my opinion). This PR just makes it "official".
08-05-2022 12:29:04
08-05-2022 12:29:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,484
closed
Update some expected values in `quicktour.mdx` for `resampy 0.3.0`
# What does this PR do? It took me some time to figure out that the test failure is due to different `resampy` versions. Some of the current expected values work with `0.2.2`, but not with `0.3.0`. [current failed job](https://github.com/huggingface/transformers/runs/7682932969?check_suite_focus=true)
08-05-2022 12:26:36
08-05-2022 12:26:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,483
closed
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): initialization failed
### System Info Goal: Run a **GPT-2** model instance. I am using the latest Tensorflow and Hugging Face 🤗 Transformers. - Tensorflow - 2.9.1 - Transformers - 4.21.1 Notebook: ``` pip install tensorflow ``` ``` pip install transformers ``` ``` from transformers import pipeline, set_seed generator = pipeline('text-generation', model='gpt2') set_seed(42) ``` ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd --------------------------------------------------------------------------- ImportError Traceback (most recent call last) ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: SystemError Traceback (most recent call last) SystemError: <built-in method __contains__ of dict object at 0x7f5b58a64d00> returned a result with an error set The above exception was the direct cause of the following exception: ImportError Traceback (most recent call last) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1001 try: -> 1002 return importlib.import_module("." + module_name, self.__name__) 1003 except Exception as e: ~/anaconda3/envs/python3/lib/python3.8/importlib/__init__.py in import_module(name, package) 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) 128 ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _gcd_import(name, package, level) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _find_and_load(name, import_) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _load_unlocked(spec) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap_external.py in exec_module(self, module) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/__init__.py in <module> 36 from ..utils import HUGGINGFACE_CO_RESOLVE_ENDPOINT, http_get, is_tf_available, is_torch_available, logging ---> 37 from .audio_classification import AudioClassificationPipeline 38 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/audio_classification.py in <module> 19 from ..utils import add_end_docstrings, is_torch_available, logging ---> 20 from .base import PIPELINE_INIT_ARGS, Pipeline 21 ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/base.py in <module> 33 from ..feature_extraction_utils import PreTrainedFeatureExtractor ---> 34 from ..modelcard import ModelCard 35 from ..models.auto.configuration_auto import AutoConfig ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/modelcard.py in <module> 43 ) ---> 44 from .training_args import ParallelMode 45 from .utils import ( ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/training_args.py in <module> 25 from .debug_utils import DebugOption ---> 26 from .trainer_utils import ( 27 EvaluationStrategy, ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/trainer_utils.py in <module> 46 if is_tf_available(): ---> 47 import tensorflow as tf 48 
~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/__init__.py in <module> 36 ---> 37 from tensorflow.python.tools import module_util as _module_util 38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/__init__.py in <module> 36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow ---> 37 from tensorflow.python.eager import context 38 ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/eager/context.py in <module> 34 from tensorflow.python import tf2 ---> 35 from tensorflow.python.client import pywrap_tf_session 36 from tensorflow.python.eager import executor ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/client/pywrap_tf_session.py in <module> 18 from tensorflow.python import pywrap_tensorflow ---> 19 from tensorflow.python.client._pywrap_tf_session import * 20 from tensorflow.python.client._pywrap_tf_session import _TF_SetTarget ImportError: initialization failed The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) /tmp/ipykernel_4924/2487422996.py in <cell line: 1>() ----> 1 from transformers import pipeline, set_seed 2 3 generator = pipeline('text-generation', model='gpt2') 4 set_seed(42) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in __getattr__(self, name) 990 value = self._get_module(name) 991 elif name in self._class_to_module.keys(): --> 992 module = self._get_module(self._class_to_module[name]) 993 value = getattr(module, name) 994 else: ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1002 return importlib.import_module("." + module_name, self.__name__) 1003 except Exception as e: -> 1004 raise RuntimeError( 1005 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1006 f" traceback):\n{e}" RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): initialization failed ``` ``` def query(payload, multiple, min_tokens, max_tokens): nlp_setup() list_dict = generator(payload, min_length=min_tokens, max_new_tokens=max_tokens, num_return_sequences=multiple) return [d['generated_text'].split(payload)[1].strip() for d in list_dict ``` ``` output = query("Banking customer's needs:", 3000, 50, 50) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. pip install tensorflow 2. pip install transformers 3. from transformers import pipeline, set_seed ### Expected behavior Should import and create instance of generator
08-05-2022 10:09:06
08-05-2022 10:09:06
Changed Kernel: `conda_tensorflow2_p38`
transformers
18,482
closed
Fix `test_dbmdz_english` by updating expected values
# What does this PR do? Fix #18405. I originally stated this was an expected value - it turns out to be an **input sentence**. The duplicated `the the` is probably still intentional, though.
08-05-2022 08:57:54
08-05-2022 08:57:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think we can update the results instead yes: The original string didn't have the typo: https://github.com/huggingface/transformers/issues/5077#issuecomment-656398617 So probably my bad in putting it into a test (or I copied from somewhere else, I can't remember)<|||||>(off topic) @Narsil How you are able to find that (very) old comment - I can't even find some comments that are just 2-3 months old 😢 <|||||>Voilà 🚀 <|||||>> you I copy pasted `Enzo works at the UN` in the GH search bar within `issues` tab. It doesn't work all the time, but it does work better than searching in the top left bar. GH search is very hit and miss. Really depends how segregating your keywords are. and you HAVE to be word aligned (which is super annoying when looking for function/method names since I usually only know part of the name)
transformers
18,481
closed
Add TF prefix to TF-Res test class
# What does this PR do? Let's give TF a bit more space.
08-05-2022 07:38:35
08-05-2022 07:38:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,480
closed
Not able to use DistilBERT in VisualTextDualEncoder
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.6.4 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.2 (gpu) - Jax version: 0.3.14 - JaxLib version: 0.3.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from PIL import Image import requests from transformers import ( VisionTextDualEncoderModel, VisionTextDualEncoderProcessor, AutoFeatureExtractor, AutoTokenizer, ) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") processor = VisionTextDualEncoderProcessor(feature_extractor, tokenizer) model = VisionTextDualEncoderModel.from_vision_text_pretrained( "google/vit-base-patch16-224", "distilbert-base-uncased" ) # contrastive training urls = [ "http://images.cocodataset.org/val2017/000000039769.jpg", "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg", ] images = [Image.open(requests.get(url, stream=True).raw) for url in urls] inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="pt", padding=True ) outputs = model( input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, pixel_values=inputs.pixel_values, return_loss=True, ) loss, logits_per_image = outputs.loss, outputs.logits_per_image # this is the image-text similarity score # save and load from pretrained model.save_pretrained("vit-bert") model = VisionTextDualEncoderModel.from_pretrained("vit-bert") # inference outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` TypeError: forward() got an unexpected keyword argument 'token_type_ids' ### Expected behavior `CLIPOutput' object with these components: ['loss', 'logits_per_image', 'logits_per_text', 'text_embeds', 'image_embeds', 'text_model_output', 'vision_model_output']
08-05-2022 06:26:16
08-05-2022 06:26:16
I believe this is because [this line in `modeling_vision_text_dual_encoder.py`](https://github.com/huggingface/transformers/blob/14928921e2f6d5b049d8dcfa07982e9ca351a402/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L368) passes a keyword argument `token_type_ids` to the `text_model`, which is not supported by the `distilbert-base-uncased` model.<|||||>Hi, the scripts are meant as examples, and you can easily tweak them for your use case. So it's advised to fork the library and tweak it to your liking. If you can come up with a fix that makes the script more general, feel free to open a PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
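A hedged sketch of the kind of "more general fix" invited above: gate the `token_type_ids` keyword on the text encoder's actual `forward` signature, so encoders such as DistilBERT that do not accept it never receive the argument. The helper name and call pattern below are hypothetical, not the library's actual code.

```python
# Hypothetical helper: only forward token_type_ids when the text encoder accepts it.
import inspect

def build_text_kwargs(text_model, input_ids, attention_mask, token_type_ids=None):
    kwargs = {"input_ids": input_ids, "attention_mask": attention_mask}
    accepts_token_type_ids = "token_type_ids" in inspect.signature(text_model.forward).parameters
    if accepts_token_type_ids and token_type_ids is not None:
        # BERT-style encoders accept this; DistilBERT's forward() does not.
        kwargs["token_type_ids"] = token_type_ids
    return kwargs
```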
transformers
18,479
closed
join last hidden states of layers for BertForSequenceClassification
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-05-2022 03:43:14
08-05-2022 03:43:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18479). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @dantruonghtno1, was this discussed in an issue somewhere? We're very unlikely to merge this as it seems like a niche use-case that adds a number of statements to the code for that specific use-case which could be handled outside of the model.
transformers
18,478
closed
How to do batch inference in GPT-J
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid - Python version: 3.7.3 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <False> - Using distributed or parallel set-up in script?: <False> ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") sens = [ "In a shocking finding, scientists discovered a herd of unicorns living in a remote", "previously unexplored valley, in the Andes Mountains. Even more surprising to the", "researchers was the fact that the unicorns spoke perfect English." ] token_ids = [torch.squeeze(tokenizer(sen,return_tensors='pt',truncation=True)['input_ids'],0) for sen in sens] sens = pad_sequence(token_ids, batch_first=True, padding_value=-1) attention_mask = (sens != -1).long() print(sens) print(attention_mask) gen_tokens = model.generate( sens, attention_mask = attention_mask, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ``` ### Expected behavior It should work well.
08-05-2022 03:02:32
08-05-2022 03:02:32
Maybe @gante can help out regarding generate<|||||>EDIT: removed this comment, a correct example is given below. <|||||>@gante ,Thanks for your reply! Actually, your reply just made me a bit confused about the padding things. Why do we need to do the left padding? Typically, I think we are doing the RHS padding, right? And I tried that no matter what kind of padding or wherever the padding is inserted, having the corresponding attention mask(i.e. 0 for the position needs to be masked) would be enough.<|||||>Hey @ZeyiLiao 👋 Yeah, left padding matters! Although tokens with the attention mask set to `0` are numerically masked and the position IDs are correctly identified from the attention mask, models like GPT-2 or GPT-J generate a new token at a time from the previous token. As such, if your last input token is not part of your prompt (e.g. it is padding), your output will be drastically different! Check this colab with examples: https://colab.research.google.com/drive/1i0g18lUNZ2cYRms0E-gE1KCf6N4mZRwy?usp=sharing<|||||>@ZeyiLiao I realized one incorrect detail from the example I gave above (setting the padding token), GPT-J is working for batched generation :) Here's the working example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", padding_side="left") tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token}) model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") sens = [ "I am a random example", "This is the" ] prompts = tokenizer(sens, return_tensors='pt', padding=True, truncation=True) print(prompts["input_ids"]) print(prompts["attention_mask"]) with torch.no_grad(): gen_tokens = model.generate( **prompts, do_sample=False, max_new_tokens=20, ) gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True) print(gen_text) ``` This means that this issue is now sorted -- closing it. Let us know if you have further questions or run into more issues!<|||||>> Hey @ZeyiLiao 👋 > > Yeah, left padding matters! Although tokens with the attention mask set to `0` are numerically masked and the position IDs are correctly identified from the attention mask, models like GPT-2 or GPT-J generate a new token at a time from the previous token. As such, if your last input token is not part of your prompt (e.g. it is padding), your output will be drastically different! > > Check this colab with examples: https://colab.research.google.com/drive/1i0g18lUNZ2cYRms0E-gE1KCf6N4mZRwy?usp=sharing Thanks! That's a really great example to elaborate it.<|||||>Hi @gante , I did some review on this issue and read the tips from huggingface <img width="1452" alt="image" src="https://user-images.githubusercontent.com/97815464/190067034-79000b94-2e0a-4a44-a181-d2da9a0aff23.png"> Why it said that it's recommended to pad on the right sides? Or does it means that when do inference instead of generation, we should use right padding? But for generation, we should use left padding? <|||||>BTW, do you know how to change the loss function of a model like GPT2. Like, I wanna set ```CrossEntropyLoss(reduction = 'sum')```, do I need to change the internal code or there is a way to deal with it. Thanks!!!!!<|||||>@ZeyiLiao > Or does it means that when do inference instead of generation, we should use right padding? But for generation, we should use left padding? 
It depends on whether you pass the [`position_ids`](https://huggingface.co/docs/transformers/main/en/glossary#position-ids) argument to the model or not. At generation time, we hand-craft it according to the attention mask (remember: padded tokens get `0` in the attention mask), at inference time we do not. As such, if you run inference with left padding, unless you build `position_ids` correctly and pass it to the model, you will get a slightly different output. Hence the suggestion to not use left padding at inference time :) > BTW, do you know how to change the loss function of a model like GPT2 PT training questions are best left to our [forum](https://discuss.huggingface.co/) :D <|||||>@gante , Thanks :D, And I checked the position_ids and wonder why the padded parts are **_1_**(left-side)? I think _**1**_ here of the position_ids can not achieve what the padded token(0) at attention mask does(totally ignore them)? ``` attention mask: [[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] position ids: [[ 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,14, 15, 16, 17], [ 1, 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,13, 14, 15, 16], [ 1, 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,13, 14, 15, 16], [ 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,17, 18, 19, 20], [ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,18, 19, 20, 21], [ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18], [ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18]] ```<|||||>The code that creates the position ids at generation time for GPT-2 is defined [here](https://github.com/huggingface/transformers/blob/30a28f5227d541ae6b0a287ae345dfae687f21da/src/transformers/models/gpt2/modeling_gpt2.py#L986)<|||||>@gante ,yeah, I know that. I wanna ask why the position ids of padded tokens are 1? Like for the attention mask, it would set the padded tokens to 0 to make sure the score (query * key)for the padded one is zero. So what's the point of setting position ids of padded tokens to 1?<|||||>There is no point, but it also doesn't make a difference :) Because of the attention mask, the signal at the start of the sentence will be almost inexistent regardless of what's in the input. BTW, we reserve this GH issues space for bugs in transformers. These sort of questions are best left to our [forum](https://discuss.huggingface.co/)<|||||>@gante Hi gante, when I checked the course [here](https://huggingface.co/course/chapter7/6?fw=pt), I note that when we do training on gpt2 , we don't have left padding setting:? Don't we need that during training? I think it also would affect it?<|||||>During training, the model doesn't generate text, it only predicts the next token (for each position in the sequence, given all prior tokens). 
Being left or right padded doesn't make a difference, the text has no discontinuities :) That is opposed to generate, where, if left padding is NOT applied, there will be a gap between the input text and the start of generation, which causes the problems.<|||||>Hi there! Is it possible to do a batch with different parameters for each prompt? <|||||>Hi @carlose2108 👋 That is impossible with our `.generate()` function. But it should be possible to build under certain assumptions, if you'd like to build it for your project.<|||||>Hi @gante thanks a lot for your response!
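As a concrete illustration of the hand-crafted position IDs discussed in this thread, here is a small sketch following the pattern linked above for GPT-2: positions are derived from the cumulative sum of the attention mask, and the value written on padded positions is arbitrary because those positions are masked out anyway.

```python
import torch

# Left-padded batch: 0 marks padding in the attention mask.
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])

position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)  # arbitrary filler on padded slots

print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```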
transformers
18,477
closed
Spanish translation of summarization.mdx (#15947)
Spanish translation of summarization.mdx (#15947) <!-- Remove if not applicable --> Fixes #15947 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? @omarespejel @osanseviero @sgugger
08-05-2022 01:09:53
08-05-2022 01:09:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much @AguilaCudicio for the translation! I added a few comments in the review.<|||||>Thanks @AguilaCudicio for the translation! 🚀 @sgugger LGTM :)
transformers
18,476
closed
Fine tuning TensorFlow DeBERTa fails on TPU
### System Info Latest version of transformers, Colab TPU, tensorflow 2. - Colab TPU - transformers: 4.21.0 - tensorflow: 2.8.2 / 2.6.2 - Python 3.7 ### Who can help? @LysandreJik, @Rocketknight1, @san ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am facing some issues while trying to fine-tune a TensorFlow DeBERTa model ``microsoft/deberta-v3-base`` on TPU. I have created some Colab notebooks showing the errors. Note, the second and third notebooks already include some measures to circumvent previous errors. - ValueError with partially known TensorShape with latest ``take_along_axis`` change: [FineTuning_TF_DeBERTa_TPU_1](https://colab.research.google.com/drive/1TN4Ro-U6a-7MypDN3AUoHFfEPnFErPBt?usp=sharing) - Output shape mismatch of branches with custom dropout: [FineTuning_TF_DeBERTa_TPU_2](https://colab.research.google.com/drive/1gubIwNKNFwexKcra37w9-CSzFJUDGm07?usp=sharing) - XLA compilation error because of dynamic/computed tensor shapes: [FineTuning_TF_DeBERTa_TPU_3](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing) I have seen similar issues when using ``microsoft/deberta-base``. I believe the following issues are related: - [TF2 DeBERTaV2 runs super slow on TPUs #18239](https://github.com/huggingface/transformers/issues/18239) - [Debertav2 debertav3 TPU : socket closed #18276](https://github.com/huggingface/transformers/issues/18276). From this I used the fix on ``take_along_axis``. Thanks! ### Expected behavior Fine tuning is possible as it happens when using a GPU.
08-04-2022 19:45:45
08-04-2022 19:45:45
Hi @tmoroder 👋 Thank you for adding all that information to the issue <3 If I got it right, the second notebook replaces the `take_along_axis` function, and the third notebook also replaces the custom dropout. Still, there are XLA exceptions. Before diving into debugging, two questions: 1. Does it return the same error on a GPU? 2. I see that you prepare a dataset with static batch size and that the input is padded. Do you think that there is any additional source of shape variability in the inputs? (I don't think so, but asking doesn't hurt :D )<|||||>Hi @gante. > If I got it right, the second notebook replaces the ``take_along_axis function``, and the third notebook also replaces the custom dropout. Still, there are XLA exceptions. Correct. I think the XLA exceptions occur during gradient computation at these dynamic/computed tensor shape sizes. The first collection seems to me being triggered within the ``TFDebertaV2DisentangledSelfAttention.disentangled_att_bias`` method, like at [L735](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L735). I am not about other position ``TFDebertaV2DisentangledSelfAttention.call`` like [L704](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L704). > 1. Does it return the same error on a GPU? It runs on GPU without errors if I use ``transformers==4.20.1``, see [FineTuning_TF_DeBERTa _GPU](https://colab.research.google.com/drive/1KduvPzwXbDee3sR4DR4-woehg7v-9c8I?usp=sharing). With version ``4.21.0`` I get the same error ValueError. > 2. I see that you prepare a dataset with static batch size and that the input is padded. Do you think that there is any additional source of shape variability in the inputs? (I don't think so, but asking doesn't hurt :D ) No further shape variability as fas as I can judge. <|||||>Hi @tmoroder, can you try on GPU with `jit_compile=True` in both 4.20 and 4.21? I believe the code had issues with XLA before 4.21, and TPU code is always compiled to XLA.<|||||>Interesting. Since `transformers==4.20.1`, there are only two DeBERTa PRs: 1. https://github.com/huggingface/transformers/pull/17940 (should have no impact at all here) 2. https://github.com/huggingface/transformers/pull/18256 (what should have been a TPU-friendly `take_along_axis`) As @Rocketknight1 said, that data would be interesting. If v4.21 works on GPU but not on TPU, we are up for an interesting challenge :D <|||||>> Hi @tmoroder, can you try on GPU with jit_compile=True in both 4.20 and 4.21? Using ``jit_compile=True`` while compiling the model gives an error for both 4.20.1 and 4.21, e.g., [FineTuning_TF_DeBERTa _GPU_Tests](https://colab.research.google.com/drive/1VMPD2k5WuiHzT5ESdIXs0PuESvg82Ju3?usp=sharing) for 4.21; with 4.20.1 it crashes in the last command. > As @Rocketknight1 said, that data would be interesting. If v4.21 works on GPU but not on TPU, we are up for an interesting challenge :D Without ``jit_compile=True`` it also fails on GPU with 4.21; with 4.20.1 it works.<|||||>That makes sense - we made changes to the model to make it XLA-compatible in 4.21. XLA compatibility is necessary for TPU support, so the 4.20 model would never have run on TPU. However, we seem to have some other TPU-specific issues now - in my testing I was able to get DeBERTa to work with XLA on GPU in 4.21.<|||||>Weird! 
During my TPU and GPU tests, i was using a custom training loop instead of keras's `.fit()`, which I'm not sure if it actually matters. In my custom training code, I got deberta to train in an electra style training, with XLA enabled with `jit_compile=True` with non of the issues mentioned above. I will be sharing my code asap once I finish the pretraining and validate the results. It is based on Nvidia BERT and Electra TF2 training code https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/<|||||>@tmoroder I can confirm that I can run your example with `jit_compile=True` (i.e. XLA compilation) on `model.compile()`, using a GPU, if the two changes you made in your [third TPU notebook](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing): - replace `take_along_axis` by the `tf.einsum` version - replace dropout by the standard dropout If XLA compilation works, then it should run on TPU. I noticed that in [your notebook](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing) you were using TF 2.6, which may explain the XLA failure. Are you able to bump your TPU TF version (to as high as possible)? Meanwhile I'm opening a PR to reflect those two changes :)<|||||>@gante Thanks a lot for your effort. Maybe I am doing something wrong... but using the code from your pull request it now runs on GPU (with ``jit_compile=True`` as additional argument during model compilation), while it still fails on TPU (without using ``jit_compile=True`` as an argument). I am using TF 2.8.2 in both cases which is the current default in the Colab environment. On TPU it seems again to have errors on the [tile operation](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L733). - Working GPU version: [FineTuning_TF_DeBERTa_Propsed_Fix_GPU](https://colab.research.google.com/drive/1kF-I5Mb3eUyydl9681RXi4GfxDL2Ls4o?usp=sharing) - Failing TPU version: [FineTuning_TF_DeBERTa_Propsed_Fix_TPU](https://colab.research.google.com/drive/1BlYkfl0l5RZVhTXHDuHbLTsD1WlrzomT?usp=sharing) <|||||>(linking issues -- the Tile issue is also present in the following unsolved issue: https://github.com/huggingface/transformers/issues/14058)<|||||>The cause is trivial (the `multiple` argument of `tf.tile` can't have dynamic shapes), but the fix may be not :D Will look into it <|||||>@tmoroder the dynamic shape in question is the batch size. I may be able to get an alternative to `tf.tile`, but I highly doubt that it will make a difference -- it will be a dynamic shape no matter how I turn it around, as it is not set. Alternatively, could you try setting the `batch_size` argument in the `Input` layers? It will make that shape static, which should be enough to squash the problem you're seeing :)<|||||>@gante Great, setting the ``batch_size`` works 🥳. I only had to make sure that it divides the ``strategy.num_replicas_in_sync``, [FineTuning_TF_DeBERTa_Working_Fix_TPU](https://colab.research.google.com/drive/1wQ_shM9zigRzeATvcncTC4koFb2GkDgY?usp=sharing). Thanks a lot, I will test the procedure now on my real use case at hand.<|||||>Wooo nice! 🎊 I'm closing this issue since the problem seems to be solved for now. Feel free to reopen if you run into new related issues. 
Also, if you have the authorization to, please share TPU-related findings -- I'm sure they will be useful for other users!<|||||>@tmoroder Hey, can I ask about the training throughput/performance you got with the TPUs?<|||||>@WissamAntoun Here is some output that I get during the ``model.fit`` call. The model is very close to the one in the Colab notebooks, but the run is carried out on a Kaggle TPU. Some further specifications: - model max length: 512 - batch size: 128 - 12800 training samples (or 100 steps per epoch) - about 7500 validation samples - smoothed cross-entropy loss - accuracy and cross-entropy metrics When calling ``model.fit``, the method prints the following times, depending on the base model backbone: - ``deberta-v3-base``: 540s (632s first epoch) - ``bert-base-uncased``: 29s (115s first epoch) Hope it helps!<|||||>Oh great! I mean not great in the sense that the model is super slow on TPUs, but great that `model.fit` and my custom training loop have the same issue. You are getting 128 sentences * 100 batches / 540 s = ~24 sentences/s, and I'm getting ~sents/s but for an ELECTRA-style training. Thank you for providing the numbers, they really helped.
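A minimal sketch of the workaround that resolved this thread: pin the batch dimension at the Keras `Input` layers so XLA sees fully static shapes (including the `multiples` argument of `tf.tile`). The sizes and classification head below are placeholders rather than the exact notebook code, and on TPU the batch size should be divisible by `strategy.num_replicas_in_sync`.

```python
# Sketch with placeholder sizes; not the exact Colab notebook code.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

MAX_LEN = 128
BATCH_SIZE = 64  # on TPU, keep this divisible by strategy.num_replicas_in_sync

input_ids = tf.keras.Input(shape=(MAX_LEN,), batch_size=BATCH_SIZE, dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), batch_size=BATCH_SIZE, dtype=tf.int32, name="attention_mask")

backbone = TFAutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v3-base", num_labels=2)
logits = backbone(input_ids=input_ids, attention_mask=attention_mask).logits

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```

The matching `tf.data` pipeline would then batch with `drop_remainder=True` so every batch actually has that static size.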
transformers
18,475
closed
Add type hints to XLM-Roberta-XL models
This PR adds type hints for the PyTorch XLM-Roberta-XL models, as mentioned in #16059. @Rocketknight1
08-04-2022 17:58:40
08-04-2022 17:58:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @Rocketknight1 , just reminding you of this <|||||>Hi @asofiaoliveira, I'm extremely sorry for the delay here! The PR is perfect, and I'm merging now!
transformers
18,474
closed
Update no trainer examples for QA and Semantic Segmentation
# What does this PR do? Update run_qa_no_trainer.py, run_qa_beam_search_no_trainer.py, run_semantic_segmentation_no_trainer.py examples to include `accelarator.gather_metrics` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # https://github.com/huggingface/transformers/issues/18437 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? I ran the scripts locally with the following arguments ``` "program": "examples/pytorch/question-answering/run_qa_no_trainer.py", "args": [ "--dataset_name", "squad", "--dataset_config_name", "plain_text", "--model_type", "bert", "--tokenizer_name", "bert-base-uncased", "--max_train_steps", "50" ] ``` ``` "program": "examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py", "args": [ "--dataset_name", "squad", "--model_name_or_path", "xlnet-base-cased", "--max_train_steps", "50" ] ``` ``` "program": "examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py", "args": [ "--max_train_steps", "50" ] ``` ## Who can review? @muellerzr , @sgugger , @pacman100 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-04-2022 17:05:51
08-04-2022 17:05:51
_The documentation is not available anymore as the PR was closed or merged._
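The method referenced in the PR description, written there as `accelarator.gather_metrics`, presumably refers to Accelerate's `Accelerator.gather_for_metrics`. Below is a generic sketch of the evaluation-loop pattern these scripts adopt; the classification-style metric is a stand-in for the QA / semantic-segmentation post-processing in the real scripts.

```python
# Generic stand-in for the updated evaluation loops; assumes an already
# accelerator.prepare()-d model and dataloader, and any metric with add_batch/compute.
import torch

def evaluation_loop(model, eval_dataloader, accelerator, metric):
    model.eval()
    for batch in eval_dataloader:
        with torch.no_grad():
            outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        # gather_for_metrics gathers across processes and drops the samples that were
        # duplicated to pad the last batch, so the metric sees each example exactly once.
        predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
        metric.add_batch(predictions=predictions, references=references)
    return metric.compute()
```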
transformers
18,473
closed
Update no_trainer.py scripts to include accelerate gradient accumulation wrapper
# What does this PR do? Updates no_trainer.py scripts to use the new gradient accumulation wrapper feature from accelerate according to #18436. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ N] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ N] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [N ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-04-2022 14:08:08
08-04-2022 14:08:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Let us know when the PR is ready for review (it's in draft mode right now) so we can go ahead and merge.<|||||>@sgugger @muellerzr I changed the PR to ready for review. :) <|||||>@sgugger I removed the changes to the wav2vec script and fixed the use of the wrong constant. Feel free to merge if it's fine. It's going to be squash merged, right? Or do I need to rebase and squash before the merge?<|||||>We squash indeed. Thanks again for your contribution!
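A sketch of the Accelerate gradient-accumulation wrapper these scripts now use, with placeholder names for the model, optimizer, scheduler and dataloader; the key pieces are `Accelerator(gradient_accumulation_steps=...)` and the `accelerator.accumulate(model)` context manager.

```python
from accelerate import Accelerator

def train_one_epoch(model, optimizer, lr_scheduler, train_dataloader, accumulation_steps=8):
    accelerator = Accelerator(gradient_accumulation_steps=accumulation_steps)
    model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, lr_scheduler
    )
    model.train()
    for batch in train_dataloader:
        # Inside this context, gradient synchronization and the effective optimizer
        # step only happen once every `accumulation_steps` batches.
        with accelerator.accumulate(model):
            loss = model(**batch).loss
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
```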
transformers
18,472
closed
TFEncoderDecoderModel can not be trained with TF Keras fit() method
### System Info - `transformers` version: 4.21.0 - Platform: Linux-4.15.0-188-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.6.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use the example here: https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example 1. try to fit the model: **model.fit**(input_ids=input_ids, decoder_input_ids=input_ids) 2. You will receive errors "TypeError: fit() got an unexpected keyword argument 'input_ids'" 3. ![image](https://user-images.githubusercontent.com/41159849/182850011-08f8bb4f-d40f-42f5-9d56-9d916ff4efe9.png) 4. you can try this : **model.fit**(input_ids, input_ids) 5. but you receive many errors: ![image](https://user-images.githubusercontent.com/41159849/182850264-b4e8f827-e53c-434d-ad96-85eb70af1217.png) ### Expected behavior I should be able to train a TFEncoderDecoderModel with TF Keras fit() method
08-04-2022 12:52:41
08-04-2022 12:52:41
Hi @kmkarakaya 👋 Having a popular project like `transformers` means we get many support and feature requests — if we want to maximize how much we help the community, the community has to help us stay productive 🙏 To that end, please share a *short* script where the issue is clearly reproducible on *any* computer. Thank you 🤗<|||||>Hi @gante, Here is the script ( https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example ) which **_I modified it to train the model as below_**: import tensorflow as tf from transformers import TFEncoderDecoderModel, BertTokenizer model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2") tokenizer = BertTokenizer.from_pretrained("bert-base-cased") model.compile(loss=None) **model.fit**(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids) **The error message:** ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() <|||||>Hi @kmkarakaya -- technically I can't reproduce the script, since I don't have access to your `input_ids`. However, looking at the code, I can tell that `model.fit` is not being called correctly. Please check its [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit), especially its `x` and `y` arguments :)<|||||>@gante As I wrote in every message this code belongs to the HF repo https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example here is the complete & full code from the HF link: I hope this time you can help to fix the problem: from transformers import TFEncoderDecoderModel, BertTokenizer model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2") tokenizer = BertTokenizer.from_pretrained("bert-base-cased") input_ids = tokenizer.encode( "Hello, my dog is cute", add_special_tokens=True, return_tensors="tf" ) model.compile(loss=None) model.fit(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids) <|||||>@gante Please note that my question is related to **TFEncoderDecoderModel** therefore, model.fit(x,y) is not enough! We need to provide **encoder input, decoder input and decoder output** as **the HF suggests in its official documentation**: https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example Thus, this bug's title is "**_TFEncoderDecoderModel can not be trained with TF Keras fit() method_**". If you know how to train **TFEncoderDecoderModel** with TF or Keras please share with me. Because in the current **model.fit()** I am not able to do it. Thank you for your attention.<|||||>Hi @kmkarakaya -- the [example you linked](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example) runs fine and, as I've written above, the issue with your example is in the arguments to `model.fit`. Please see our [examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow) to learn how to prepare the data for training. For instance, see [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_mlm.py#L563) -- you need to prepare your data into a dataset in advance. 
Finally, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
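Following the maintainer's pointer to prepare the data in advance, here is a hedged sketch of how `fit()` can be fed a `tf.data.Dataset` whose elements are a single feature dict. It assumes the standard Keras integration of TF models in transformers, where compiling without an explicit loss makes the model use its internal loss computed from the `labels` key; this is a toy single-sentence setup, not a full training recipe.

```python
# Toy sketch; assumes the internal-loss path of TF models when no Keras loss is given.
import tensorflow as tf
from transformers import BertTokenizer, TFEncoderDecoderModel

model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

enc = tokenizer(["Hello, my dog is cute"], return_tensors="tf")
features = {
    "input_ids": enc["input_ids"],
    "attention_mask": enc["attention_mask"],
    "decoder_input_ids": enc["input_ids"],
    "labels": enc["input_ids"],
}
dataset = tf.data.Dataset.from_tensor_slices(features).batch(1)

model.compile(optimizer=tf.keras.optimizers.Adam(5e-5))  # no loss: the model's own loss is used
model.fit(dataset, epochs=1)
```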
transformers
18,471
closed
Let's not cast them all
# What does this PR do? This PR is an alternative (and cleaner) solution to https://github.com/huggingface/transformers/pull/18467 An issue has been found when running this script: ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", device_map="auto", torch_dtype=torch.float16) text = "def quicksort(l):" encoded_input = tokenizer(text, return_tensors='pt') output_sequences = model.generate(input_ids=encoded_input['input_ids'], attention_mask=encoded_input['attention_mask']) print(tokenizer.decode(output_sequences[0], skip_special_tokens=True)) ``` Since `torch_dtype=torch.float16` casts all parameters of the model, including the buffers, this also affects the causal masks of some models. In some niche cases those buffers are in `uint` or `bool` instead of `int`. This PR addresses the issue by checking whether the parameter is a `uint`, `int` or `bool` before casting it. cc @sgugger Ran the CodeGen slow tests and they are passing, let me know if we need more checks!
08-04-2022 12:24:48
08-04-2022 12:24:48
Thanks also for giving me the right pointer to the rootcause!<|||||>_The documentation is not available anymore as the PR was closed or merged._
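A standalone illustration of the idea behind this PR: when a `torch_dtype` is requested, cast only floating-point tensors and leave integer/bool buffers (such as `uint8` causal masks) in their native dtype. This illustrates the principle; it is not the literal diff.

```python
import torch

def cast_state_dict(state_dict, torch_dtype):
    casted = {}
    for name, tensor in state_dict.items():
        if torch.is_floating_point(tensor):
            casted[name] = tensor.to(torch_dtype)
        else:
            casted[name] = tensor  # uint8 / int / bool buffers keep their native dtype
    return casted

sd = {"weight": torch.randn(2, 2), "causal_mask": torch.ones(2, 2, dtype=torch.uint8)}
sd = cast_state_dict(sd, torch.float16)
print(sd["weight"].dtype, sd["causal_mask"].dtype)  # torch.float16 torch.uint8
```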
transformers
18,470
closed
Fix load of model checkpoints in the Trainer
# What does this PR do? #18221 broke the model reload: the contributor removed the `strict_load` variable (as requested in the review) without setting its proper value in the subsequent calls to `load_state_dict`. This PR addresses that. Fixes #18373
08-04-2022 11:55:43
08-04-2022 11:55:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,469
closed
Add `TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING`
# What does this PR do? The original goal was to fix `TFSegformerModelTest.test_keras_fit`, but it ends up doing the following: - Add `TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING` to some `__init__` files. - Add `training` arguments to a few layers of `TFSegformerModel` - Update `_prepare_for_class` to deal with 2 more image tasks - Fix the `TFData2VecVisionForSemanticSegmentation` loss: we need a batch dimension (without this, `test_dataset_conversion` fails - it was previously skipped due to the lack of labels)
08-04-2022 09:29:44
08-04-2022 09:29:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Test failures are `ValueError: Connection error` - irrelevant.<|||||>Thank you, @ydshieh for this. I appreciate the help.
transformers
18,468
closed
Update no trainer scripts for multiple-choice
# What does this PR do? Update `run_swag_no_trainer` example to include `accelarator.gather_metrics` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #18437 I ran the script locally with the following arguments ``` "--dataset_name", "swag", "--dataset_config_name", "regular", "--model_type", "bert", "--tokenizer_name", "bert-base-uncased" ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr , @sgugger, @pacman100 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-04-2022 09:28:06
08-04-2022 09:28:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,467
closed
CodeGen Fix causal mask for half precision
# What does this PR do? This PR forces the causal mask to stay in `torch.uint8`. An error occurs when loading a model in half precision since `torch_dtype=torch.float16` casts also the buffers in fp16. Here is a minimal script to reproduce the error: ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", device_map="auto", torch_dtype=torch.float16) text = "def quicksort(l):" encoded_input = tokenizer(text, return_tensors='pt') output_sequences = model.generate(input_ids=encoded_input['input_ids'], attention_mask=encoded_input['attention_mask']) print(tokenizer.decode(output_sequences[0], skip_special_tokens=True)) ``` In a future PR we could address non-casting the buffers (aka keeping them in their native `dtype`) Can also confirm the slow tests pass! cc @ydshieh
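For illustration only (this is a sketch of the general pattern, not the actual diff), one way to make a causal mask robust to dtype casting is to store it as a boolean buffer and build the fill value in the attention weights' own dtype; the module and method names below are made up:

```python
import torch
import torch.nn as nn


class ToyCausalAttention(nn.Module):
    # Sketch: the mask buffer stays boolean, so casting the module to fp16
    # does not turn the mask into a floating-point tensor.
    def __init__(self, max_positions: int):
        super().__init__()
        causal_mask = torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8)).bool()
        self.register_buffer("causal_mask", causal_mask.view(1, 1, max_positions, max_positions))

    def mask_weights(self, attn_weights: torch.Tensor) -> torch.Tensor:
        query_len, key_len = attn_weights.shape[-2], attn_weights.shape[-1]
        mask = self.causal_mask[:, :, key_len - query_len : key_len, :key_len]
        # the fill value is created in the weights' own dtype, so fp16 inputs stay fp16
        fill = torch.full_like(attn_weights, torch.finfo(attn_weights.dtype).min)
        return torch.where(mask, attn_weights, fill)
```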
08-04-2022 08:19:44
08-04-2022 08:19:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah let's move the discussion to: https://github.com/huggingface/transformers/pull/18471
transformers
18,466
closed
Fused Softmax Kernels
### Feature request Optional Fused Softmax Cuda kernels for transformer implementations. Megatron-LM has implemented these [here](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/fused_kernel), and they offer massive speedups for models under 10B params when training at 2048 sequence lengths. In my experience, this amounts to 2x improvements in throughput. As you can see from [this example](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L203-L209), it's relatively straightforward to add fused kernels to a model. ### Motivation From profiling the transformers models, it seems like they achieve at best 20% of peak hardware utilization on V100s and A100s for 2048 token contexts. With just the addition of fused kernels from the Megatron codebase, I see around 40% utilization. This is supported by the findings from [YaLM-100B](https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6). For massive models, the performance improvements are less substantial (175B-500B params) but NVIDIA notes 10-20% speedups in section 5.8 of [this paper](https://cs.stanford.edu/~matei/papers/2021/sc_megatron_lm.pdf). ### Your contribution I have my own GPT-2 implementation that uses Megatron's kernels and I would be happy to contribute. I don't have the time to implement the full feature request - which would be providing the ability to use these fused kernels for most of the hugging face models - but I think this would be very valuable for the ecosystem. A 2x improvement in throughput at medium to large scales (around 100M-1B params) would be a substantial cost improvement for users.
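To make the request concrete, below is a rough sketch of the optional-dispatch pattern I have in mind; `fused_fn` is a hypothetical hook standing in for a real fused kernel such as Megatron-LM's, and the eager branch is just a plain PyTorch fallback:

```python
import torch
import torch.nn.functional as F


def scale_mask_softmax(scores, mask, scale, fused_fn=None):
    # scores: (batch, heads, q_len, k_len); mask: broadcastable bool, True = keep
    if fused_fn is not None and scores.dtype == torch.float16:
        # hand off to a fused CUDA kernel when one is available and inputs are fp16
        return fused_fn(scores, mask, scale)
    # eager fallback: scale and mask in fp32 for numerical stability, then cast back
    orig_dtype = scores.dtype
    scores = scores.float() * scale
    scores = scores.masked_fill(~mask, torch.finfo(torch.float32).min)
    return F.softmax(scores, dim=-1).to(orig_dtype)
```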
08-04-2022 03:10:15
08-04-2022 03:10:15
@Sanger2000 could you add a link to the kernels from Megatron-LM? I'm curious if it could also be easily combined with a fused kernel for attention-dot-product, like FLASH attention.<|||||>Raw Cuda and C++ code: https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/fused_kernels This can then be easily added to a model like here: https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L203-L209<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Sanger2000 Are you still interested in this?<|||||>Not anymore. But I very much agree with Abhi about using FlashAttention instead of the Megatron kernels, since it is a decent bit faster and consumes far less memory (esp for longer sequences). It would massively speed up all transformer implementations if they had the option of using flash attention for their attention computation. The only downside is it won't be possible for all models since Flash Attention limits the head dimensions that can be used (I don't believe it supports anything larger than 128 last time I checked).<|||||>I see, thanks for the feedback. There is an integration of the nn.TransformerEncoderLayer fastpath in [Optimum](https://huggingface.co/docs/optimum/bettertransformer/overview), but it is only for inference for now - the training support + flash attention will come in a next pytorch release. I've been thinking about massively supporting xformers or HazyResearch/flash-attention for transformers since some people may be interested in already benefiting from memory efficient attention / flash attention for training, and don't want to wait 2 months. I'm not just not aware if other solutions as deepspeed or others already allow to use it or not, in which case I'd rather avoid doing double work.<|||||>Hi @fxmarty, just want to flag I am also very interested in this. I've been digging around and haven't seen anything active regarding deepspeed integration for training (only inference too). It does look like there's been some activity over the past few days in pytorch on this though.
transformers
18,465
closed
How to embed relational information in a Transformer for NMT task?
### Feature request Embedding relational information for a transformer ### Motivation I am using Transformer model form huggingface for machine translation. However, my input data has relational information as shown below: ![image](https://user-images.githubusercontent.com/102386930/182737748-7d0fd33e-caa6-4c51-a540-92123cf82bbf.png) So I have has semantic information using Language Abstract Meaning Representation (AMR) graph in the input graph. Is there even a way to embed relationship like the above in a transformer model? Is there any model from Huggingface that I can use in this regard? ### Your contribution If a model is developed, I could beta test the model.
08-04-2022 00:32:01
08-04-2022 00:32:01
What is the input to the transformer going to be? Is it more like: > He ended his meeting on Tuesday night. but with the graph data encoded into the embeddings somehow? Or more like: > end-01 He meet-03 data-entity Tuesday night with the graph data iteslf as input?<|||||>The graph could be though of like the following: ``` ________ | | | \|/ He ended his meeting on Tuesday night. /|\ | | /|\ | | | | |__| |________________| ``` Essentially each token in the sentence is a `node` and there could be `edge` embedded between tokens. <|||||>In a normal transformer, the tokens are processed into token embeddings, then an encoding of each position is processed into an embedding and added to the token embeddings at the corresponding positions. The result is positional embeddings. This is how each position 'knows' where it is in the sequence. You could do something similar with the edge information. You need some trainable network that takes the edge type and the positional encoding of the target node, combines this information, and outputs an embedding. The embeddings of all the edges can be added to the positional embeddings for the corresponding nodes. My intuition is that the attention layers could use this encoded information to 'find' related nodes. I don't know how well it will work but that would be my approach. Good luck!<|||||>@sinking-point thanks for your response. So essentially I need to extend the `positional embedding` generation considering not position in the sentence and instead based on the `edge type`. But there could be different types of edges as well. How could that be combined? I suppose there would be a need to use different weight for different types of edge? Is there any such model implementation with hugging face? I have already have a look but can't find anything.<|||||>You could combine them like this: Edge type as one hot vector -> nn.Embedding -> edge type embedding Index of target node -> positional encoding -> whatever positional embedding method your chosen transformer uses -> target node embedding Sum = edge type embedding + target node embedding If we only have a maximum of one edge per node, we can just add this sum to the origin node embedding. However, we might have many edges and if we do this they'll interfere with eachother. We want different edge types to be able to partition themselves into different parts of the vector, so I'd try a multi layer perceptron kinda thing: Sum (embedding width) -> nn.Linear -> hidden (bigger width) -> activation fn -> nn.Linear -> finished edge embedding Alternatively, you could take each edge, turn it into an embedding, add embeddings for both the origin and target nodes' positional encodings. Then just append these to the transformer input. There's less complexity in that you don't need the MLP I described, but might be more expensive because attention scales quadratically with length in both time and space. <|||||>I don't know of any existing transformer that does what you want already.<|||||>@sinking-point thanks for your response. Can I apply this change in a modular fashion? I suppose I need to augment the following snippet? ``` positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) ``` Having said that how could I pass the edge information 🤔 For me the it does not need to be optimized. Have you have any code snippet demonstrating something similar 🙏 ? <|||||>What transformer do you want to use? Take Bart for example, you can pass in inputs_embeds. 
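For what it's worth, here is a rough sketch of the edge-embedding module I described above (completely untested; the class name, the sizes and the idea of reusing the model's own positional embedding are just assumptions to illustrate the shape of the thing):

```python
import torch
import torch.nn as nn


class MyEdgeEmbedding(nn.Module):
    # Untested sketch: edge-type embedding + positional embedding of the target node,
    # mixed by a small MLP so different edge types can occupy different parts of the vector.
    def __init__(self, num_edge_types, embed_dim, position_embeddings, hidden_dim=None):
        super().__init__()
        hidden_dim = hidden_dim or 4 * embed_dim
        self.edge_type_embed = nn.Embedding(num_edge_types, embed_dim)
        self.position_embeddings = position_embeddings  # reuse the transformer's positional embedding
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, edge_type_id, target_idx):
        edge = self.edge_type_embed(edge_type_id)
        target = self.position_embeddings(target_idx)
        return self.mlp(edge + target)
```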
<|||||>I would like to use `Longformer`.<|||||>I would probably go with my first suggestion then. Putting all the edges at the end might not play well with longformer's local attention. Longformer also has inputs_embeds as an argument, so you could do something like:
```python
class MyLongformer(nn.Module):
    def __init__(...):
        self.model = LongformerModel(...)
        self.edge_embed = MyEdgeEmbedding(...)

    def forward(...):
        inputs_embeds = self.model.get_input_embeddings()(input_ids, ...)
        for batch, edge_type_id, origin_idx, target_idx in edges:
            inputs_embeds[batch][origin_idx] += self.edge_embed(edge_type_id, target_idx)
        # might be best to normalise here
        return self.model(inputs_embeds=inputs_embeds, ...)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,464
closed
mypy typing not working for AutoModelForMaskedLM when used with Trainer
### System Info Python version - 3.9 transformers version - 4.20.1 ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Initialise `AutoModelForMaskedLM` model = `AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")` 2. Pass this to the Trainer ``` trainer = Trainer( model= model, args=training_config, train_dataset=train_data, eval_dataset=valid_data, callbacks=None, data_collator=masking_processor ) ``` 3. If you check this against mypy, it produces an error stating `error: Argument "model" to "Trainer" has incompatible type "AutoModelForMaskedLM"; expected "Union[PreTrainedModel, Module]"` The model is defined as `model: Union[PreTrainedModel, nn.Module] = None` in the Trainer class. ### Expected behavior Since it's a valid input to the Trainer class, it should expect the models from AutoModel classes as well.
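For reference, an explicit `typing.cast` silences the error on the caller side (shown below), though ideally the annotations would be compatible with `mypy` out of the box:

```python
from typing import cast

from transformers import AutoModelForMaskedLM, PreTrainedModel

# cast only tells the type checker what the classmethod actually returns at runtime
model = cast(PreTrainedModel, AutoModelForMaskedLM.from_pretrained("xlm-roberta-base"))
```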
08-03-2022 23:59:02
08-03-2022 23:59:02
Hi there @harshit-sethi09 👋 Our code has type annotations, but they are mostly for documentation purposes (and not to be used with `mypy`) cc @sgugger <|||||>I have very strong thoughts about trying to statically type-check a dynamically typed language which would take too long to express here. But this is what you get as a result: an object will actually never be of type `AutoModel` as those can only be instantiated via classmethods which actually return other classes, a thing the static type-checker is of course incapable of seeing. We could pollute the code with tons of useless annotations to please the almighty static typechecker but we have chosen not to :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,463
closed
Multi-GPU setting: Expected to mark a variable ready only once (RuntimeError)
### System Info I’m trying to run a Huggingface model on a multi-GPU environment. The problem is that when I’m processing multiple inputs which are bound to each other from a single class (shared-weights encoder), I’m getting `RuntimeError: Expected to mark a variable ready only once`. While if I use this encoder module only once, for processing the first input, I won’t get this error, in multi-GPU setting. Should add that I don't get this error when training on single GPU. To make it clearer, here is the structure (I have simplified the code): ``` class LEDModel(): def __init__(self, ...) self.encoder = ... def forward(input_ids, ...): encoder_outputs = self.encoder(input_ids, ...) # filter encoder_outputs and construct another tensor called 'input_ids_selected' encoder_outputs = self.encoder(input_ids_selected, ...) ... return LEDSeq2SeqModelOutput( last_hidden_state=decoder_outputs.last_hidden_state, past_key_values=decoder_outputs.past_key_values, sent_scores=sent_scores, sect_scores=sect_scores, decoder_hidden_states=decoder_outputs.hidden_states, encoder_last_hidden_state=encoder_outputs.last_hidden_state, ) ``` when I remove this line: `encoder_outputs = self.encoder(input_ids_selected, ...)`, I don't run into this error. Should say that to filter `encoder_outputs` from the first pass of the encoder, I’m using other modules (linear layers) to find important `input_ids`, retaining those in input_ids_selected. You can perceive this as a two-step summarizer. ### Who can help? @patrickvonplaten @ydshieh @patil-suraj ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Having a weight-shared module inside LEDModel() class will reproduce this error. ### Expected behavior Currently, I'm getting `Expected to mark a variable ready only once (RuntimeError)` **in multi-GPU** configuration. I expect to run this model flawlessly in this setting.
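For completeness, the generic workarounds usually suggested for this error when shared weights are used several times per forward under DDP look like the following (I have not verified that they apply to this model; `model` and `local_rank` stand for the wrapped module and the process-local device index):

```python
from torch.nn.parallel import DistributedDataParallel as DDP

# option 1: make sure unused-parameter detection is off
# (with the HF Trainer this is the `ddp_find_unused_parameters=False` training argument)
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=False)

# option 2 (PyTorch >= 1.11): declare the graph static so the shared encoder
# can run twice per step without confusing the gradient reducer
ddp_model = DDP(model, device_ids=[local_rank], static_graph=True)
```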
08-03-2022 22:19:36
08-03-2022 22:19:36
Hi @sajastu Could you share a (minimal) code snippet that could reproduce this issue? Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,462
closed
[FLAX] Add dtype to embedding for gpt2 model
# What does this PR do? Add dtype to embedding for gpt2 models. This dtype is necessary for mixed precision training. ## Who can review? @patrickvonplaten, @LysandreJik
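For illustration (a toy module, not the actual diff), the change amounts to forwarding the module's `dtype` to `nn.Embed`, so the lookup and output use half precision while `param_dtype` keeps the stored weights in fp32:

```python
import flax.linen as nn
import jax.numpy as jnp


class ToyEmbedding(nn.Module):
    vocab_size: int
    hidden_size: int
    dtype: jnp.dtype = jnp.float32  # e.g. jnp.bfloat16 for mixed precision

    @nn.compact
    def __call__(self, input_ids):
        # param_dtype stays at its fp32 default; only the computation/output dtype changes
        wte = nn.Embed(self.vocab_size, self.hidden_size, dtype=self.dtype)
        return wte(input_ids)
```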
08-03-2022 21:49:18
08-03-2022 21:49:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @merrymercy, I've noticed that we omit the `dtype` arg from all Flax `nn.Embed` modules! @patil-suraj is there a reason why we do this? BART: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/bart/modeling_flax_bart.py#L841-L845 BERT: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/bert/modeling_flax_bert.py#L186-L200 T5: https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/t5/modeling_flax_t5.py#L1259-L1263 <|||||>I don't know the reasons, but this dtype is required for half-precision training. I can modify all other classes as well if needed.<|||||>Let's wait for @patil-suraj to weigh in on this!<|||||>Gentle ping @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sanchit-gandhi could you maybe take a look here? <|||||>Sorry @patrickvonplaten - as mentioned in my previous comment https://github.com/huggingface/transformers/pull/18462#issuecomment-1209435023 I'm not sure why we omit the `dtype` from all Flax `nn.Embed` modules, hence the request for @patil-suraj to weight in! Maybe you could shed some light on this? It seems like intrinsic design philosophy given that we do this for all models.<|||||>The change looks good to me. T5x also puts the embedding in half precision if necessary: https://github.com/google-research/t5x/blob/1f8cec78b1f28f1955d70741792d7b6e7dd76226/t5x/examples/t5/network.py#L287 @patil-suraj what do you think?<|||||>Can we merge this?<|||||>It's interesting that we omit the `dtype` arg in the embedding layer for both PyTorch and Flax: https://github.com/huggingface/transformers/blob/cbb8a37929c3860210f95c9ec99b8b84b8cf57a1/src/transformers/models/gpt2/modeling_gpt2.py#L675-L676 Wondering if this was a deliberate design decision that we're violating in this PR? Otherwise am happy with the change for half-precision training!<|||||>Your conclusion aligns with the previous observations of embedding dtypes never being down-cast in any Transformer models, both for PyTorch and Flax! Wondering if you could share the rationale behind _why_ one must not down-cast embedding weights to half-precision? This would be helpful in understanding why this should be avoided and help educate us all!<|||||>I think my modification does not conflict with t5x. My PR only changes the dtype of computation and output tensor, not the parameter type (`param_dtype`). https://github.com/google/flax/blob/0be6f32582b9acafe1741e8641a748eb99501021/flax/linen/linear.py#L732-L733 This aligns with @patrickvonplaten 's finding of the code of t5x. @patil-suraj Please review. I am working extensively on the flax backend and am happy to contribute more code. <|||||>Hey @merrymercy, I think `nn.Embed` is an exception in Flax where providing a `dtype` does exactly modify the embedding weights and not just the computation. @patil-suraj can maybe explain better here :-) <|||||>By looking at the code, I don't know why `dtype` changes the type of parameters. 
You can check the code https://github.com/google/flax/blob/0be6f32582b9acafe1741e8641a748eb99501021/flax/linen/linear.py#L739-L742. The type of parameters is controlled by `param_dtype`. Could you explain how the "exception" happens?<|||||>The way I see it, `dtype` promotes the whole embedding matrix to `bf16` here: https://flax.readthedocs.io/en/latest/_modules/flax/linen/linear.html#Embed and then takes a bf16 vector from this tensor -> this is different from just doing the matrix computation in bf16 IMO<|||||>You are right @patrickvonplaten. This is how fp16 mixed precision training with fp32 master weights works. My point is, the current code in hugging face is wrong. The code in t5x is correct . My modification makes hugging face’s code match t5x’s code. Reasons: 1. Regard less of self.dtype. The weights is stored in fp32. This holds for both my PR and t5x. 2. If dtype is fp16, the computation is in fp16. This holds for my PR and t5x (https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L501). But the original hugging face code is wrong <|||||>@merrymercy But T5X exactly doesn't set `dtype=jnp.bfloat16` when instantiating the layer, see: https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L479 but instead wraps the embedding in `dtype=jnp.bfloat16` only during the forward: https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L501 Shouldn't we try to match this? <|||||>Aha! I think we are talking at different levels. Could my comment below address your concerns? ## First, I match the way we call ‘nn.Embed’ with t5x This PR doesn’t modify ‘nn.Embed’ at all. It modifies the way we call ‘nn.Embed’. What my pr tries to match is this line in t5x. https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/network.py#L287 You can see it passes dtype to ‘nn.Embed’ ## Then, I match the implementation of ‘nn.Embed’ with t5x The code you refers to is ‘layer.Embed’ in t5x, the equivalence of this in our code base is ‘flax.nn.Embed’. Both of them are implemented correctly. In t5x, ‘nn.Embed’ has one argument dtype to control the type of computation and hard code fp32 for the type of parameters. In flax, ‘nn.Embed’ has two arguments. One for dtype of computation and one for the dtype of parameter. I never change the ‘param_dtype’, so it uses the default value fp32. This makes flax.nn.Embed match t5x.layer.Embed. In summary, after my PR, the hugging face gpt should match t5x. Before my PR, the dtype of computation in mixed precision training is wrong. <|||||>Hey @merrymercy, thanks for clarifying and sorry for not making the connection before! The PR looks good to me then :-) Just one other thing - it seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>I fixed the circle CI issue, but I don't know how to fix the "Build PR Documentation" test
transformers
18,461
closed
BartForConditionalGeneration output is not dependent on input when trained from scratch
### System Info - `transformers` version: 4.21.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've been trying to pretrain Bart to use as a baseline for comparison with other models I'd like to evaluate. However, I've found that when trained from scratch, its outputs have nothing to do with the inputs. You can set the input as anything you want, and the output will always be the same. It essentially acts like a causal language model. The funny thing is, if you start with the pretrained Bart instead of the randomly initialised Bart, it works fine. Could there be some problem with the way cross attention parameters are initialised? Or maybe there's an issue with Seq2SeqTrainer. Equally likely is that I've made a mistake somewhere. If anyone can help I'd greatly appreciate it. Thanks in advance. The following code reproduces the issue. This attempts to train the model on the simplest conceivable seq2seq task: output the input, exactly as it is. If Bart can't even learn that, there must surely be something wrong. ```python from datasets import load_dataset from transformers import BartForConditionalGeneration, BartTokenizer, Seq2SeqTrainer, Seq2SeqTrainingArguments dataset = load_dataset("c4", "en", streaming=True) seed, buffer_size = 42, 10_000 train_set = dataset['train'].shuffle(seed, buffer_size=buffer_size).with_format('torch') val_set = dataset['validation'].shuffle(seed, buffer_size=buffer_size).take(5000).with_format('torch') tokeniser = BartTokenizer.from_pretrained("facebook/bart-base") def transform(data_array): texts = [] for data in data_array: texts.append(data['text']) batch = tokeniser(texts, padding=True, truncation=True, max_length=max_length) with tokeniser.as_target_tokenizer(): labels = tokeniser(texts, padding=True, truncation=True, max_length=max_length) batch['labels'] = labels['input_ids'] for k in batch: batch[k] = torch.tensor(batch[k]) return dict(batch) config = BartConfig.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration(config) batch_size = 2 args = Seq2SeqTrainingArguments( output_dir="checkpoints-bart-baseline-2", do_train=True, do_eval=True, evaluation_strategy="steps", eval_steps=5000, save_strategy="steps", save_steps=5000, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, learning_rate=1e-4, max_steps=50_000, fp16=True, remove_unused_columns=False, ) trainer = Seq2SeqTrainer( model=model, args=args, data_collator=transform, train_dataset=train_set, eval_dataset=val_set, ) trainer.train() model.eval() input_texts = [ "Please provide a code sample that reproduces the problem you ran into.", "It can be a Colab link or just a code snippet.", "If you have code snippets, error messages, stack traces please provide them here as well.", ] inputs = tokeniser(input_texts, padding=True) input_ids = torch.tensor(inputs['input_ids']).cuda() model.cuda() output_ids = 
model.generate(input_ids) print(tokeniser.batch_decode(output_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)) ``` Output: ``` ['</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>', '</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>', '</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>'] ``` ### Expected behavior I would expect the Bart model to learn an approximation of the function represented by the training data. Specifically, I would expect even the poorest approximation to produce different outputs depending on what input is given.
08-03-2022 17:09:46
08-03-2022 17:09:46
This has been resolved by using a smaller learning rate.<|||||>Hello, I am running into exactly the same problem. My BART model always generates the same content no matter what the input is. How did you solve your problem? Thanks <|||||>@xienian87 I retrained the model with a smaller learning rate and the problem went away. <|||||>I have the same problem. Bart generated the same output, no matter what the input was.<|||||>@enze5088 have you tried a smaller learning rate?<|||||>> @enze5088 have you tried a smaller learning rate? The problem seems to disappear when using smaller learning rates.
transformers
18,460
closed
Fix torch version comparisons (helps with +cu*** or +cpu official builds)
Comparisons like `version.parse(torch.__version__) > version.parse("1.6")` are `True` for `torch==1.6.0+cu101` or `torch==1.6.0+cpu` (which is presumably not what was intended). So comparisons of the form `version.parse(version.parse(torch.__version__).base_version)` are preferred (they are already used in `pytorch_utils.py`, but not in other places). # What does this PR do? * Updated all comparisons to the failsafe form `version.parse(version.parse(torch.__version__).base_version)`, so they are also safe to copy-paste as inspiration elsewhere * Added some often-used version-check patterns to `pytorch_utils.py` I did not check whether the original version checks were intentional; I believe the original authors simply missed this caveat. Only torch version checks were changed (I am not sure whether checks for other packages are affected by the same issue). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? As this touches many parts of the code: @patrickvonplaten, @LysandreJik, @sgugger
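A quick illustration of the caveat above, with made-up version strings:

```python
from packaging import version

# a local version segment ("+cu101", "+cpu") makes the version compare greater than the bare release
print(version.parse("1.6.0+cu101") > version.parse("1.6"))  # True, usually not what the check meant

# stripping to base_version gives the intended result
base = version.parse(version.parse("1.6.0+cu101").base_version)
print(base > version.parse("1.6"))  # False
```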
08-03-2022 15:55:49
08-03-2022 15:55:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>Wow very cool @LSinev !
transformers
18,459
closed
Add machine type in the artifact of Examples directory job
# What does this PR do? We have <img width="257" alt="Screenshot 2022-08-03 174611" src="https://user-images.githubusercontent.com/2521628/182652043-02a031a1-c8b9-457a-8876-130a37099075.png"> even when there are some errors in `Examples directory` test. (relevant run: https://github.com/huggingface/transformers/actions/runs/2786567567) Adding the machine type (single-gpu / multi-gpu) in the artifact names should make things work.
08-03-2022 15:50:10
08-03-2022 15:50:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>(I should have run the job and made sure it worked before requesting review - I ended up with a few more commits to fix things, sorry)
transformers
18,458
closed
Compute true loss Flax examples
# What does this PR do? 'True' losses should be computed in Flax examples, as [discussed](https://github.com/huggingface/transformers/pull/18297#discussion_r931971230) with @sanchit-gandhi. ## Who can review? cc @sanchit-gandhi @patrickvonplaten
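For context, the 'true' loss here means, as I understand the linked discussion, normalising by the actual (padding-excluded) number of label tokens across all devices rather than averaging per-device means; a rough sketch with illustrative names:

```python
import jax
import jax.numpy as jnp
import optax


def loss_and_count(logits, labels, padding_mask):
    # per-token cross entropy, summed rather than averaged
    per_token = optax.softmax_cross_entropy(logits, jax.nn.one_hot(labels, logits.shape[-1]))
    loss = jnp.sum(per_token * padding_mask)
    num_labels = jnp.sum(padding_mask)
    return loss, num_labels


# inside a pmapped train step, sum across devices before normalising:
# loss, num_labels = jax.lax.psum((loss, num_labels), axis_name="batch")
# loss = loss / num_labels
```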
08-03-2022 15:37:39
08-03-2022 15:37:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>@duongna21 super sorry it seems like the git commit history got messed up :-/ Any chance you could re-submit your PR?