Columns: repo (string, 1 class), number (int64, 1-25.3k), state (string, 2 classes), title (string, 1-487 chars), body (string, 0-234k chars), created_at (string, 19 chars), closed_at (string, 19 chars), comments (string, 0-293k chars)
transformers
17,054
closed
[CodeParrot] Near-deduplication with Jaccard similarity
# What does this PR do? This PR addresses the code duplication issue described in this thread https://twitter.com/miltos1/status/1497126435261083649?s=20&t=v5-vwaEtXLrgZ_GuZHrPKQ # Run the code ``` from datasets import load_dataset from minhash_deduplication import deduplicate_dataset ds = load_dataset("lvwerra/codeparrot-clean", split="train") ds_dedup, duplicate_clusters = deduplicate_dataset(ds) ``` The function runs in 2:30 (make_duplicate_clusters) + 1:30 (find_extremes) on an 8-core VM ``` Original dataset size: 5361373 Duplicate cluster: 757944 Files in duplicate cluster: 2677040 Unique files in duplicate cluster: 911947 Filtered dataset size: 3596280 ```
05-02-2022 19:29:05
05-02-2022 19:29:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @lvwerra I agree with you. I will do that. The overall code is running now, here are the next steps: - [x] refactor the code to be used in preprocess.py and clean up - [x] document statistics and performance data in the PR - [x] use dataset.map to compute minhash I will probably do the deduplication of the validation set in another PR. ~question, does dataset.map put the whole dataset in RAM? I imagine it is not a problem because preprocess.py is already doing so~<|||||>Hi @lvwerra there is one decision we need to make, then the PR will be ready to review. As I mentioned before, we could use dataset.map to compute the minhash. However, there are two steps in the deduplication: - compute the minhash for each code file - add it into MinHashLSH (cannot be parallelized) In the previous function, a queue is used while adding into MinHashLSH. It would be difficult to do the same using dataset.map. So the dataset.map [implementation](https://github.com/huggingface/transformers/pull/17054/files#diff-c6c3b9ed9e98c7b3b8603011208c2db7c0cf08facbf2a75ec8f56dfea0242040R119) will be almost twice as slow (to be confirmed ...) ~I might prefer the dataset.map solution, which makes the code easier to read~ In the end I chose the initial implementation, which reduces the computation time by half <|||||>Here are some statistics and time performance data on the dataset lvwerra/codeparrot-clean: ~Execution time 13h: 2:30:00 for make_duplicate_clusters, 11:00:00 for find_cluster_extremes~ Original dataset size: 5361373 Duplicate cluster: 757938 Files in duplicate cluster: 2677039 Unique files in duplicate cluster: 940857 Filtered dataset size: 3625191 ~I think the code is ready for review. If you need to generate a dataset, you can go ahead. I might still need a few more days to figure out how to do find_cluster_extremes better~ Please see the next message for an update<|||||>multipro_find_extremes is done with multiprocessing! This PR is ready for review. Execution time ~3h: 2:30:00 for make_duplicate_clusters, 1:00:00 for multipro_find_extremes Original dataset size: 5361373 Duplicate cluster: 757938 Files in duplicate cluster: 2677039 Unique files in duplicate cluster: 940857 Filtered dataset size: 3625191 @lvwerra when reviewing, pay particular attention to - [Here](https://github.com/huggingface/transformers/pull/17054/files#diff-c6c3b9ed9e98c7b3b8603011208c2db7c0cf08facbf2a75ec8f56dfea0242040R140) I use a global parameter to be able to do multiprocessing in an efficient way <|||||>We are good to go, I welcome your thoughts @lvwerra. I will try to run some last tests
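The deduplication approach described in the PR above can be illustrated with a minimal sketch using the `datasketch` library; the helper names, `num_perm`, and the 0.85 threshold below are illustrative assumptions, not the PR's actual code.

```python
from datasketch import MinHash, MinHashLSH

def compute_minhash(text, num_perm=256):
    """Build a MinHash signature from the set of whitespace tokens in a file."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

def cluster_duplicates(docs, threshold=0.85, num_perm=256):
    """Group documents whose estimated Jaccard similarity exceeds `threshold`."""
    signatures = {key: compute_minhash(text, num_perm) for key, text in docs.items()}
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    for key, sig in signatures.items():
        lsh.insert(key, sig)  # this indexing step is sequential, as noted in the discussion above
    clusters, seen = [], set()
    for key, sig in signatures.items():
        if key in seen:
            continue
        cluster = set(lsh.query(sig))  # keys whose signatures collide with this one
        seen |= cluster
        clusters.append(cluster)
    return clusters

print(cluster_duplicates({"a.py": "def f(): return 1", "b.py": "def f(): return 1", "c.py": "print('hi')"}))
```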
transformers
17,053
closed
Make Trainer compatible with sharded checkpoints
# What does this PR do? The `Trainer` is currently incompatible with the new sharded checkpoint feature in two places: - resuming from a checkpoint - loading the best model at the end of training In both cases, the model state dict is loaded back inside the model **but** there is no single model save file if the model was above the default size for sharding, resulting in errors (as was pointed out by #16976 ). This PR addresses this by: 1. Creating a new function `load_sharded_checkpoint` that does the same thing as `model.load_state_dict` for regular model files, but loads a sharded checkpoint (and errors in case of missing/unexpected keys when `strict=True`). 2. Using that function inside the Trainer in the two places mentioned above. A test is added to make sure resuming works from a sharded checkpoint. Fixes #16976
05-02-2022 18:13:37
05-02-2022 18:13:37
_The documentation is not available anymore as the PR was closed or merged._
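For readers unfamiliar with the sharded format, here is a rough sketch of what loading such a checkpoint involves, assuming the standard `pytorch_model.bin.index.json` index layout; the PR's actual `load_sharded_checkpoint` handles strictness checks and error reporting more carefully.

```python
import json
import os

import torch

def load_sharded_state_dict(model, folder):
    # The sharded save format writes an index file mapping each weight name to its shard.
    index_path = os.path.join(folder, "pytorch_model.bin.index.json")
    with open(index_path) as f:
        index = json.load(f)
    shard_files = sorted(set(index["weight_map"].values()))
    for shard_file in shard_files:
        shard = torch.load(os.path.join(folder, shard_file), map_location="cpu")
        # strict=False because each shard only contains a subset of the weights
        model.load_state_dict(shard, strict=False)
```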
transformers
17,052
closed
ValueError: too many values to unpack (expected 2) when training BERT
### System Info ```shell Preparing the dataset and dataloader and Defining the model but I get this error ValueError: too many values to unpack (expected 2). ``` ### Who can help? @LysandreJik, @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') class dataset(Dataset): def __init__(self, dataframe, tokenizer, max_len): self.len = len(dataframe) self.data = dataframe self.tokenizer = tokenizer self.max_len = max_len def __getitem__(self, index): # step 1: get the sentence and word labels sentence = self.data.sentence[index].strip().split() word_labels = self.data.word_labels[index].split(",") # step 2: use tokenizer to encode sentence (includes padding/truncation up to max length) # BertTokenizerFast provides a handy "return_offsets_mapping" functionality for individual tokens encoding = self.tokenizer(sentence, #is_pretokenized=True, return_offsets_mapping=True, padding='max_length', truncation=True, max_length=self.max_len) # step 3: create token labels only for first word pieces of each tokenized word labels = [labels_to_ids[label] for label in word_labels] # code based on https://huggingface.co/transformers/custom_datasets.html#tok-ner # create an empty array of -100 of length max_length encoded_labels = np.ones(len(encoding["offset_mapping"]), dtype=int) * -100 # set only labels whose first offset position is 0 and the second is not 0 i = 0 for idx, mapping in enumerate(encoding["offset_mapping"]): if mapping[0] == 0 and mapping[1] != 0: # overwrite label encoded_labels[idx] = labels[i] i += 1 # step 4: turn everything into PyTorch tensors item = {key: torch.as_tensor(val) for key, val in encoding.items()} item['labels'] = torch.as_tensor(encoded_labels) return item def __len__(self): return self.len train_size = 0.8 train_dataset = data.sample(frac=train_size,random_state=200) test_dataset = data.drop(train_dataset.index).reset_index(drop=True) train_dataset = train_dataset.reset_index(drop=True) print("FULL Dataset: {}".format(data.shape)) print("TRAIN Dataset: {}".format(train_dataset.shape)) print("TEST Dataset: {}".format(test_dataset.shape)) training_set = dataset(train_dataset, tokenizer, MAX_LEN) testing_set = dataset(test_dataset, tokenizer, MAX_LEN) train_params = {'batch_size': TRAIN_BATCH_SIZE, 'shuffle': True, 'num_workers': 0 } test_params = {'batch_size': VALID_BATCH_SIZE, 'shuffle': True, 'num_workers': 0 } training_loader = DataLoader(training_set, **train_params) testing_loader = DataLoader(testing_set, **test_params) model = BertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=len(labels_to_ids)) model.to(device) inputs = training_set[2] input_ids = inputs["input_ids"].unsqueeze(0) attention_mask = inputs["attention_mask"].unsqueeze(0) labels = inputs["labels"].unsqueeze(0) input_ids = input_ids.to(device) attention_mask = attention_mask.to(device) labels = labels.to(device) outputs = model(input_ids, attention_mask=attention_mask, labels=labels) initial_loss = outputs[0] initial_loss And here is the error code: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-31-c8d1cd345a9a>](https://localhost:8080/#) in <module>() 8 labels = labels.to(device) 9 ---> 10 outputs = model(input_ids, 
attention_mask=attention_mask, labels=labels) 11 initial_loss = outputs[0] 12 initial_loss 3 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 948 raise ValueError("You have to specify either input_ids or inputs_embeds") 949 --> 950 batch_size, seq_length = input_shape 951 device = input_ids.device if input_ids is not None else inputs_embeds.device 952 ValueError: too many values to unpack (expected 2) ### Expected behavior ```shell I will train my NER-BARTmodel ```
05-02-2022 17:33:58
05-02-2022 17:33:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,051
open
Collection of Tokenizer issues
### System Info ```shell Transformers + Tokenizers ``` ### Who can help? This Issue is a summary of multiple problems that we are currently encountering with Tokenizers. To solve them we'll need a more profound discussion of: - To what extent fast and slow tokenizers should be aligned - Whether all slow tokenizers should be kept - How to treat special tokens - Whether all internal methods of the tokenizer should be exposed Relevant issues/PRs: https://github.com/huggingface/transformers/issues/15420 https://github.com/huggingface/transformers/issues/16336 https://github.com/huggingface/transformers/issues/16334 https://github.com/huggingface/transformers/issues/16337 https://github.com/huggingface/transformers/issues/15138 https://github.com/huggingface/transformers/issues/16339 https://github.com/huggingface/transformers/pull/15775 To the community: At the moment we sadly haven't found the time to dive deeper here, but we're trying hard to allocate time to discuss the strategy here soon. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See issues above ### Expected behavior ```shell Don't know yet ```
05-02-2022 16:53:59
05-02-2022 16:53:59
cc @LysandreJik @sgugger @SaulLu @Narsil @patil-suraj <|||||>Internal thread: https://huggingface.slack.com/archives/C01N44FJDHT/p1647966224411599<|||||>Another one: https://github.com/huggingface/transformers/issues/16787#issuecomment-1100009727<|||||>It would be nice to add those to a project so that we may track the resolution of these issues.<|||||>Another one: https://github.com/huggingface/transformers/issues/16225 <|||||>Another one: https://github.com/huggingface/transformers/issues/17595<|||||>https://github.com/huggingface/tokenizers/issues/1011
transformers
17,050
closed
Allow all imports from transformers
This PR enables doing `from transformers import *` when just the base `transformers` install is present. This should always have been possible, but due to some errors in the imports, it currently fails with a `sentencepiece` import error. This PR fixes that for both the FNet and CPM tokenizers.
05-02-2022 16:36:39
05-02-2022 16:36:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,049
closed
Make the sacremoses dependency optional
Sacremoses is currently installed by default when installing `transformers`, but it should not be needed. This is an artifact of the past, and we have since introduced optional dependencies, which applies perfectly to this situation.
05-02-2022 16:36:35
05-02-2022 16:36:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,048
closed
Fix hashing for deduplication in CodeParrot
# What does this PR do? Fix the hashing mechanism to be process independent. The built-in `hash` doesn't generate the same hash across different processes, which made the maximum number of occurrences of a text `num_proc` instead of `1` when deduplicating. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @lvwerra @loubnabnl
05-02-2022 15:47:24
05-02-2022 15:47:24
_The documentation is not available anymore as the PR was closed or merged._
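A minimal sketch of a process-stable hash, assuming `hashlib` is an acceptable stand-in for the built-in `hash` (this is the general idea, not necessarily the exact fix merged in the PR):

```python
import hashlib

def get_hash(text):
    # md5 of the UTF-8 bytes is identical in every worker process, unlike the
    # built-in hash(), which is salted per process via PYTHONHASHSEED.
    return hashlib.md5(text.encode("utf-8")).hexdigest()

print(get_hash("def foo(): pass"))
```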
transformers
17,047
closed
Add type hints for BERTGeneration
# What does this PR do? I added type hints for the `BERTGenerationEncoder` and `BERTGenerationDecoder` classes as requested in [#16059](https://github.com/huggingface/transformers/issues/16059) and demonstrated in [#16074](https://github.com/huggingface/transformers/pull/16074). I wasn't completely sure on what to use for the `past_key_values` argument. I set it to `Optional[Tuple[Tuple[torch.FloatTensor]]]`, but let me know if this is wrong. Also, not sure if I should also add type hints for the `BertGenerationConfig` class? @Rocketknight1
05-02-2022 15:20:11
05-02-2022 15:20:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>> I wasn't completely sure on what to use for the `past_key_values` argument. I set it to `Optional[Tuple[Tuple[torch.FloatTensor]]]`, but let me know if this is wrong. Also, not sure if I should also add type hints for the `BertGenerationConfig` class? What you put here is fine, and type hints for `BertGenerationConfig` are nice but optional - if you want to do them you can, but the main thing we're interested in is the core model classes. Let me know either way - if you don't want to do it, this is ready to merge now!<|||||>Okay great, you can go ahead and merge it then. I'll run your notebook to see what else needs to be done and work on some of those instead. Cheers<|||||>Got it. Thanks for the PR!
transformers
17,046
closed
Fix no_trainer examples to properly calculate the number of samples
# Fix number of samples for `no_trainer` scripts ## What does this add? This PR fixes all of the no_trainer scripts to properly use the right number of training steps after the length of the dataloader was changed with `accelerator.prepare` ## Why is it needed? Currently in a multi-process setup, the progress bar still shows the old number of samples. As a result the old number of steps before breaking is set at the original amount, even though the length of the dataloaders changed. The progress bar reflects this too. Simplified example: If the dataloader starts with 128 batches, if 2 GPUs are used then each dataloader has 64 batches. As a result the progress bar should use `64`, and the break condition needs to also know there is only 64. Both currently use 128 still ## What parts of the API does this impact? ### User-facing: All scripts have a recalculation of the max_train_steps after `accelerate.prepare` ## Basic Usage Example(s): ```python # Prepare everything with our `accelerator`. model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # We need to recalculate our total training steps num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch ``` ## When would I use it, and when wouldn't I? While this is always used, technically it is only needed when the number of nodes > 1.
05-02-2022 15:17:03
05-02-2022 15:17:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @muellerzr, @sgugger, in case I specify the argument `max_train_steps` instead of `num_train_epochs` while launching the training script, I need to recalculate the `num_train_epochs` after `accelerate.prepare` instead of `max_train_steps` right? Am I missing something?<|||||>@kowndinya-renduchintala we already do this for you 😄 https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L425
transformers
17,045
closed
Clean up setup.py
# What does this PR do? This PR cleans up the `setup.py` a bit in two ways: - Remove support for Python 3.6, as we said the current release would be the last one supporting Python 3.6 - Clean up the authors field, description and keywords a bit to emphasize the multimodal support. Since the Hugging Face team is growing, I propose to replace the authors field with something more generic than adding names; let me know if you have a better idea
05-02-2022 14:36:19
05-02-2022 14:36:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,044
closed
Update no_trainer examples to use new logger
# Update `no_trainer` examples to use the new Accelerate logger ## What does this add? - Accelerate recently added a [new logger](https://github.com/huggingface/accelerate/pull/337/) to help deal with repeat logs across all processes. If something should be logged on all processes, a new kwarg `main_process_only=False` should be passed in. This also helps solve an annoyance users were pointing out about repeat logs leading to misunderstandings of how the internal API was behaving. ## What parts of the API does this impact? ### User-facing: The examples now show how to use the new `get_logger()` function from Accelerate
05-02-2022 14:14:44
05-02-2022 14:14:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>rereview for propagate sanity check then all good 🤗
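A minimal sketch of the logging pattern the examples switch to, assuming a recent `accelerate` release that ships `accelerate.logging.get_logger`; the messages are placeholders.

```python
import logging

from accelerate import Accelerator
from accelerate.logging import get_logger

logging.basicConfig(level=logging.INFO)
logger = get_logger(__name__)

accelerator = Accelerator()
logger.info("Printed once, from the main process only")
logger.info("Printed by every process", main_process_only=False)
```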
transformers
17,043
closed
[Trainer] Move logic for checkpoint loading into separate methods for easy overriding
# What does this PR do? This PR does a small refactoring in the `Trainer` class, specifically it moves the logic for the following two steps out of the training loop into separate helper methods: - loading a pre-existing checkpoint into the Trainer before the training starts is moved into the `_load_from_checkpoint()` method. - loading the best evaluated model checkpoint after training has completed is moved into the `_load_best_model()` method. The PR does not change any existing logic in any way. ## Motivation In [our library](https://github.com/Adapter-Hub/adapter-transformers), we implement a custom Trainer class that subclasses your great built-in Trainer class. However, as we don't save full model checkpoints during training, the mentioned steps for checkpoint loading are not applicable to our use case. Moving this logic to separate methods would be super helpful to us (and potentially others), since we could easily override these helper methods without modifying the training loop itself. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger (cc @hSterz)
05-02-2022 13:08:05
05-02-2022 13:08:05
_The documentation is not available anymore as the PR was closed or merged._
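A hypothetical sketch of the kind of subclass this refactor enables; only the method names `_load_from_checkpoint` and `_load_best_model` come from the PR (their exact signatures may differ between versions), and the adapter-only behavior shown here is made up for illustration.

```python
from transformers import Trainer

class AdapterOnlyTrainer(Trainer):
    """Hypothetical subclass that skips full-model checkpoint loading."""

    def _load_from_checkpoint(self, resume_from_checkpoint):
        # No full model state dict is saved in this hypothetical setup,
        # so restore only the (made-up) adapter weights instead.
        print(f"Restoring adapter weights from {resume_from_checkpoint}")

    def _load_best_model(self):
        # Called at the end of training when `load_best_model_at_end=True`.
        print(f"Restoring best adapter weights from {self.state.best_model_checkpoint}")
```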
transformers
17,042
closed
Disable Flax GPU tests on push
# What does this PR do? The Flax GPU tests have been failing for more than a month on every commit on which they run in master (the error changed 20 days ago to an install error). This is making the CI checks hard to read, so we are disabling those tests until someone really fixes them.
05-02-2022 12:56:25
05-02-2022 12:56:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,041
closed
No separation between Torch and TF examples in create_a_model.md
Hi! While writing the Spanish translation of the guide to create a custom model (#15947), I've come across something that I think is not intentional. In the Model section, we have this first paragraph explaining how to start initializing and customizing a - custom - model. ![image](https://user-images.githubusercontent.com/56955040/166233739-9f83de75-0dfc-46c8-b0be-e169d6f7ef3e.png) And then, immediately after that, the text repeats itself, but this time using TF models. ![image](https://user-images.githubusercontent.com/56955040/166234019-6887b5b4-6ee0-4a74-a663-d8ba0550e2a9.png) I think there is either a missing subsection title separating the two ways of doing the procedure, or the information is redundant and one of them should be removed. One way or the other, it is strange to read this from top to bottom and see some text repeating itself without any clue.
05-02-2022 12:32:29
05-02-2022 12:32:29
Hi, thanks for your help in translating the docs! :) From my end on the `main` version of the docs, there are two separate blocks for PyTorch and TensorFlow content: ![Screen Shot 2022-05-03 at 10 30 01 AM](https://user-images.githubusercontent.com/59462357/166507509-7aabe3d4-ef88-4657-a57e-44ec4d841518.png)<|||||>Oh, I was directly looking using https://github.com/huggingface/transformers/blob/main/docs/source/en/create_a_model.mdx, and there is no separation there I think (that should also be the main version). If that is handled in the docs, I can just go on and replace the text in the same fashion. Thank you!
transformers
17,040
closed
error with Vision Transformer (ViT)
There seems to be a problem with the HuggingFace Vision Transformer: it takes up all the memory in GPU and renders it impossible to train the model. When I was dealing with the inference just like in `https://huggingface.co/docs/transformers/model_doc/vit` with a single image, it works well. But when I tried to do some fine-tuning with the MNIST dataset intergrated in PyTorch, thus with batches, suddenly it doesn't work anymore. Some details concerning my problem: 1. PyTorch MNIST (torchvision.datasets.MNIST) is not stocked in common image file formats (e.g. jpg., jpeg., png.) which forbids me from using ImageFolder; 2. and as the common dataloader can't handle batches of images, I had to enter my custom transform (which uses feature_extractor) as a parameter when loading the dataset. Sot it appears to me that the feature extractor is handling one image at a time. Here's my code: ``` import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader from torchvision import datasets from transformers import ViTFeatureExtractor, ViTForImageClassification feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') class ImageTransform: def __init__(self): pass def __call__(self, image): output = torch.from_numpy(feature_extractor(images=image.convert("RGB")).pixel_values[0]) return output mnist_train_dataset = datasets.MNIST("/mnist_root", train=True, download=True, transform=ImageTransform()) mnist_test_dataset = datasets.MNIST("/mnist_root", train=False, download=True, transform=ImageTransform()) mnist_train_dataloader = DataLoader(mnist_train_dataset, batch_size=64, shuffle=True) mnist_test_dataloader = DataLoader(mnist_test_dataset, batch_size=64, shuffle=False) device = torch.cuda.current_device() if torch.cuda.is_available() else 'cpu' print(f"using {device}.") model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k').to(device) optimizer = optim.AdamW(model.parameters(), lr=1e-4) loss_function = nn.CrossEntropyLoss() num_epochs = 3 for epoch in range(num_epochs): model.train() epoch_loss = 0 for images, labels in mnist_train_dataloader: optimizer.zero_grad() images, labels = images.to(device), labels.to(device) outputs = model(pixel_values=images) loss = loss_function(outputs.logits, labels) epoch_loss += loss * len(images) loss.backward() optimizer.step() print(f"Epoch {epoch + 1}: Cross Entropy loss = {epoch_loss / len(mnist_train_dataset)}") ``` The error message: ``` Traceback (most recent call last): File "main.py", line 56, in <module> outputs = model(pixel_values=images) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 603, in forward File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 507, in forward File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 346, in forward File 
"C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 278, in forward File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 221, in forward File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 165, in forward RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 2.00 GiB total capacity; 1.63 GiB already allocated; 0 bytes free; 1.70 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` I thought at first that it was due to a too big batch_size. I turned it into `batch_size=2` which is of course to highlight the effect, and the error message becomes: ``` C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:247: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:247: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "main.py", line 59, in <module> loss.backward() File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\autograd\__init__.py", line 156, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: Unable to find a valid cuDNN algorithm to run convolution ``` And when I searched on Google, people is telling me that it was also due to a limit of allocation in GPU... I really do hope that this can be solved as I can't quite figure out how to properly use the official model release of Google... Thank you very much. BTW my GPU configuration: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 512.15 Driver Version: 512.15 CUDA Version: 11.6 | |-------------------------------+----------------------+----------------------+ | GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 Off | N/A | | N/A 36C P0 N/A / N/A | 0MiB / 2048MiB | 3% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ```
05-02-2022 10:18:10
05-02-2022 10:18:10
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
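One note on the second traceback above: the CUDA assertion `t >= 0 && t < n_classes` usually means the labels fall outside the classification head's configured size. A hedged sketch of sizing the head for a 10-class dataset such as MNIST (illustrative only, not an official resolution from this thread):

```python
from transformers import ViTForImageClassification

# MNIST has 10 classes; without `num_labels` the classification head is sized from the
# checkpoint's config, so labels 0-9 can fall outside it and trip the CUDA assertion.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=10,
)
```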
transformers
17,039
closed
Unable to reproduce gigawords results from google/pegasus-gigaword
@patrickvonplaten Hi Patrick, I am trying to reproduce the PEGASUS results for Gigaword. I used the gigaword dataset from the datasets library and directly used its test split without further preprocessing. I used PegasusForConditionalGeneration and PegasusTokenizer (with the checkpoint from Google, google/pegasus-gigaword) to decode summaries with the default settings. However, my ROUGE scores deviate quite a bit from what the original paper reported (my rouge1/2/L results: 28/12/25 vs. 39.65/20.47/36.76). I wondered if my setup was incorrect.
05-02-2022 09:34:05
05-02-2022 09:34:05
Hey @xu1998hz, We're trying to keep Transformers issues for bugs in the core library. Could you try to use the forum instead: https://discuss.huggingface.co/ ? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,038
closed
Fix `LayoutXLM` docstrings
# What does this PR do? A follow-up PR for #16187. It also fixes a legacy issue where the `ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING` of both `layoutlmv2` and `layoutxlm` is incorrect: it should look like [this](https://github.com/huggingface/transformers/blob/ff846e9b28358e5741dea5058433f7bcf8e7de76/src/transformers/tokenization_utils_base.py#L1311-L1363) instead of being a copy of `ENCODE_KWARGS_DOCSTRING`. Fixes # (issue) @NielsRogge @LysandreJik ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
05-02-2022 08:59:31
05-02-2022 08:59:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>ok got closed without a reason.<|||||>Hi @qqaatw, Apologies this didn't get merged yet. I would like to re-open the PR, but seems like that's not possible anymore as the branch is deleted. Could you open a new PR? Apologies again for how this was treated.<|||||>@NielsRogge I restored the branch, I don't know why it was deleted on my behalf.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @qqaatw, feel free to re-open this PR and apply the suggestions<|||||>@NielsRogge I'm not able to re-open it. There is no button for re-opening.
transformers
17,037
closed
Make DETR `pixel_values` input optional
### Feature request Currently the `pixel_values` input of `DetrModel` and `DetrForObjectDetection` is a required argument, which is nearly useless when `encoder_outputs` is specified. Therefore, I propose to make `pixel_values` optional and infer the batch size for subsequent uses from the `encoder_outputs` when they are specified. We may also need to add a new optional `position_embeddings` argument for the decoder since the backbone is no longer used and no longer produces the embeddings in this case. The same approach can be seen in many models, e.g. Bert, which also has its `input_ids` optional: https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/bert/modeling_bert.py#L912-L914 The only issue is that in `DetrForSegmentation`, `pixel_values` is required for producing feature maps and reconstructing the predicted mask, so the proposal is not applicable to this model. ### Motivation Described above. ### Your contribution Can make a PR. @NielsRogge What do you think :) ?
05-02-2022 08:38:05
05-02-2022 08:38:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,036
closed
[Flax(Speech)EncoderDecoder] Fix bug in `decoder_module`
The current use of `decoder_module` assumes that `encoder_hidden_states` is the fourth positional argument of the decoder's call method. We see that this is indeed true of the two current Flax decoder models: [`FlaxGPT2LMHeadModel`](https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/gpt2/modeling_flax_gpt2.py#L691) and [`FlaxBartForCausalLM`](https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/bart/modeling_flax_bart.py#L1911). However, for other possible decoder models, such as the work-in-progress [`FlaxBertForCausalLM`](https://github.com/huggingface/transformers/blob/9c9e49bd3aeb3f84c0d61b7f0fdca8ea853ac5a1/src/transformers/models/bert/modeling_flax_bert.py#L1545), there may be additional positional arguments (such as `token_type_ids` or `head_mask`) **prior** to `encoder_hidden_states`. To handle this more general case, we should not assume `encoder_hidden_states` is necessarily the fourth positional argument, and should instead pass it as a _key-word argument_.
05-02-2022 08:22:52
05-02-2022 08:22:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,035
closed
[FlaxGenerate] Fix bug in `decoder_start_token_id`
In Python, `bool` is a subclass of `int`, and `False` has the value `0`. We observe this by calling the `__bool__` method of `0`: ```python print((0).__bool__()) print((1).__bool__()) ``` ``` False True ``` https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/generation_flax_utils.py#L266-L268 In the preceding lines of code, if `decoder_start_token_id` has the value `0` (valid): - `if decoder_start_token_id` will be `False` - `decoder_start_token_id` will be set to `self.config.decoder_start_token_id` The correct behaviour should be that if `decoder_start_token_id` has the value `0`, it remains set to `0`, and not changed to `self.config.decoder_start_token_id`.
05-02-2022 08:05:06
05-02-2022 08:05:06
_The documentation is not available anymore as the PR was closed or merged._
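A minimal, self-contained sketch of the truthiness pitfall and the `is None` check that avoids it; the variable names mirror the snippet above and are not the exact diff.

```python
# Buggy pattern: 0 is falsy, so a valid decoder_start_token_id of 0 gets silently replaced.
decoder_start_token_id = 0
config_decoder_start_token_id = 2  # stand-in for self.config.decoder_start_token_id

chosen = decoder_start_token_id if decoder_start_token_id else config_decoder_start_token_id
print(chosen)  # 2 -- wrong, the explicitly passed 0 was discarded

# Fixed pattern: fall back to the config value only when nothing was passed at all.
chosen = (
    decoder_start_token_id
    if decoder_start_token_id is not None
    else config_decoder_start_token_id
)
print(chosen)  # 0 -- the explicitly passed id is kept
```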
transformers
17,034
closed
Move test model folders
# What does this PR do? As discussed offline, this PR moves model specific test folders (e.g. `tests/bert`) to `tests/models` (e.g. `tests/models/bert`) In addition to the necessary changes on `import`, the following changes are made: - In some test files regarding processors (tokenizer/feature extractor, etc.), change ``` SAMPLE_ROBERTA_CONFIG = os.path.join(os.path.dirname(os.path.abspath(__file__)), ".../fixtures/dummy-config.json") ``` to ``` SAMPLE_ROBERTA_CONFIG = get_tests_dir("fixtures/dummy-config.json") ``` (see [the commit](https://github.com/huggingface/transformers/pull/17034/commits/ee9956cf4181932c821d4c3c28677ac33660496a)) - The changes (**to be reviewed particularly**) - `.circleci/config.yml` - `.github/workflows/self-scheduled.yml` - `src/transformers/commands/add_new_model.py` - `src/transformers/commands/add_new_model_like.py` - `utils/check_repo.py` - `utils/notification_service.py` - `utils/test_fetcher.py` ### Remarks: - The `self-push` result is [here](https://github.com/huggingface/transformers/actions/runs/2256959215) - The slack report job has `Artifact was not found, job was probably canceled.`, but this issue exists for some time. My plan is to continue the task of changing self-push report format (and fix this issue) - The `run_tests_flax_gpu` failure is just the same as in other runs. This is not in the scope of this PR. - The scheduled CI (partial) result is [here](https://github.com/huggingface/transformers/actions/runs/2254833118). The report is available on Slack. - On the GitHub Actions page, the jobs have name like `Model tests (models/albert, single-gpu-docker)`. It becomes a bit long (with `models/`). - Same for the Slack report ``` 0 | 0 | 3 | 0 | 0 | models_auto ``` - So far I only ran a subset of models. From the results, I think the PR is ready. We can run a full suite of tests before merge.
05-02-2022 08:00:41
05-02-2022 08:00:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @stas00 Thank you for the feedbacks. - Regarding `difficult to read` (the long command): totally agreed! I had the same feeling, and thought might be a good idea to create a tiny python script and just call it. Otherwise, we can use what you proposed above (after some tests). - About `parents[4]`: thank you for the information! - I can check `TestCasePlus` later. I would prefer to merge as it is now, and work on these points in another PR. The main reason is that I ran the full suite of tests, the results look all good, and would like to merge with a version that has been fully tested :-)<|||||>Merged now (after rebase on main for the merged `flax_bert` and `yolos` PRs).
transformers
17,033
closed
Multi GPU training crashes when running run_mlm_wwm.py
### System Info ```shell I am running this script on a 8 A100 cards cluster. gcc/11.2.0 python/3.8/3.8.13 cuda/11.3/11.3.1 cudnn/8.2/8.2.4 nccl/2.9/2.9.9-1 accelerate 0.7.1 datasets 2.1.0 huggingface-hub 0.5.1 protobuf 3.20.1 sentencepiece 0.1.96 tokenizers 0.12.1 torch 1.11.0+cu113 torchaudio 0.11.0+cu113 torchvision 0.12.0+cu113 transformers 4.18.0 ``` ### Who can help? @wlhgtc Sorry to bother you again, please check this issue if you have time🙏. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ### Dataset example My dataset is Chinese, Japanese and Korean's Wikipedia. And I generate ref files for not only Chinese but for all whole words. ``` mrph_train.txt 統一 獄中 者 組合 統一 獄中 者 組合 ( とういつ ごくちゅう しゃく みあい ) は 、 日本 の 刑務所 に 在監 して いる 受刑 者 に よって 結成 さ れた 組織 。 現在 、 日本 で 唯一 の 「 囚人 組合 」 組織 である 。 沿革 . 明治 時代 以降 、 日本 の 刑務所 で は 受刑 者 自身 が 行 刑 の 運営 に あたる 「 囚人 自治 」 を 認めて い ない 。 これ は 江戸 時代 の 伝馬 町 牢 屋敷 の ように 受刑 者 の 代表 である 牢 名主 が 牢獄 を 仕切る こと で 、 結果 と して 受刑 者 の 処遇 が 劣悪 化 した こと に 対する 反省 から 来て いる 。 ref_train.txt [2, 4, 7] [2, 4, 7, 10, 11, 12, 14, 15, 16, 17, 19, 20, 22, 23, 28, 31, 32, 35, 37, 39, 41, 45, 46, 48, 51, 53, 56, 59, 62, 66, 68, 71, 73, 74] [2] [2, 4, 6, 9, 12, 13, 17, 20, 26, 29, 30, 33, 35, 39, 40, 43, 46, 49, 51, 54, 58, 61, 62, 64, 68, 70, 71, 74, 77, 80, 81, 83, 87, 90, 92, 96, 99, 102, 104, 107, 108, 110, 112, 114, 116] ``` ### Command ```shell torchrun --nproc_per_node 8 run_mlm_wwm.py \ --model_type bert \ --tokenizer_name tokenizer.json \ --train_file mrph_train.txt \ --validation_file mrph_test.txt \ --train_ref_file ref_train.txt \ --validation_ref_file ref_test.txt \ --config_overrides="pad_token_id=2,hidden_size=512,num_attention_heads=8,num_hidden_layers=4" \ --max_seq_length 128 \ --fp16 \ --per_device_train_batch_size 256 \ --per_device_eval_batch_size 256 \ --gradient_accumulation_steps 2 \ --max_steps 500000 \ --save_steps 1000 \ --save_total_limit 5 \ --do_train \ --do_eval \ ``` ### Change in `run_mlm_wwm.py` - To use my own tokenizer, I changed ```python3 if model_args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name, **tokenizer_kwargs ) elif model_args.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, **tokenizer_kwargs ) else: raise ValueError( "You are instantiating a new tokenizer from scratch. This is not supported by this script." "You can do it from another script, save it, and load it from here, using --tokenizer_name." ) ``` to ```python3 tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` ### Expected behavior ```shell ### Bug info After loading dataset, it should begin training, but PyTorch crashed at this time. 
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380593 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380595 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380596 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380597 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380598 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380599 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380600 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 1 (pid: 2380594) of binary: /local/9884269.1.gpua/work/bin/python3 Traceback (most recent call last): File "/local/9884269.1.gpua/work/bin/torchrun", line 8, in <module> sys.exit(main()) File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper return f(*args, **kwargs) File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main run(args) File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run elastic_launch( File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ``` ### Have tried - use `gloo` for torch's backend instead of `nccl` ❌ - use torch1.10.0 instead of 1.11.0 ❌ - use V100 cluster instead of A100 ❌
05-02-2022 01:53:28
05-02-2022 01:53:28
@conan1024hao Sorry I don't know more details about multi gpu training, but you should make sure your code works well in single GPU. And then you could try code like this: ```python export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node 8 run_mlm_wwm.py \ --model_type bert \ --tokenizer_name tokenizer.json \ --train_file mrph_train.txt \ --validation_file mrph_test.txt \ --train_ref_file ref_train.txt \ --validation_ref_file ref_test.txt \ --config_overrides="pad_token_id=2,hidden_size=512,num_attention_heads=8,num_hidden_layers=4" \ --max_seq_length 128 \ --fp16 \ --per_device_train_batch_size 256 \ --per_device_eval_batch_size 256 \ --gradient_accumulation_steps 2 \ --max_steps 500000 \ --save_steps 1000 \ --save_total_limit 5 \ --do_train \ --do_eval \ ```<|||||>@wlhgtc Thank you for your advice. There does exist some bug info which will not be printed when in multi GPU mode. However, after I making sure it can run in single GPU, this error still exist. I will keep this issue open for a solution in the future.<|||||>@wlhgtc An update. I found that multi GPU crash when running `add_chinese_references()`. I ran the whole script successfully after I made the dataset much more smaller. A temprory solution will be preprocessing and saving the tokenized dataset locally by CPU and then start training by multi GPU.<|||||>> @wlhgtc An update. I found that multi GPU crash when running `add_chinese_references()`. I ran the whole script successfully after I made the dataset much more smaller. A temprory solution will be preprocessing and saving the tokenized dataset locally by CPU and then start training by multi GPU. yeah and I met the same problem. This operation of "add_column" needs huge memory, related to some issue in `datasets` [this](https://github.com/huggingface/datasets/issues/1825). There are two ways: 1. preprocess ref files and merge all info("input_ids",...,"chinese_ref") to a json file, avoid tokenized dataset all the time. 2. `datasets.set_transform(tokenize_function)` to lazy load your dataset. Hope it could help.
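A rough sketch of the lazy-tokenization workaround mentioned in the last comment, assuming the `datasets` library's `Dataset.set_transform`; the tokenizer, file name, and column names are placeholders.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder tokenizer
raw_dataset = load_dataset("text", data_files={"train": "mrph_train.txt"})["train"]

def tokenize_function(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# `set_transform` applies the function on the fly when rows are accessed,
# so the tokenized corpus is never materialized in RAM up front.
raw_dataset.set_transform(tokenize_function)
```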
transformers
17,032
closed
[Trainer]: Resume training with `save_strategy="epoch"` does not load RNG state
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.15.36-1-lts-x86_64-with-glibc2.33 - Python version: 3.8.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I provide a MWE for this issue by forking `transformers` and writing a failing test case. This can be reproduced via the steps below: 1. `git clone https://github.com/atreyasha/transformers` 2. Create a virtual environment and install the `[dev-torch]` extras 3. `pytest tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_randomness_from_epoch` **Edit**: I removed the forked repository as the diff has been incorporated in the PR mentioned below. Here is the relevant test snippet where I added `save_strategy="epoch"` and adjusted the checkpoint number to reflect the steps in one epoch: ```python @require_torch_non_multi_gpu def test_resume_training_with_randomness_from_epoch(self): # This test will fail flakily for more than 1 GPUs since the result will be slightly more different # TODO: investigate why it fails for 2 GPUs? if torch.cuda.is_available(): torch.backends.cudnn.deterministic = True train_dataset = RegressionDataset(length=128) eval_dataset = RegressionDataset() config = RegressionModelConfig(a=0, b=2) model = RegressionRandomPreTrainedModel(config) tmp_dir = self.get_auto_remove_tmp_dir() args = RegressionTrainingArguments(tmp_dir, save_strategy="epoch", learning_rate=0.1) trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() (a, b) = trainer.model.a.item(), trainer.model.b.item() model = RegressionRandomPreTrainedModel(config) trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train(resume_from_checkpoint=os.path.join(tmp_dir, "checkpoint-16")) (a1, b1) = trainer.model.a.item(), trainer.model.b.item() self.assertAlmostEqual(a, a1, delta=1e-8) self.assertAlmostEqual(b, b1, delta=1e-8) ``` This should produce an error because the regression variables are not the same or similar: ```console > self.assertAlmostEqual(a, a1, delta=1e-8) E AssertionError: 2.0825276374816895 != 2.081479072570801 within 1e-08 delta (0.0010485649108886719 difference) ``` ### Cause The RNG state is only loaded when resuming a checkpoint that completed non-zero steps in the current epoch. If the checkpoint was saved at the end of the epoch, `steps_trained_in_current_epoch` would be `0` for the new epoch and the saved RNG state would not be loaded. https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/trainer.py#L1423-L1435 ### Possible fix Check if the checkpoint to resume is a whole-number multiple of steps per epoch. If this is true, then load the RNG state once before entering the `epoch_iterator` loop above. ### Expected behavior The test case above should pass, meaning that the regression variables should be the same or similar (within the delta).
05-01-2022 17:43:22
05-01-2022 17:43:22
Thanks for the fully reproducible example, which will become a new test in our CI :-) This was a bit painful to debug, but the PR above should solve the issue.<|||||>Thanks @sgugger for the quick response
transformers
17,031
closed
Training a tokenizer - add argument for preprocessing the input
### Feature request I am training my huggingface tokenizer on my own corpora, and I want to save it with a preprocessing step. That is, if I pass some text to it, I want it to apply the preprocessing and then tokenize the text, instead of explicitly preprocessing it before that. A good example is BERTweet: https://github.com/VinAIResearch/BERTweet and their `tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)` (here normalization=True indicates that the input will be preprocessed according to some function). I want the same to apply when I train a tokenizer with a custom preprocessing function. My code is: ``` from pathlib import Path from tokenizers import ByteLevelBPETokenizer def preprocess(text): return text paths = [str(x) for x in Path('data').glob('*.txt')] tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=50_000, min_frequency=2, special_tokens=['<s>', '<pad>', '</s>', '<unk>', '<mask>']) tokenizer.save_model('CustomBertTokenizer') ``` Now, when I load the tokenizer: ``` from transformers import RobertaTokenizerFast sentence = 'Hey' tokenizer = RobertaTokenizerFast.from_pretrained('CustomBertTokenizer') tokenizer(sentence) ``` I want `sentence` to be preprocessed with the `preprocess` function, and then tokenized. So I want to pass something like an argument `preprocessing=True`. How can I achieve this? ### Motivation . ### Your contribution .
05-01-2022 15:19:46
05-01-2022 15:19:46
I have also posted my question in the huggingface forum: https://discuss.huggingface.co/t/save-tokenizer-with-argument/17389<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,030
closed
Added XLM onnx config
# What does this PR do? Added XLM OnnxConfig to make this model available for conversion. @ChainYo ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/16308 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - ~~[ ] Did you write any new necessary tests?~~ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
05-01-2022 12:22:28
05-01-2022 12:22:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello thanks for the PR it looks really clean!! 🤗 If you have time, it could be nice to upload a converted XLM model to `ONNXConfig for all` organisation on Hugging Face's hub.<|||||>> Thanks for this clean PR @nandwalritik fire ! Apart from a small comment about the formatting changes, this LGTM :) > > Could you please confirm that the slow tests pass with: > > ``` > RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "xlm" > ``` @lewtun The test cases are successfully passing on running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "xlm"`.<|||||>Thanks for checking the tests pass @nandwalritik ! Could you please rebase on `main` to account for a recent refactoring that was done to order the model names in `features.py` alphabetically?<|||||>Hey @nandwalritik are you struggling with the commits or rebasing a branch ?<|||||>> Hey @nandwalritik are you struggling with the commits or rebasing a branch ? I just saw that there were merge conflicts, since `main` was updated, so I rebased again. Did I rebased incorrectly? Steps which I followed to rebase:- * Fetch and merge upstream * git pull origin main * git checkout featureBranch * git rebase main And then I solved the merge conflicts manually wherever were required.<|||||>Yes it seems that there is more than your commits attached to this PR<|||||>Nice it seems to be better ! <|||||>Thanks again for your contribution!
transformers
17,029
closed
add `mobilebert` onnx configs
# What does this PR do? This PR adds MobileBert OnnxConfig to make this model available for conversion. #16308 ## Who can review? @lewtun @LysandreJik Anyone in the community is free to review the PR once the tests have passed.
05-01-2022 12:19:51
05-01-2022 12:19:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @manandey thanks for the PR, it looks really clean. Did you try to convert one MobileBERT model with this config? It could be nice to upload a converted MobileBERT model of your choice to the `ONNXConfig for all` organisation if you have time.<|||||>Hi @lewtun, I tried to address the fixes you had suggested, and the tests are passing after running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "mobilebert"`. :)
transformers
17,028
closed
Adding a ISSUE_TEMPLATE for the translation of docs
### Feature request Users can create issues to translate the docs to several languages. This technique worked for the translation of [the course](https://github.com/huggingface/course/issues) (cc @lewtun). Since we have [several docs to translate](https://github.com/huggingface/transformers/issues/15947), a template would be adequate. Currently, the [ISSUE TEMPLATES](https://github.com/huggingface/transformers/tree/main/.github/ISSUE_TEMPLATE) of the Transformers library are .yml files. However, I would prefer to write a PR with an MD template (similar to the [one in the Course](https://github.com/huggingface/course/blob/main/.github/ISSUE_TEMPLATE/translations.md)). We do not need to ask for info from the issue writer, except maybe a field asking whether they are willing to take leadership of the translation they are proposing. ### Motivation Allowing users to create their own issues (and possibly take ownership/leadership) would allow for a faster translation. ### Your contribution If this is accepted, I can create a PR with the issue template.
04-30-2022 23:12:13
04-30-2022 23:12:13
I closed the issue for the moment.<|||||>This issue was mentioned in a previous community issue #17404 for translating into Italian 🇮🇹.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,027
closed
Add XLNet OnnxConfig
# What does this PR do? 1. Add XLNet OnnxConfig to make this model available for conversion. 2. In order to make the onnx export work, I had to remove the `**kwargs` argument in the `forward` function of the `XLNet` models. Seems like the `**kwargs` was on deprecation warning anyway and removing it didn't break any tests. Here is the reproduction and the error log of the OnnxExport if the `**kwargs` argument doesn't get removed. ``` from typing import Mapping, OrderedDict from pathlib import Path from transformers.onnx import OnnxConfig, export from transformers import AutoTokenizer, AutoModel, AutoConfig class XLNetOnnxConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: if self.task == "multiple-choice": dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"} else: dynamic_axis = {0: "batch", 1: "sequence"} return OrderedDict( [ ("input_ids", dynamic_axis), ("attention_mask", dynamic_axis), ("token_type_ids", dynamic_axis) ] ) config = AutoConfig.from_pretrained("xlnet-base-cased") onnx_config = XLNetOnnxConfig(config, task="sequence-classification") onnx_path = Path("model.onnx") base_model = AutoModel.from_pretrained("xlnet-base-cased") tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ``` ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [1], in <module> 28 base_model = AutoModel.from_pretrained("xlnet-base-cased") 29 tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") ---> 31 onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) File /opt/homebrew/lib/python3.9/site-packages/transformers/onnx/convert.py:116, in export(tokenizer, model, config, opset, output) 113 config.patch_ops() 115 # export can works with named args but the dict containing named args as to be last element of the args tuple --> 116 export( 117 model, 118 (model_inputs,), 119 f=output.as_posix(), 120 input_names=list(config.inputs.keys()), 121 output_names=onnx_outputs, 122 dynamic_axes={name: axes for name, axes in chain(config.inputs.items(), config.outputs.items())}, 123 do_constant_folding=True, 124 use_external_data_format=config.use_external_data_format(model.num_parameters()), 125 enable_onnx_checker=True, 126 opset_version=opset, 127 ) 129 config.restore_ops() 131 return matched_inputs, onnx_outputs File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/__init__.py:316, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format) 38 r""" 39 Exports a model into ONNX format. If ``model`` is not a 40 :class:`torch.jit.ScriptModule` nor a :class:`torch.jit.ScriptFunction`, this runs (...) 312 model to the file ``f`` even if this is raised. 
313 """ 315 from torch.onnx import utils --> 316 return utils.export(model, args, f, export_params, verbose, training, 317 input_names, output_names, operator_export_type, opset_version, 318 _retain_param_name, do_constant_folding, example_outputs, 319 strip_doc_string, dynamic_axes, keep_initializers_as_inputs, 320 custom_opsets, enable_onnx_checker, use_external_data_format) File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:107, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format) 102 if use_external_data_format is not None: 103 warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next " 104 "PyTorch release. The code will work as it is False if models are not larger than 2GB, " 105 "Otherwise set to False because of size limits imposed by Protocol Buffers.") --> 107 _export(model, args, f, export_params, verbose, training, input_names, output_names, 108 operator_export_type=operator_export_type, opset_version=opset_version, 109 do_constant_folding=do_constant_folding, example_outputs=example_outputs, 110 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs, 111 custom_opsets=custom_opsets, use_external_data_format=use_external_data_format) File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:724, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, use_external_data_format, onnx_shape_inference) 720 dynamic_axes = {} 721 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names) 723 graph, params_dict, torch_out = \ --> 724 _model_to_graph(model, args, verbose, input_names, 725 output_names, operator_export_type, 726 example_outputs, val_do_constant_folding, 727 fixed_batch_size=fixed_batch_size, 728 training=training, 729 dynamic_axes=dynamic_axes) 731 # TODO: Don't allocate a in-memory string for the protobuf 732 defer_weight_export = export_type is not ExportTypes.PROTOBUF_FILE File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:493, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, example_outputs, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes) 490 if isinstance(args, (torch.Tensor, int, float, bool)): 491 args = (args, ) --> 493 graph, params, torch_out, module = _create_jit_graph(model, args) 495 params_dict = _get_named_param_dict(graph, params) 497 graph = _optimize_graph(graph, operator_export_type, 498 _disable_torch_constant_prop=_disable_torch_constant_prop, 499 fixed_batch_size=fixed_batch_size, params_dict=params_dict, 500 dynamic_axes=dynamic_axes, input_names=input_names, 501 module=module) File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:437, in _create_jit_graph(model, args) 435 return graph, params, torch_out, None 436 else: --> 437 graph, torch_out = _trace_and_get_graph_from_model(model, args) 438 state_dict = _unique_state_dict(model) 439 params = list(state_dict.values()) File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:388, in _trace_and_get_graph_from_model(model, 
args) 381 def _trace_and_get_graph_from_model(model, args): 382 383 # A basic sanity check: make sure the state_dict keys are the same 384 # before and after running the model. Fail fast! 385 orig_state_dict_keys = _unique_state_dict(model).keys() 387 trace_graph, torch_out, inputs_states = \ --> 388 torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True) 389 warn_on_static_input_change(inputs_states) 391 if orig_state_dict_keys != _unique_state_dict(model).keys(): File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:1166, in _get_trace_graph(f, args, kwargs, strict, _force_outplace, return_inputs, _return_inputs_states) 1164 if not isinstance(args, tuple): 1165 args = (args,) -> 1166 outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs) 1167 return outs File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs) 1098 # If we don't have any hooks, we want to skip the rest of the logic in 1099 # this function, and just call forward. 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:127, in ONNXTracedModule.forward(self, *args) 124 else: 125 return tuple(out_vars) --> 127 graph, out = torch._C._create_graph_by_tracing( 128 wrapper, 129 in_vars + module_state, 130 _create_interpreter_name_lookup_fn(), 131 self.strict, 132 self._force_outplace, 133 ) 135 if self._return_inputs: 136 return graph, outs[0], ret_inputs[0] File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:118, in ONNXTracedModule.forward.<locals>.wrapper(*args) 116 if self._return_inputs_states: 117 inputs_states.append(_unflatten(in_args, in_desc)) --> 118 outs.append(self.inner(*trace_inputs)) 119 if self._return_inputs_states: 120 inputs_states[0] = (inputs_states[0], trace_inputs) File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs) 1098 # If we don't have any hooks, we want to skip the rest of the logic in 1099 # this function, and just call forward. 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1090, in Module._slow_forward(self, *input, **kwargs) 1088 recording_scopes = False 1089 try: -> 1090 result = self.forward(*input, **kwargs) 1091 finally: 1092 if recording_scopes: TypeError: forward() takes from 1 to 14 positional arguments but 15 were given​ ``` Fixes #16308 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? 
Please add a link to it if that's the case. #16308 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ChainYo for the OnnxConfig @patrickvonplaten and @sgugger for the changes in `modeling_xlnet.py` Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-30-2022 14:55:51
04-30-2022 14:55:51
Hi @sijunhe Nice PR, but could you rebase tre branch to avoid getting all the recent commits on this PR ?<|||||>Hi @sijunhe thanks for this PR! Indeed as @ChainYo suggests, could you please rebase on `main` so that it is a bit easier to review the changes from your PR?<|||||>Opps! Sorry about that. Merged! @lewtun @ChainYo <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17027). All of your documentation changes will be reflected on that endpoint.<|||||>Any progress here? @lewtun <|||||>Thanks for the review folks. I tried what @lewtun suggested about stripping the kwargs but I couldn't really make it work. `model.forward = forward_without_kwargs(model.forward)` means `forward_without_kwargs` would need to change the input signature of `model.forward` and I didn't know if python can do that. If I try to return a new function based on `model.forward`, the call then becomes a infinite recursion. Instead I took @patrickvonplaten's suggestion and replace `**kwargs` with a single `use_cache` arg.<|||||>> Thanks for the review folks. > > I tried what @lewtun suggested about stripping the kwargs but I couldn't really make it work. `model.forward = forward_without_kwargs(model.forward)` means `forward_without_kwargs` would need to change the input signature of `model.forward` and I didn't know if python can do that. If I try to return a new function based on `model.forward`, the call then becomes a infinite recursion. > > Instead I took @patrickvonplaten's suggestion and replace `**kwargs` with a single `use_cache` arg. Since it's an edge case I'm ok with this! Thanks for making the change @sijunhe - what do you think @LysandreJik @sgugger we should add to the doc string that the param is deprecated as well I guess<|||||>No, the param is not documented since it's deprecated, and it should stay that way IMO.<|||||>If I'm not mistaken, can't we define a wrapper function to strip out `**kwargs` from the function signature? This is roughly what I had in mind to handle the forward pass: ```python from transformers import AutoModel import inspect import functools def forward_without_kwargs(forward): @functools.wraps(forward) def wrapper(*args, **kwargs): return forward(*args, **kwargs) # Override signature and strip out kwargs sig = inspect.signature(forward) sig = sig.replace(parameters=tuple(sig.parameters.values())[:-1]) wrapper.__signature__ = sig return wrapper # Load an XLNet checkpoint model = AutoModel.from_pretrained("xlnet-base-cased") # Has kwargs inspect.signature(model.forward) # Has no kwargs model.forward = forward_without_kwargs(model.forward) inspect.signature(model.forward) ``` This function could live in `onnx/utils.py` and then be called within the `export_pytorch()` function by checking if `kwargs` is present in the model's forward signature and stripping it out if so. Of course, this would also need to be tested properly - just an idea :)<|||||>> If I'm not mistaken, can't we define a wrapper function to strip out `**kwargs` from the function signature? This is roughly what I had in mind to handle the forward pass: Also fine with me<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,026
closed
Bert: relative_key position embedding causes error for long sequences
### System Info
```shell
- `transformers` version: 4.9.2
- Platform: Linux-5.14.15-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Copy paste script from below and run
2. Script:
```python
import torch
from transformers import BertConfig, BertModel

config = {
    'hidden_size': 512,
    'num_attention_heads': 8,
    'position_embedding_type': 'relative_key',
    'max_seq_length': 10,
    'max_position_embeddings': 10
}
encoder_config = BertConfig(**config)
model = BertModel(encoder_config)
batch_size, src_len = 1, 11
x = torch.zeros(batch_size, src_len).int()
model(input_ids=x)
```
### Expected behavior
```shell
Since relative attention is used (Shaw et al.) the script should run without any errors. However, the script breaks because at least two implementation details (in the PyTorch implementation) prevent this use case:
1. Token type ids are buffered for a specific max. length: https://github.com/huggingface/transformers/blob/ede5e041911afed37c8284a980342d4a2625b1d5/src/transformers/models/bert/modeling_bert.py#L223
2. The distance in self-attention is not clipped to the maximum distance (as in Shaw et al.): https://github.com/huggingface/transformers/blob/ede5e041911afed37c8284a980342d4a2625b1d5/src/transformers/models/bert/modeling_bert.py#L328
There is currently no apparent way to prevent this (especially when the model is trained).
```
04-30-2022 13:18:52
04-30-2022 13:18:52
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,025
closed
force_words_ids not working
### System Info ```shell No inception occurs. ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import (GPT2LMHeadModel, GPT2Tokenizer, GPT2Config) m = GPT2LMHeadModel.from_pretrained('gpt2') t = GPT2Tokenizer.from_pretrained('gpt2') prompt = "I drink cocacola." input = t(prompt, return_tensors="pt") bad_words = t("alcohol", add_prefix_space=True, add_special_tokens=False).input_ids force_words = t("very sweet", add_prefix_space=True, add_special_tokens=False).input_ids print("bad_words: ", bad_words) print("force_words: ", force_words) gen = m.generate(**input, do_sample=True, temperature=0.9, num_beams = 10, top_p=1.0, bad_words_ids = [bad_words], force_words_ids=[force_words], max_length=100) gen = t.batch_decode(gen) if_exist_very = 'very' in gen if_exist_sweet = 'sweet' in gen print("gen: ", gen) print("if_exist_very: ", if_exist_very) print("if_exist_sweet: ", if_exist_sweet) ### Expected behavior ```shell Hi, I tried to use generate() with force_words_ids. But it does not work. bad_words_ids seems to work though. Here are the outputs: gen: ["I drink cocacola. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't"] if_exist_very: False if_exist_sweet: False ```
04-30-2022 12:59:35
04-30-2022 12:59:35
I suspect this is a version issue. The [constrained beam search](https://huggingface.co/blog/constrained-beam-search) wasn't introduced until 4.17 so if you are using an older version, that might be why it didn't work. Your code worked for me on 4.18 but not on 4.15. <|||||>Thanks @sijunhe! I change the version to 4.18 and it works. In addition to use force_word_id to make sure the generation contains some specific words, I'd like the forced words shown in one sentence in a generation, at best in a specified order. Would you be so kind to give me some advice on whether there's parameter in generate() function that can help me do this? Or I have to modify generate() function from its source code? Thanks!<|||||>> I'd like the forced words shown in one sentence in a generation I think this is the current behavior. As long as you are not using the Disjunctive Constraints, all the input_ids listed in `forced_word_id` should show up in the generation. > at best in a specified order I don't think the current `generation()` supports this yet. However, it is mentioned in the blog post that I lined above as future work, something like a `OrderedConstraint` that would inherit from the `Constraint` class. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, This could be a problem or not based on how `Phrasalconstraints` is implemented. I am using transformer==4.18. I observe that the forced words do not always appear in my generations. My guess is that the chance of having forced words in generation is limited by `num_beams`, as I find higher `num_beams` gives me more generations with forced words. I also notice that if a forced word is present in the prompt (or starting text), then basically it will not be forced to be generated again? Is that right? Can you please provide some insights?
transformers
17,024
closed
Clean up vision tests
# What does this PR do? This is a follow-up of #16799. It took me way too long to realize I don't need to overwrite `test_attention_outputs` and `test_hidden_states_outputs` as I can just set the `seq_length` attribute of the ModelTester. 😂
04-30-2022 09:10:36
04-30-2022 09:10:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,023
closed
wavlm s3prl emotion recognition
Hi, I have trained a downstream emotion recognition task using s3prl wavlm, where a checkpoint has been saved as `dev-best.ckpt`. The inference setup in s3prl is not ideal: it requires batches of wav files split by session rather than a single wav file, which would be more useful for production endpoints. @anton-l can you please share how you ported the wav2vec2-er s3prl model to do inference as shown below? ![image](https://user-images.githubusercontent.com/52277510/166095746-7970ef1b-bdb3-4db4-98f2-14395337b3d9.png)
04-30-2022 07:07:23
04-30-2022 07:07:23
Hi @sciai-ai! You can find the rough model conversion script here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py The command is:
```bash
python convert_wavlm_original_s3prl_checkpoint_to_pytorch.py \
    --base_model_name "microsoft/wavlm-base (depends on your base model)" \
    --config_path "hf_model/config.json (should be modified by hand, probably just add id2label and label2id fields to the base WavLM config.json)" \
    --checkpoint_path "path/to/s3prl/dev-best.ckpt" \
    --model_dump_path "hf_model/output/dir/"
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,022
closed
update docs of length_penalty
# What does this PR do? This PR updates the docs of `length_penalty`, fixing the issues mentioned in #16930. cc @patrickvonplaten
04-30-2022 06:21:45
04-30-2022 06:21:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,021
closed
Added es version of language_modeling.mdx doc
# What does this PR do? Fixes (#15947) Added Spanish version of the language_modeling.mdx documentation file. ### Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). @sgugger
04-30-2022 05:44:21
04-30-2022 05:44:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>@omarespejel Could you confirm this is good to merge?
transformers
17,020
closed
add torch.no_grad when in eval mode
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes (https://github.com/huggingface/transformers/issues/17019) add `torch.no_grad` in some `run_xxx_no_trainer.py` file when in eval mode ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-30-2022 04:55:42
04-30-2022 04:55:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,019
closed
Missing torch.no_grad in run_xxx_no_trainer.py
### System Info ```shell None ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Missing `with torch.no_grad():` in some `run_xxx_no_trainer.py` files. ### Expected behavior ```shell add `with torch.no_grad():`. ```
04-30-2022 04:52:58
04-30-2022 04:52:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,018
closed
Fix typo in RetriBertTokenizer docstring
# What does this PR do? Fixes typo in RetriBertTokenizer docstring. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
04-30-2022 02:23:42
04-30-2022 02:23:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,017
closed
Add missing RetriBERT tokenizer tests
# What does this PR do? Addresses issue [#16627](https://github.com/huggingface/transformers/issues/16627). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @SaulLu ## Notes 1. There was no folder `tests/retribert/` yet, so I created one and put an `__init__.py` in it. Is there anything else I have to do for these tests to get picked up by CI? 2. `RetriBertTokenizer` is identical to `BertTokenizer`, so I mostly just duplicated `BertTokenizationTest`. Is that fine or should I rather a) write new tests from scratch or b) figure out a way to reuse the code in `BertTokenizationTest`?
04-30-2022 02:18:20
04-30-2022 02:18:20
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @SaulLu, can you give me some guidance on how to proceed? The CI is throwing the following error: ```Make sure the names of these test files match the name of the module or utils they are testing, or adapt the constant `SPECIAL_MODULE_TO_TEST_MAP` in `utils/tests_fetcher.py` to add them. If your test file is triggered separately and is not supposed to be run by the regular CI, add it to the `EXPECTED_TEST_FILES_NEVER_TOUCHED` constant instead.``` However, I think the naming is correct (`test_tokenization_retribert.py` tests `tokenization_retribert.py`) and adding it to neither constant makes sense to me. I'm also unable to reproduce this locally with `pytest`. Do you have any idea what triggered this?<|||||>The CI error seems to me to come from the fact that 3 days ago there was a re-organisation of the test folder (https://github.com/huggingface/transformers/pull/17034). To solve this, I suggest to 1) merge the latest changes to main in your branch and 2) move the tests you added to conform to the new organisation (`tests/retribert` -> `tests/models/retribert`). Keep me updated! :smile: <|||||>Hi @SaulLu, thank you very much for the reply. I think I've got it now! Can you take another look?<|||||>My pleasure! I would like to keep contributing and I was wondering if you could help me with a question related to that @SaulLu. I noticed that the `RetriBERT` model itself is missing test files as well and I would like to write those. How do I make sure that no one else is writing them concurrently? Do I open an issue or perhaps a WIP pull request? I have already checked that there currently is no open issue or pull request related to this.
transformers
17,016
closed
Optionally return past key values from generate
### Feature request The idea would be to optionally return `past_key_values` inside the generation objects (`SampleEncoderDecoderOutput`, etc). This could be controlled by a flag called `output_past_key_values` that's passed to `generate` and then forwarded to `sample`, etc. ### Motivation Perhaps this is niche, but my team and I often need to obtain the past keys and values when generating in order to manipulate them a bit and then feed them back in for subsequent calls to `generate`. We currently do this with a custom version of `sample`, but this results in us having to copy and paste a lot of code. Would it be possible to allow `past_key_values` to be optionally returned by `generate`? ### Your contribution If you all approve of the feature idea, I'd be able to implement it and submit a PR.
04-29-2022 22:28:54
04-29-2022 22:28:54
Hello, I have the exactly same issue! Could you please share an implementation of yours that return past key value when using generate?<|||||>> Hello, I have the exactly same issue! Could you please share an implementation of yours that return past key value when using generate? Hi, what we do is something like this: ```python class CustomSampleMixin: """Have an HF model return past_key_values from generate by inheriting this mixin. For example: ``` class YourModel(CustomSampleMixin, T5ForConditionalGeneration) ``` """ def sample(self, all_the_normal_args, output_past_key_values): """Custom sample method that's mostly copied from Huggingface's generation_utils.sample method """ ... past_key_values = None while True: # forward pass to get next token outputs = self(**model_inputs, return_dict=True) past_key_values = outputs.past_key_values ... if return_dict_in_generate: return CustomGenerationOutput( sequences=input_ids, scores=scores, decoder_attentions=decoder_attentions, cross_attentions=cross_attentions, decoder_hidden_states=decoder_hidden_states, past_key_values=past_key_values if output_past_key_values else None, encoder_outputs=model_kwargs["encoder_outputs"], ) return input_ids ``` Then when we're using it: ```python past_key_values = None while True: model_kwargs["past"] = past_key_values # Note that HF calls it "past" in the `model_kwargs` outputs = model.generate(output_past_key_values=True, **model_kwargs) past_key_values = outputs.past_key_values # post-process outputs, post-process past_key_values ... ``` This approach works for our purposes but does mean that we need to copy and maintain a lot of extra code from Huggingface. Plus, if you want to get `past_key_values` from `beam_sample`, `greedy_search`, etc instead of just `sample`, you have to make custom versions of each of those as well.<|||||>Thank you for sharing!<|||||>Hi @patrickvonplaten, what do you think about this idea?<|||||>I'd actually be fine with adding this to main generate, maybe already by default as soon as `return_dict_in_generate` is set to True, not sure if we necessarily need a new `output_...` input arg. @gante what do you think?<|||||>@patil-suraj what do you think here?<|||||>I'm fine with returning `past_key_values` from `generate` since we already allow to return other model outputs like attentions and hidden_states. And a new argument is not necessary IMO, since the model always returns past when `use_cache=True` (the default case), so a new argument to control this won't be necessary.<|||||>I agree, we can return `past_key_values` when `use_cache=True`. It will, however, be an API change (adds a field to the output, which is an `OrderedDict` subclass), so any user iterating over the output will be impacted. I suspect it is a very uncommon use case, and thus the utility of exposing `past_key_values` exceeds potential pain points. WDYT @patil-suraj @patrickvonplaten? If you agree, I can add this to my to-do list.<|||||>Think it's fine to extend the len of the tuple / `ModelOutput`, we don't consider this a breaking change. @patil-suraj do you want to give it a try to implement this? @gante you could then fully focus on finishing TF generate :heart_eyes: <|||||>Will open a PR for it this week :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Associated PR hasn't really been merged. So this issue cannot be closed.
transformers
17,015
closed
Result of new doc style with fixes
# What does this PR do? This PR shows the changes in Transformers that will be occasioned by the new release of `doc-builder` with some fixes in the style command. Code quality will fail until the next release of `hf-doc-builder`; this PR will be merged just after.
04-29-2022 20:49:53
04-29-2022 20:49:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,014
closed
Replace dict/BatchEncoding instance checks by Mapping
# What does this PR do? We have several instance checks in the code base for `(dict, BatchEncoding)` (because `BatchEncoding` is a `UserDict` which is not an instance of `dict`). Those all miss the newer `BatchFeatures` (which is another `UserDict`), as was pointed out in #16983. In Accelerate we use the more general `Mapping` from `collections.abc` for those checks (which catches any kind of `dict`), so this PR suggests doing the same here.
04-29-2022 20:44:13
04-29-2022 20:44:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,013
closed
Fix code examples for doctests
This PR fixes some code examples to pass the doctests for the pipeline and `AutoClass` tutorials. I was unable to pass the audio code examples on my local machine because soundfile is not supported on M1 yet. I was able to run and reproduce the code snippets in Colab though so I think they should also pass on the CI.
04-29-2022 20:03:56
04-29-2022 20:03:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the work. Other than the `>>>` things, there are 2 failures when I ran it. For audio pipeline: ``` Expected: [{'label': 'calm', 'score': 0.1315}, {'label': 'neutral', 'score': 0.1307}, {'label': 'sad', 'score': 0.1274}, {'label': 'fearful', 'score': 0.1261}, {'label': 'happy', 'score': 0.1242}] Got: [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}] ``` (this is just a format issue I think) For vision pipeline: ``` Expected: [{'score': 0.4403, 'label': 'lynx, catamount'}, {'score': 0.0343, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0321, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0235, 'label': 'Egyptian cat'}, {'score': 0.023, 'label': 'tiger cat'}] Got: [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}] ``` ~~(This might be due to some random ops. I remembered I have similar situations before. I can take a look too.)~~ I get deterministic results, which is on Ubuntu 20.04. It's not very clear why the result is different than the previous one in the doc. I also get the same results on my local Windows machine. Maybe we could just update the values, cc @sgugger? <|||||>I don't know why you ask me @ydshieh this is not my PR ;-) <|||||>> I don't know why you ask me @ydshieh this is not my PR ;-) I know. Just to make sure you are also fine with my suggestion about `just update the values`. But I guess I should be more confident 😄
transformers
17,012
closed
Add a check on config classes docstring checkpoints
# What does this PR do? A follow-up for #16900: add a test to make sure all config classes have at least one valid checkpoint (unless explicitly specified to ignore). By `valid`, it only means the format is valid, i.e. of the form `[XXX](https://huggingface.co/XXX)` with `XXX` being any string. A more strict verification could be implemented by trying to load the config. But maybe it is a bit too much? Also fix 2 more config classes without valid checkpoint.
04-29-2022 18:09:02
04-29-2022 18:09:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,011
closed
Revert "Updating variable names. (#16445)"
This reverts commit 4f3a14e3c235c8b6b8cd2f5bc448a0cffacddf61. # What does this PR do? Broke `main` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-29-2022 16:03:36
04-29-2022 16:03:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,010
closed
[Data2Vec] Incompatibility with the original implementation
Hello dear HuggingFace team! According to the original paper, data2vec is not an actual model but more of a self-distilling training strategy. It takes an encoder model as backbone (RoBERTa for text, BEiT for vision, wav2vec for audio as mentioned in the paper) and pre-trains the encoder (student) to predict representations extracted from the EMA instance of the encoder (teacher), meaning the encoder can be any Transformer-based encoder model. After pretraining, in order to finetune or get predictions, the encoder itself is what matters and data2vec is of no use! (as seen [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec#finetuning-data2vec-text-on-glue)) I reviewed data2vec implementation in HF transformers and noticed that you decided to use static encoders (BERT for text, BEiT for vision and wav2vec2 for audio) so for example, using Data2VecVisionModel for any task would be the same as using BEiTModel. Also I noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above and there are some minor differences. The reason I'm wondering this, is because I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities. So my question is, what was the purpose behind all this? and did you train all those models or copied the weights from the original checkpoints in fairseq? Regards, Aryan
04-29-2022 14:39:29
04-29-2022 14:39:29
cc @patrickvonplaten @NielsRogge <|||||>> Also I noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above and there are some minor differences. The reason I'm wondering this, is because I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities. Can you elaborate on this? We converted the weights from the original repo, so they should be equivalent to the original implementation.<|||||>Hello @NielsRogge, sorry for the delayed response. Seems like I made a mistake regarding mismatch between architectures! Perhaps I loaded incorrect models using AutoModel. Today I reviewed all three models thoroughly and found no mismatch. But how about my first question? What was your intent behind reimplementing 3 models for data2vec while they're exactly the same as RoBERTa, BEiT and Wav2Vec2 which are already present in the transformers package? Thanks, Aryan <|||||>Regarding the fact that some minor differences exist in model architectures, what I attempted to do is that I tried to load weights directly from data2vec checkpoints to existing encoder models as below: 1. Loaded state dict from `facebook/data2vec-text-base` checkpoint into `roberta-base` and all keys matched successfully. 2. Loaded state dict from `facebook/data2vec-vision-base` checkpoint into `microsoft/beit-base-patch16-224` and got IncompatibleKeys warning: ` _IncompatibleKeys(missing_keys=['encoder.relative_position_bias.relative_position_bias_table', 'encoder.relative_position_bias.relative_position_index', 'layernorm.weight', 'layernorm.bias'], unexpected_keys=['pooler.layernorm.weight', 'pooler.layernorm.bias', 'encoder.layer.0.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.0.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.1.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.1.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.2.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.2.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.3.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.3.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.4.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.4.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.5.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.5.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.6.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.6.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.7.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.7.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.8.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.8.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.9.attention.attention.relative_position_bias.relative_position_bias_table', 
'encoder.layer.9.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.10.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.10.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.11.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.11.attention.attention.relative_position_bias.relative_position_index']) ` 3. Loaded state dict from `facebook/data2vec-audio-base` checkpoint into `facebook/wav2vec2-base` and got IncompatibleKeys warning: ` _IncompatibleKeys(missing_keys=['encoder.pos_conv_embed.conv.bias', 'encoder.pos_conv_embed.conv.weight_g', 'encoder.pos_conv_embed.conv.weight_v'], unexpected_keys=['feature_extractor.conv_layers.1.layer_norm.weight', 'feature_extractor.conv_layers.1.layer_norm.bias', 'feature_extractor.conv_layers.2.layer_norm.weight', 'feature_extractor.conv_layers.2.layer_norm.bias', 'feature_extractor.conv_layers.3.layer_norm.weight', 'feature_extractor.conv_layers.3.layer_norm.bias', 'feature_extractor.conv_layers.4.layer_norm.weight', 'feature_extractor.conv_layers.4.layer_norm.bias', 'feature_extractor.conv_layers.5.layer_norm.weight', 'feature_extractor.conv_layers.5.layer_norm.bias', 'feature_extractor.conv_layers.6.layer_norm.weight', 'feature_extractor.conv_layers.6.layer_norm.bias', 'encoder.pos_conv_embed.layers.0.conv.weight', 'encoder.pos_conv_embed.layers.0.conv.bias', 'encoder.pos_conv_embed.layers.1.conv.weight', 'encoder.pos_conv_embed.layers.1.conv.bias', 'encoder.pos_conv_embed.layers.2.conv.weight', 'encoder.pos_conv_embed.layers.2.conv.bias', 'encoder.pos_conv_embed.layers.3.conv.weight', 'encoder.pos_conv_embed.layers.3.conv.bias', 'encoder.pos_conv_embed.layers.4.conv.weight', 'encoder.pos_conv_embed.layers.4.conv.bias']) ` @NielsRogge <|||||>For BEiT, the problem was that there are some differences in the config; In order to load weights with no errors these values must be set in config: ```python ... beit_config = BeitConfig(use_relative_position_bias=False, use_mean_pooling=False, use_shared_relative_position_bias=True) ``` So in terms of architecutre, `transformers.models.BEiTModel` and `transformers.models.Data2VecVisionModel` are the same, but for `Wav2Vec2Model `vs `Data2VecAudioModel` it's not the same case! they're actually different in terms of design so I'd have to use another technique to transfer weights from `Data2VecAudio` to `Wav2Vec2`. I know that the reason is that the same case exists in `fairseq` too. There are some design differences between data2vec-audio and wav2vec2, so in order to transfer weights from there you had to make those changes to the `Data2VecAudioModel` codes.<|||||>> But how about my first question? What was your intent behind reimplementing 3 models for data2vec while they're exactly the same as RoBERTa, BEiT and Wav2Vec2 which are already present in the transformers package? We're planning to add `Data2VecAudioForPretraining` etc, which is why the implementations were duplicated. <|||||>Cool! looking forward to that. Thanks for putting your time replying. I'm closing this issue.
transformers
17,009
closed
[Data2Vec] Incompatibility with the original implementation
Hello dear HuggingFace team! According to the original paper, data2vec is not an actual model but more of a self-distilling training strategy. It takes an encoder model as backbone (RoBERTa for text, BEiT for vision, wav2vec for audio as mentioned in the paper) and pre-trains the encoder (student) to predict representations extracted from the EMA instance of the encoder (teacher), meaning the encoder can be any Transformer-based encoder model. After pretraining, in order to finetune or get predictions, the encoder itself is what matters and data2vec is of no use! (as seen [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec#finetuning-data2vec-text-on-glue)) I reviewed data2vec implementation in HF transformers and noticed that you decided to use static encoders (BERT for text, BEiT for vision and wav2vec2 for audio) so for example, using Data2VecVisionModel for any task would be the same as using BEiTModel. Also I noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above and there are some minor differences. The reason I'm wondering this, is because I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities. So my question is, what was the purpose behind all this? and did you train all those models or copied the weights from the original checkpoints in fairseq? Regards, Aryan
04-29-2022 14:39:25
04-29-2022 14:39:25
transformers
17,008
closed
Add Data2Vec for Vision in TF
This PR adds the data2vec [1] model for vision in TensorFlow. **Todo**: ~* Fix cross-loading.~ ~* Add integration test.~ ~* Add remaining tests.~ ~* Rest of the files remaining for the PR.~ ~* TF weight uploading to Hub (to be done by someone from the 🤗 team)~ ## Notes * This PR does not add `...ForSegmentation`. This can be done in a separate PR I think. * Locally, I ran the tests using: `RUN_SLOW=1 python -m pytest tests/data2vec/test_modeling_tf_data2vec_vision.py`. ## References [1] data2vec: https://arxiv.org/abs/2202.03555 @sgugger @Rocketknight1 @ydshieh
04-29-2022 12:04:08
04-29-2022 12:04:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>I used these steps for styling: https://github.com/huggingface/transformers/pull/16255#discussion_r830432539. On my end, when I am running `make style` I get the following: ``` ... doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source Overwriting content of src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py. Overwriting content of src/transformers/models/luke/modeling_luke.py. Overwriting content of src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py. Overwriting content of src/transformers/models/tapas/modeling_tapas.py. Overwriting content of src/transformers/models/tapas/modeling_tf_tapas.py. Overwriting content of src/transformers/models/data2vec/modeling_tf_data2vec_vision.py. Overwriting content of src/transformers/models/t5/modeling_flax_t5.py. Overwriting content of src/transformers/models/t5/modeling_t5.py. Overwriting content of src/transformers/models/t5/modeling_tf_t5.py. Overwriting content of src/transformers/models/rag/modeling_rag.py. Overwriting content of src/transformers/models/rag/retrieval_rag.py. Overwriting content of src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py. Overwriting content of src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py. Overwriting content of src/transformers/models/encoder_decoder/modeling_encoder_decoder.py. Overwriting content of src/transformers/models/xlm/modeling_xlm.py. Overwriting content of src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py. Overwriting content of src/transformers/models/imagegpt/modeling_imagegpt.py. Overwriting content of src/transformers/models/longformer/modeling_longformer.py. Overwriting content of src/transformers/models/xlnet/modeling_xlnet.py. Overwriting content of src/transformers/models/xlnet/modeling_tf_xlnet.py. Overwriting content of src/transformers/models/gpt2/modeling_tf_gpt2.py. Overwriting content of src/transformers/models/prophetnet/modeling_prophetnet.py. Overwriting content of src/transformers/models/realm/modeling_realm.py. Overwriting content of src/transformers/models/openai/modeling_tf_openai.py. Overwriting content of src/transformers/models/openai/modeling_openai.py. Overwriting content of docs/source/en/model_doc/luke.mdx. Overwriting content of docs/source/en/model_doc/bert-generation.mdx. Cleaned 27 files! ``` The [CI console](https://app.circleci.com/pipelines/github/huggingface/transformers/38997/workflows/f017efba-6409-4669-835d-e463043d3ea0/jobs/436193) is also suggestive of this change. **Should I add these cleaned files to the PR?** <|||||>Make sure you update `hf-doc-builder` to its latest version with `pip install hf-doc-builder -U`. We had a new release last week to fix some bugs in the example styling in our docs :-)<|||||>> Make sure you update `hf-doc-builder` to its latest version with `pip install hf-doc-builder -U`. We had a new release last week to fix some bugs in the example styling in our docs :-) @sgugger I see that `hf-doc-builder` is already up to date on my end (`Version: 0.3.0`).<|||||>You'll probably need to rebase your PR on master to get the changes in the setup for the quality check to pass (otherwise the CI uses the cached installed libraries).<|||||>> You'll probably need to rebase your PR on master to get the changes in the setup for the quality check to pass (otherwise the CI uses the cached installed libraries). Thanks, @sgugger! 
I first rebased my main with the upstream and then merged the main into the PR branch. And then I force-pushed. Let's see. <|||||>Hi, @sayakpaul - You can ignore `Model templates runner / run_tests_templates (pull_request)`. (You can even cancel that workflow run) - I have just merged a (big) PR that moved model test folders, like `tests/bert` to `tests/models/bert`. When you have time, could you - pull the changes (from upstream main) to your main - **rebase** your working branch on the main (better to avoid using `merge` in this case, I believe) - move your new test file `tests/data2vec/test_modeling_tf_data2vec_vision.py` from `tests/data2vec` to `tests/models/data2vec` - You might need to fix a few lines of `import`. - For example, `from ..test_configuration_common import ConfigTester` --> `from ...test_configuration_common import ConfigTester` please? 🙏 Thank you! <|||||>@ydshieh after rebasing, won't I need to merge the main into my PR branch so that it has the full effect?<|||||>> @ydshieh after rebasing, won't I need to merge the main into my PR branch so that it has the full effect? In order to incorporate the changes in main into your PR branch, you can either use `merge` or `rebase`. I am in favor of using `rebase` as it might be cleaner in some cases (won't introduce a lot of file changes). Once you have latest changes from upstream main in your local main, you can **checkout to your PR branch**, and do something like ``` git rebase main ``` (sometimes there might be conflicts to fix, but I think there won't be conflict in this case) Then you will have to force push.<|||||>@ydshieh oops looks like I have made things worse instead of making them work. I am not sure how I can revert to a mergeable state now. Any suggestion?<|||||>> @ydshieh oops looks like I have made things worse instead of making them work. I am not sure how I can revert to a mergeable state now. Any suggestion? Let me give it a try - I am definitely NOT a Git Pro 😢 (No guarantee though - hope 🤞 ) Could you let me know what steps you have done, please?<|||||>I just followed your suggestions: * Rebased my main with the upstream main. * Checked out to my PR branch and ran `git rebase main`. * Made the necessary changes you suggested regarding moving the test file. I think you mistakenly made a push to my PR branch which is what may have caused the unnecessary changes to reflect in this PR. ![image](https://user-images.githubusercontent.com/22957388/166462882-8e80d89e-dda0-4a0a-9e4d-e3f4357d5613.png) I am happy to work on the necessary steps per your suggestions too. <|||||>> I just followed your suggestions: > > * Rebased my main with the upstream main. > * Checked out to my PR branch and ran `git rebase main`. > * Made the necessary changes you suggested regarding moving the test file. > > I think you mistakenly made a push to my PR branch which is what may have caused the unnecessary changes to reflect in this PR. > > ![image](https://user-images.githubusercontent.com/22957388/166462882-8e80d89e-dda0-4a0a-9e4d-e3f4357d5613.png) > > I am happy to work on the necessary steps per your suggestions too. Hi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. (Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this.<|||||>> Hi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. 
(Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this. @ydshieh here's what I am thinking: * Revert to https://github.com/huggingface/transformers/pull/17008/commits/247a6c53dc6a64664ff58c862319116aff359d9c. * Follow [your suggestions](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) again. * Push the changes. <|||||>I am going to force push and see if it works 🙏 <|||||>Force push where?<|||||>To this PR, if you are OK with it. Please let me know, thanks.<|||||>> > Hi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. (Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this. > > @ydshieh here's what I am thinking: > > * Revert to [247a6c5](https://github.com/huggingface/transformers/commit/247a6c53dc6a64664ff58c862319116aff359d9c). > * Follow [your suggestions](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) again. > * Push the changes. Hi, I think we need to get to ``` [fix: tests due to removal of to_2tuple().](https://github.com/huggingface/transformers/pull/17008/commits/a0714e210c4f7da0f1321e50259ccf4fb40020ef) ``` and see what we can do to incorporate the main, that's what I am trying now.<|||||>Actually the commit I was referring to, it had the bits and pieces (like styling nits of the upstream files). <|||||>It may just work, let's see @ydshieh <|||||>If we revert to `[247a6c5](https://github.com/huggingface/transformers/commit/247a6c53dc6a64664ff58c862319116aff359d9c).`, we will still get a lot of changed file showing up in your PR. I am able to get something cleaner like <img width="399" alt="Screenshot 2022-05-03 162316" src="https://user-images.githubusercontent.com/2521628/166472233-75ccdf82-c653-4512-9131-9708a0015962.png"> by just revert to `ddd6b1c`, which I think it is the close to your PR with the changes from main in the clean way. Let me know if you want to try it by yourself, otherwise I can push to this PR.<|||||>Sounds good. Let me know the steps. <|||||>Here is what I would try (always a good idea to have a backup) ``` git checkout -b tf-data2vec-backup ``` Then ``` git checkout tf-data2vec git reset --hard ddd6b1c3 git push --force-with-lease ``` Once the commit history is clean on PR page, we can see if there is any style issues to fix. By that time, things should be easy.<|||||>@ydshieh fingers crossed 🤞<|||||>@sgugger, this is the step I need someone from the 🤗. team to perform. After that, I will remove `from_pt=True` from the code and will test. > TF weight uploading to Hub (to be done by someone from the 🤗 team)<|||||>Will look into this. It's just for the checkpoint `facebook/data2vec-vision-base-ft1k` right? Or is there another one?<|||||>> Will look into this. It's just for the checkpoint `facebook/data2vec-vision-base-ft1k` right? Or is there another one? There are four in the Facebook organization: [data2vec-vision](https://huggingface.co/models?sort=downloads&search=data2vec-vision)<|||||>@ydshieh forgot to say: THANK YOU VERY MUCH.<|||||>Currently, the follow checkpoint crashes (after the two suggestions I have made on the PR): ``` from transformers import TFAutoModel tf_model = TFAutoModel.from_pretrained("facebook/data2vec-vision-base", from_pt=True) ``` Same for "facebook/data2vec-vision-large", therefore I can't convert those checkpoints (and it looks like something needs fixing?) 
Here is the traceback: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_2749758/3199004601.py in <module> ----> 1 tf_model = TFAutoModel.from_pretrained("facebook/data2vec-vision-large", from_pt=True) ~/git/transformers/src/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 444 elif type(config) in cls._model_mapping.keys(): 445 model_class = _get_model_class(config, cls._model_mapping) --> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 447 raise ValueError( 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" ~/git/transformers/src/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1794 1795 # Load from a PyTorch checkpoint -> 1796 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True) 1797 1798 # we might need to extend the variable scope for composite models ~/git/transformers/src/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys) 122 logger.info(f"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters") 123 --> 124 return load_pytorch_weights_in_tf2_model( 125 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys 126 ) ~/git/transformers/src/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys) 153 154 if tf_inputs is not None: --> 155 tf_model(tf_inputs, training=False) # Make sure model is built 156 # Adapt state dict - TODO remove this and update the AWS weights files instead 157 # Convert old format to new format if needed from a PyTorch state_dict ~/anaconda3/lib/python3.9/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~/git/transformers/src/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs) 381 main_input = fn_args_and_kwargs.pop(main_input_name, None) 382 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs) --> 383 return func(self, **unpacked_inputs) 384 385 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This ~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict, training) 893 ) -> Union[tuple, TFData2VecVisionModelOutputWithPooling]: 894 --> 895 outputs = self.data2vec_vision( 896 pixel_values=pixel_values, 897 bool_masked_pos=bool_masked_pos, ~/git/transformers/src/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs) 381 main_input = fn_args_and_kwargs.pop(main_input_name, None) 382 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs) --> 383 return func(self, **unpacked_inputs) 384 385 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. 
This ~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict, training) 712 embedding_output = self.embeddings(pixel_values, bool_masked_pos, training=training) 713 --> 714 encoder_outputs = self.encoder( 715 embedding_output, 716 head_mask=head_mask, ~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, hidden_states, head_mask, output_attentions, output_hidden_states, return_dict) 625 layer_head_mask = head_mask[i] if head_mask is not None else None 626 --> 627 relative_position_bias = self.relative_position_bias() if self.relative_position_bias is not None else None 628 layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions, relative_position_bias) 629 ValueError: Exception encountered when calling layer "encoder" (type TFData2VecVisionEncoder). The first argument to `Layer.call` must always be passed. Call arguments received: • hidden_states=tf.Tensor(shape=(3, 197, 1024), dtype=float32) • head_mask=['None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None'] • output_attentions=False • output_hidden_states=False • return_dict=True ``` I have converted `facebook/data2vec-vision-base-ft1k` and am doing `facebook/data2vec-vision-large-ft1k` now.<|||||>@sgugger thanks for providing the update. Let me check from my end once. <|||||>@sgugger should be all good now. I have verified from my end too: ![Screenshot 2022-05-03 at 8 57 18 PM](https://user-images.githubusercontent.com/22957388/166485490-9e115e28-ed34-40ef-9248-c1dc4348e031.png) ![Screenshot 2022-05-03 at 9 01 10 PM](https://user-images.githubusercontent.com/22957388/166485506-3f205bb3-9eaa-4c2a-88ab-96d7846b2ee0.png) <|||||>Can confirm it works. TF weights added for all 4 facebook Data2Vec Image models.<|||||>@sgugger thanks! I have also run the tests on my end locally, they're passing. Over to you. <|||||>Repinging @Rocketknight1 to have another set of eyes on this :-)<|||||>Thanks! Looks like they are all green now. <|||||>Thanks again for your contribution!
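For anyone following along, a minimal sketch of the cross-loading step described above (the repository id and output path are illustrative; pushing the resulting TF weights to the `facebook` repos requires write access, which is why it was done by the 🤗 team):

```python
from transformers import TFAutoModel

# Build the TF architecture and cross-load the PyTorch weights from the Hub checkpoint.
tf_model = TFAutoModel.from_pretrained("facebook/data2vec-vision-base-ft1k", from_pt=True)

# Save the resulting TF weights (tf_model.h5) locally so they can be uploaded to the Hub repo.
tf_model.save_pretrained("./data2vec-vision-base-ft1k-tf")
```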
transformers
17,007
closed
use scale=1.0 in floats_tensor called in speech model testers
# What does this PR do? Fix the failure of `Speech2TextModelTest.test_pt_tf_model_equivalence`. This is caused by https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/tests/speech_to_text/test_modeling_speech_to_text.py#L134-L136 where the `input_features` get a large magnitude of `1e2` (from `self.vocab_size=99`). (probably this happens because we just copied the `input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)` from NLP models?) I changed it to `scale=1.0`, but need @patrickvonplaten's expertise **to make sure there was no particular reason to use `self.vocab_size`.** ### Details Current speech model testers have ``` def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.seq_length], self.vocab_size) ``` The ` self.vocab_size` argument is the `scale`, so the generated dummy `input_values` has the magnitude of `self.vocab_size`. For `Speech2TextModelTester`, we have `vocab_size=99`. Furthermore, `Speech2TextEncoder` has https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L705 and from the tester's `hidden_size=16,` we get `embed_scale=4`. The `input_features` goes through the conv layer(s) and being scaled: https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L767-L768 On `CPU` however, the conv layers of PT/TF gives diff. with a magnitude of `1e-7` for input values with 1s. So with the above 2 scalings, this error becomes `4e-5`, and the PT/TF equiv. test fails.
04-29-2022 10:28:06
04-29-2022 10:28:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing all the tests!
transformers
17,006
closed
Transfomers Pipline: Batching does not work for Sentence-Pair Text Classification
### System Info ```shell - `transformers` version: 4.6.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Using GPU in script?: yes / no (Both) - Using distributed or parallel set-up in script?: No ``` ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run: ```python from transformers import pipeline model = pipeline( task="text-classification", model="roberta-large-mnli", # does also happen for our own fine-tunes roberta models device=0 # does also happen on CPU ) n_samples = 10000 sample = ['The earth is not flat.', 'Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...'] data = [sample]* n_samples model([sample]) # works model(data, batch_size=1) # results in the follwing error ``` Output ``` Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-2-62b2d5a9a338>](https://localhost:8080/#) in <module>() 6 7 model([sample]) ----> 8 model(data, batch_size=1) 14 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py](https://localhost:8080/#) in forward(self, hidden_states) 347 def forward(self, hidden_states): 348 hidden_states = self.dense(hidden_states) --> 349 hidden_states = self.intermediate_act_fn(hidden_states) 350 return hidden_states 351 RuntimeError: CUDA out of memory. Tried to allocate 5.34 GiB (GPU 0; 14.76 GiB total capacity; 9.35 GiB already allocated; 4.00 GiB free; 9.37 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` ### Expected behavior ```shell The pipeline should run even with a very large list of inputs if the batch size is low enough. ```
04-29-2022 08:19:46
04-29-2022 08:19:46
I am on version `4.18.0` and the following code works with and without GPU ``` from transformers import pipeline model = pipeline( task="text-classification", model="roberta-large-mnli", # does also happen for our own fine-tunes roberta models device=0 # does also happen on CPU ) n_samples = 1000 sample = ['The earth is not flat.', 'Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...'] data = [sample]* n_samples model([sample], padding=True) model(data, batch_size=2, padding=True) ```<|||||>Hi @maximilianreimer, As @sijunhe said, can you try upgrading your `transformers` version just because a lot has been done to improve the batching in more recent version. For everyone here also another nice to have is to change the format from `list` to a `generator` which will iterate over results without having to maintain the list of all results (it also allows you to store results as they come in, allowing you to recover if sample number 10_014 fails for instance instead of having to rerun the whole thing). ```python from transformers import pipeline model = pipeline( task="text-classification", model="roberta-large-mnli", # does also happen for our own fine-tunes roberta models device=0, # does also happen on CPU ) n_samples = 1000 sample = [ "The earth is not flat.", "Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...", ] def data(): for i in range(n_samples): yield sample out = model([sample], padding=True) for out in model(data(), batch_size=2, padding=True): print(out) ``` Just a nice to have but should definitely help when processing large amounts of data. <|||||>Thanks for the helpful comments. Updating seems to fix the issue for me as well!<|||||>Closing this then.
transformers
17,005
closed
Added option to modify config parameter used by Tesseract in LayoutLMV2/LayoutXLM Processor
# What does this PR do? Giving user option to set config parameter used by Tesseract when performing feature extraction. Eg. to change psm levels while performing transcription by passing in '--psm 10' to config parameter while invoking image_to_data It is shown that changing the psm values greatly influences the end result of LayoutLMV2/XLM, and the specific psm value is different depending on the document formatting. Refer : [PSM](https://github.com/tesseract-ocr/tesseract/issues/434) ```python pytesseract.image_to_data(image, lang=lang, output_type="dict", config="--psm 10") ``` Users can now set the tesseract config parameter during Processor initialization, like so: ```python processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", ocr_lang="eng", tesseract_config="--psm 5") ``` ## Before submitting - [❌] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [✔️] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [❌] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [✔️] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [❌] Did you write any new necessary tests? Feel free to modify as needed. Thanks @NielsRogge @LysandreJik
04-29-2022 07:33:15
04-29-2022 07:33:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, Sorry for the late reply. I'll review now.<|||||>LayoutLMv2FeatureExtractor constructor must be modified to accept tesseract_config instead of tess_config for this change. Hang on, I'll work on it.<|||||>Tried to rebase and merge with upstream but it is now changing too many files. I've created a fresh new PR here https://github.com/huggingface/transformers/pull/17733
transformers
17,004
closed
Add translating guide
# What does this PR do? Add a translation guide so users have all the information they need to (1) contribute to a language that's already being translated, or (2) start their own issue for translating into a new language. # Next step Create a Translation Template for new issues (for example, [this template for Portuguese](https://github.com/huggingface/transformers/issues/16824) with all the docs that should be translated). I can do this. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
04-29-2022 04:41:37
04-29-2022 04:41:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the comments @sgugger! I moved the `_toctree.yml` tip to a part where it would be more relevant. Please let me know if you would prefer it in another part 🤗<|||||>LGTM!
transformers
17,003
closed
BertEmbeddings import missing for Torch in __init__ file
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `from transformers import BertEmbeddings` raises ``` ----> 1 from transformers import BertEmbeddings ImportError: cannot import name 'BertEmbeddings' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py) ``` ### Expected behavior [BertEmbeddings](https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/bert/modeling_bert.py#L182) is a class in the BERT modeling file that builds the BERT input embeddings, but it is not imported in the [`__init__`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/__init__.py) file, so the user cannot import it from the top-level package, while the TF and Flax versions are imported there. It should be imported as well.
04-29-2022 03:41:15
04-29-2022 03:41:15
Feel free to open a PR to fix this :)<|||||>I think in that case it's the TF and Jax versions which shouldn't add the embeddings to the main init. Those are not meant to be accessed from the main init, but to be accessed from their respective modules: ```py >>> from transformers.models.bert.modeling_bert import BertEmbeddings ``` We took that decision so that the internals may be modified without breaking the public root API. These have very rarely been updated, however. Removing the Flax and TF imports from the init isn't an option either, unfortunately, as it would result in a breaking change for users that do use it.<|||||>I agree with you. Can I close this issue then?<|||||>Yes, thanks for opening it in the first place!
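For completeness, a small sketch of using the internal class once imported from its module (the config values and toy token ids are just illustrative defaults):

```python
import torch
from transformers import BertConfig
from transformers.models.bert.modeling_bert import BertEmbeddings

config = BertConfig()
embeddings = BertEmbeddings(config)

# Map a toy sequence of token ids to the summed word/position/token-type embeddings.
input_ids = torch.tensor([[101, 7592, 2088, 102]])
hidden_states = embeddings(input_ids=input_ids)
print(hidden_states.shape)  # (1, 4, config.hidden_size)
```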
transformers
17,002
closed
HuggingFace/BigBird RuntimeError: Internal: src/sentencepiece_processor.cc
### System Info ```shell I'm able run the HuggingFace/BigBird code for a binary classification on a proprietary essay dataset in Google Colab with no errors. I wanted to access more powerful GPU's and converted the code from .ipynb to .py to run on Marquette's supercomputer (called Raj). Raj does not allow me to access the roberta model remotely so I changed the first line of code below to the second for local access (and also copied the bigbird-roberta-base files to Raj): tokenizer = BigBirdTokenizer.from_pretrained('google/bigbird-roberta-base') tokenizer = BigBirdTokenizer.from_pretrained('<my user path on Raj>/bigbird-roberta-base') However, this gives me the following error: RuntimeError: Internal: src/sentencepiece_processor.cc(890) [model_proto->ParseFromArray(serialized.data(), serialized.size())] I did confirm that sentencepiece 0.1.96 is installed and I'm using Python version 3.6.8. Any help or suggestions is appreciated! ``` ### Who can help? @ydshieh, @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from transformers import BigBirdTokenizer, BigBirdForSequenceClassification print('Loading tokenizer...') tokenizer = BigBirdTokenizer.from_pretrained(<my user path>/bigbird-roberta-base') # Tokenize all of the sentences and map the tokens to thier word IDs. input_ids = [] # Record the length of each sequence (in terms of BERT tokens). lengths = [] print('Tokenizing comments...') # For every sentence... for sen in train.data: # Report progress. if ((len(input_ids) % 1000) == 0): print(' Read {:,} comments.'.format(len(input_ids))) # `encode` will: # (1) Tokenize the sentence. # (2) Prepend the `[CLS]` token to the start. # (3) Append the `[SEP]` token to the end. # (4) Map tokens to their IDs. encoded_sent = tokenizer.encode( str(sen), # Sentence to encode. Added str due to error. add_special_tokens = True, # Add '[CLS]' and '[SEP]' ) # Add the encoded sentence to the list. input_ids.append(encoded_sent) # Record the non-truncated length. lengths.append(len(encoded_sent)) print('DONE.') print(' Min length: {:,} tokens'.format(min(lengths))) print(' Max length: {:,} tokens'.format(max(lengths))) print('Median length: {:,} tokens'.format(np.median(lengths))) ### Expected behavior ```shell The first print statement should generate: Tokenizing comments... Read 0 comments. DONE. 454 comments The second group of three print statements should generate: Min length: 90 tokens Max length: 995 tokens Median length: 826.5 tokens ```
04-29-2022 00:27:21
04-29-2022 00:27:21
I'm not too sure what the root cause of this is, but I've created a [google colab](https://colab.research.google.com/drive/1x12Bc6aDU9sLOCI99bGKXh9zUQzKDcaR?usp=sharing) reproducing the bug - it seems like the vocab file path is not being passed in properly when it's not in the VOCAB_FILE_NAME mapping - as well as a potential workaround. @jtfields do you think the workaround could work? Instantiate your BigBirdTokenizer outside of Raj on your local machine, then save the pretrained tokenizer into a directory and copy the files into a directory on Raj, then instantiate from that directory instead, but with an AutoTokenizer (cells 7-9).<|||||>Thank you for the fast response on this bug. I'm trying the workaround you provided but finding that Spacy is not very cooperative due to the length of my essays. I first had to develop a workaround for the max_length of 100,000 and now have a handle_filename_too_long error. Is there another option besides spacy which isn't so restrictive?<|||||>I was able to tokenize the essays in Google Colab and copy these files to Marquette's Raj supercomputer. However, this bug should stay open until a fix is available to run the tokenization files locally.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
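To make the workaround suggested earlier in this thread concrete, here is a minimal sketch (the directory paths are placeholders for your own locations):

```python
from transformers import AutoTokenizer, BigBirdTokenizer

# On a machine with internet access (e.g. Colab): download the tokenizer and save all its files.
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
tokenizer.save_pretrained("./bigbird-roberta-base-tokenizer")

# After copying that directory to the offline machine (Raj), load it from the local path.
tokenizer = AutoTokenizer.from_pretrained("/path/on/raj/bigbird-roberta-base-tokenizer")
```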
transformers
17,001
closed
Text Generation for decoder
### Feature request BERT and most Transformer models can be used as the decoder, with the cross-attention layer randomly initialized, if we set is_decoder to True. We could use these models as the decoder in an encoder-decoder framework where the encoder is our own model, and use the result for multi-modal text generation tasks. In my case, I am doing audio captioning and I want to use AutoModelForCausalLM as the decoder. The model can be trained properly now, by passing the outputs of our own encoder as encoder_hidden_states to the decoder. However, when doing inference such as greedy search and beam search, the encoder outputs can't be passed to the decoder if I want to use the generate function or greedy_search function. I think this could be improved, so that huggingface models can be used in more multi-modal text generation tasks when we want to use our own encoder. ### Motivation In this way, we can use huggingface models for more multi-modal text-generation tasks with the freedom to incorporate them with our own models. ### Your contribution In my case, I solved this problem by modifying the prepare_inputs_for_generation() function in BertLMHeadModel and adding the encoder output to the return dict as "encoder_hidden_states". Then I call model.greedy_search() and model.beam_search() for text generation.
04-28-2022 22:55:58
04-28-2022 22:55:58
@patrickvonplaten seems like the best person to answer your question!<|||||>Hey @XinhaoMei > However, when doing inference such as doing greedy search and beam search, the encoder outouts can't be passed to the decoder if I want use the generate function or greedy_search function. I think they can be passed to the decoder. Could you post a codesnippet that shows what doesn't work for your case? :-)<|||||>Hi @patrickvonplaten, thanks for your quick reply. Here are my code for the defination of the model: ` def __init__(self, config): super().__init__() self.encoder = set_encoder(config) self.tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') if config.hugging_face.pretrain: decoder_config = BertConfig(is_decoder=True, add_cross_attention=True) self.decoder = AutoModelForCausalLM.from_pretrained("bert-base-uncased", config=decoder_config) else: decoder_config = BertConfig(is_decoder=True, add_cross_attention=True, num_attention_heads=4, num_hidden_layers=2) self.decoder = AutoModelForCausalLM.from_config(decoder_config) self.pad_token = self.tokenizer.pad_token_id self.loss_func = nn.CrossEntropyLoss(ignore_index=self.pad_token) def generate_greedy(self, audio_src): audio_feats = self.encoder(audio_src) audio_feats = audio_feats.transpose(0, 1) input_ids = torch.zeros((audio_feats.shape[0], 1)).long().to(self.decoder.device) input_ids[:, 0] = 101 outputs = self.decoder.generate(input_ids=input_ids, encoder_hidden_states=audio_feats, do_sample=False, max_length=30) output_captions = self.tokenizer.batch_decode(outputs, skip_special_tokens=True) return output_captions def generate_beam(self, audio_src, beam_size=3): audio_feats = self.encoder(audio_src) audio_feats = audio_feats.transpose(0, 1) input_ids = torch.zeros((audio_feats.shape[0], 1)).long().to(self.decoder.device) input_ids[:, 0] = 101 outputs = self.decoder.generate(input_ids=input_ids, encoder_hidden_states=audio_feats, num_beams=beam_size, do_sample=False, max_length=30) output_captions = self.tokenizer.batch_decode(outputs, skip_special_tokens=True) return output_captions def forward(self, audio_src, caption): tokenized = self.tokenizer(caption, add_special_tokens=True, padding=True, return_tensors='pt') input_ids = tokenized['input_ids'].to(self.decoder.device) attention_mask = tokenized['attention_mask'].to(self.decoder.device) audio_feats = self.encoder(audio_src) audio_feats = audio_feats.transpose(0, 1) outputs = self.decoder(input_ids=input_ids, attention_mask=attention_mask, encoder_hidden_states=audio_feats ) logits = outputs.logits[:, :-1, :] labels = input_ids[:, 1:] loss = self.loss_func(logits.reshape(-1, self.decoder.config.vocab_size), labels.reshape(-1)) return loss` The encoder is my own CNN. The training is defined in forward function and it can be trained properly by passing the encoder outputs as encoder_hidden_states to the decoder. But in another two functions for text generation using the generate() function, I found the encoder outputs cannot be passed into the decoder usiing generate(). It generates the same sentences for all different encoder outputs. Thanks for your time!<|||||>Hey @XinhaoMei, Sorry we sadly cannot help too much with custom code as this is outside of the scope of Transformers. Could you try to make use of the forum instead: https://discuss.huggingface.co/ ? :-)<|||||>> Hey @XinhaoMei, > > Sorry we sadly cannot help too much with custom code as this is outside of the scope of Transformers. Could you try to make use of the forum instead: https://discuss.huggingface.co/ ? 
:-) Thank you for your reply. In fact, I have solved it by modifying some code in the Transformers library. Anyway, thanks a lot!
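For readers landing here, a minimal, untested sketch of a subclassing variant of the workaround described above (the class name is hypothetical, and it assumes `generate()` forwards extra keyword arguments such as `encoder_hidden_states` through to `prepare_inputs_for_generation`):

```python
from transformers import BertLMHeadModel


class BertDecoderWithAudioContext(BertLMHeadModel):
    # Hypothetical subclass: forward the pre-computed encoder features to the
    # cross-attention layers during generation, instead of editing the library itself.
    def prepare_inputs_for_generation(
        self, input_ids, past=None, attention_mask=None, encoder_hidden_states=None, **kwargs
    ):
        inputs = super().prepare_inputs_for_generation(
            input_ids, past=past, attention_mask=attention_mask, **kwargs
        )
        inputs["encoder_hidden_states"] = encoder_hidden_states
        return inputs


# Usage sketch: `audio_feats` is the output of your own encoder, shape (batch, frames, hidden).
# decoder = BertDecoderWithAudioContext.from_pretrained(
#     "bert-base-uncased", is_decoder=True, add_cross_attention=True
# )
# outputs = decoder.generate(input_ids=start_ids, encoder_hidden_states=audio_feats, max_length=30)
```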
transformers
17,000
closed
Padding vs truncation logging mixup
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/tokenization_utils_base.py#L1470 Looks like this error should probably say truncation side instead of padding side.
04-28-2022 19:14:37
04-28-2022 19:14:37
Correct! Would you like to open a PR to patch it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,999
closed
Refactor all require decorators to use skipUnless when possible
# What does this PR do? I was refactoring the Accelerate tests today and I noticed we use `if .. else..` as a conditional for skipping tests based on imports. Unittest has `skipUnless` and `skipIf`, letting us simplify those decorators to be one line. E.g.: ```python if not _run_slow_tests: return unittest.skip("test is slow")(test_case) else: return test_case ``` Can be: ```python return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case) ``` (Adding you as a reviewer for this Sylvain, unsure who else should be added)
04-28-2022 18:36:22
04-28-2022 18:36:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you 🚀 for this PR, and also for pinging me so that I can learn!
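As an illustration of the pattern applied to one of the real decorators, here is a sketch of what the simplified form looks like (not the exact code in `testing_utils.py`):

```python
import unittest

from transformers.utils import is_torch_available


def require_torch(test_case):
    # Decorator marking a test that requires PyTorch; the test is skipped when PyTorch isn't installed.
    return unittest.skipUnless(is_torch_available(), "test requires PyTorch")(test_case)
```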
transformers
16,998
closed
Question on model_max_length (DeBERTa-V3)
### System Info ```shell - `transformers` version: 4.18.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.4.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A ``` ### Who can help? @LysandreJik @SaulLu ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm interested in finding out the max sequence length that a model can be run with. After some code browsing, my current understanding that this is a property stored in the tokenizer `model_max_length`. I wrote a simple script to load a tokenzier for a pretrained model and print the model max length. This is the important part: ``` # initialize the tokenizer to be able to print model_max_length tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) logger.info(f"Model max length {tokenizer.model_max_length}") ``` I used this to print max seq length for models such as BERT, RoBERTa, etc. All with expected results. For DeBERTa, I get confusing results. If I run my script with DeBERTA-v3 as follows: ``` python check_model_max_len.py --model_name microsoft/deberta-v3-large --output_dir ./tmp --cache_dir ./tmp/cache ``` I get `Model max length 1000000000000000019884624838656` If I understand correctly, this is a large integer used for models that can support "infinite" size lengths. If I run my script with `--model_name microsoft/deberta-v2-xlarge`, I get `Model max length 512` I don't understand if this is a bug or a feature :) My understanding is that the main difference between DeBERTa V2 and V3 is the use of ELECTRA style discriminator during MLM pretraining in V3. I don't understand why this difference would lead to a difference in supported max sequence lengths between the two models. I also don't understand why some properties are hardcoded in the python files, e.g., ``` PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { "microsoft/deberta-v2-xlarge": 512, "microsoft/deberta-v2-xxlarge": 512, "microsoft/deberta-v2-xlarge-mnli": 512, "microsoft/deberta-v2-xxlarge-mnli": 512, } ``` I would expect these to be in the config files for the corresponding models. ### Expected behavior ```shell I would expect the max supported lengths for DeBERTa-V2 and DeBERTa-V3 models to be the same. Unless, I'm missing something. Thanks for your help! ```
04-28-2022 18:29:57
04-28-2022 18:29:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It's likely an error! Do you want to open a discussion on the model repo directly? https://huggingface.co/microsoft/deberta-v3-base/discussions/new<|||||>i get the same result 1000000000000000019884624838656<|||||>I'm seeing the same for the 125m and 350m OPT tokenizers (haven't checked the larger ones): ```python >>> AutoTokenizer.from_pretrained("facebook/opt-350m") PreTrainedTokenizer(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)}) >>> AutoTokenizer.from_pretrained("facebook/opt-125m") PreTrainedTokenizer(name_or_path='facebook/opt-125m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)}) ``` Is this definitely a bug?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>deberta v3 uses relative position embeddings which means it isn't limited to the typical 512 token limit. As taken from [section A.5 in their paper](https://arxiv.org/pdf/2006.03654.pdf): > With relative position bias, we choose to truncate the maximum relative distance to k as in equation 3. Thus in each layer, each token can attend directly to at most (2k - 1) tokens and itself. By stacking Transformer layers, each token in the l-th layer can attend to at most (2k-1)*l tokens implicitly. Taking DeBERTa_large as an example, where k = 512, L = 24, in theory, the maximum sequence length that can be handled is 24,528. That being said, it will start to slow down a ton once the sequence length gets bigger than 512<|||||>Yes, I thought this might be the case, however, the same is true for deberta v2 if I remember correctly and the answer for that is different. What I was asking in the original post is why the the difference between v2 and v3. Thanks for clarifying part of the question/answer. <|||||>I meant to add to my last post: The max length of 1000000000000000019884624838656 is typically an error when the max length is not specified in the tokenizer config file. 
There was a discussion about it here: https://huggingface.co/google/muril-base-cased/discussions/1 And the solution was to modify the tokenizer config file: https://huggingface.co/google/muril-base-cased/discussions/2<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is still an issue with the config file and/or config file parser.<|||||>@bcdarwin What is the issue?
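Until the tokenizer config files are fixed upstream, a practical workaround is to set the limit yourself when loading the tokenizer (512 here is just the conventional default; as noted above, DeBERTa's relative-position attention can handle longer sequences):

```python
from transformers import AutoTokenizer

# Either pass the limit explicitly at load time...
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large", model_max_length=512)

# ...or patch the attribute afterwards if you hit the huge sentinel value.
if tokenizer.model_max_length > 100_000:
    tokenizer.model_max_length = 512
print(tokenizer.model_max_length)
```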
transformers
16,997
closed
Update README to latest release
# What does this PR do? The main README (and its variants) all have multiple links to released models that point to the main doc and not the stable doc. This is because we did the last two releases on branches different from the main one, so the README cleaned by our tools was not set on the main branch. This PR fixes that and adds instructions in our release guide.
04-28-2022 18:00:08
04-28-2022 18:00:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,996
closed
Fix savedir for by epoch in translation example
# What does this PR do? Fixes up the `no_trainer` translation example to properly save the `by_epoch` to the right directory (before it saved to step, causing a slow test failure)
04-28-2022 17:39:53
04-28-2022 17:39:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,995
closed
[FlaxBert] Add ForCausalLM
# What does this PR do? Adds cross-attention blocks to the following module classes: - FlaxBertModule - FlaxRobertaModule (in part through copying FlaxBertModule) - FlaxBigBirdModule (in part through copying FlaxBertModule) - FlaxElectraModule (in part through copying FlaxBertModule) Adds the following ForCausalLM model classes: - FlaxBertForCausalLM - FlaxRobertaForCausalLM (in part through copying FlaxBertForCausalLM) - FlaxBigBirdForCausalLM (in part through copying FlaxBertForCausalLM) - FlaxElectraForCausalLM (in part through copying FlaxBertForCausalLM) Adds the following model tests: - FlaxRobertaForCausalLM - FlaxBigBirdForCausalLM - FlaxElectraForCausalLM Note: FlaxBertForCausalLM is excluded due to the name mismatch with the PyTorch equivalent BertLMHeadModel. It is implicitly tested through the FlaxRobertaForCausalLM model tests, as well as in the following encoder-decoder model tests: - Bert-2-Bert (encoder-decoder) - Wav2Vec2-2-Bert (speech encoder-decoder)
04-28-2022 16:46:19
04-28-2022 16:46:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Looks good to me - @sanchit-gandhi could you check though which models don't pass with `1e-5` and ideally why? > > Overall `4e-2` is fine for me though cc @ydshieh what do you think? Keeping `1e-5` is much better, because so far I have always found some issue when I see something higher than `1e-5` (well, sometimes it took quite some time to figure out)
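To illustrate what the new cross-attention blocks and `...ForCausalLM` classes enable, here is a small sketch of composing a BERT-2-BERT model in Flax (the checkpoint names are just illustrative; the decoder side is loaded through `FlaxBertForCausalLM` under the hood):

```python
from transformers import AutoTokenizer, FlaxEncoderDecoderModel

model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

inputs = tokenizer("A short example sentence.", return_tensors="np")
outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=inputs["input_ids"],
)
print(outputs.logits.shape)  # (batch, decoder_seq_len, vocab_size)
```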
transformers
16,994
closed
[WIP] data2vec jax
# What does this PR do? This adds data2vec flax model. Work in progress, just an initial draft. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sanchit-gandhi Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-28-2022 16:12:04
04-28-2022 16:12:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @BirgerMoell, thanks for jumping on this so quickly! Looks like a solid start on getting the new Data2Vec2Audio feature extractor written in JAX. Feel free to ask me any questions, more than happy to lend a hand! :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @BirgerMoell, thanks again for jumping in on this one! Am happy to help with the JAX/Flax model port - see my previous comment for how to efficiently copy over the skeleton code from FlaxWav2Vec2! If busy, let's maybe close this one for now and re-open when there's time to look into it a bit more?
transformers
16,993
closed
Rename to reflect framework pattern AutoModelXxx -> TFAutoModelXxx
# What does this PR do? Fixes a small bug to make sure a TFAutoModel class keeps the TF naming pattern when being updated with `auto_class_update`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
04-28-2022 15:06:04
04-28-2022 15:06:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the fix, @amyeroberts 🚀 <|||||>@amyeroberts You can now merge the PR (`Squash and merge` button)
transformers
16,992
open
Undocumented distributed inference behaviour for `run_summarization.py`
### System Info ```shell Fails with error Traceback (most recent call last): File "/scratches/neuron/anaconda3/envs/T5DST-SGD/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 26, in <module> from .user import UserCommands File "/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/transformers/commands/user.py", line 20, in <module> from huggingface_hub.hf_api import HfFolder, create_repo, list_repos_objs, login, logout, whoami ImportError: cannot import name 'list_repos_objs' from 'huggingface_hub.hf_api' (/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/huggingface_hub/hf_api.py) However I am running `4.16.2` with python `3.8`. ``` ### Who can help? @sgugger @stevhliu @patil-suraj ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am working with a copy of the `run_summarization.py` (pytorch) example that the authors of this [paper](https://arxiv.org/pdf/2109.07506.pdf) modified to work for dialogue state tracking (implemented [here ](https://github.com/chiahsuan156/DST-as-Prompting) for reference) The `run_summarization.py` script can be launched with `torch.distributed.launch` and the `--do_predict` option. This shards the examples in the test set to various GPUs and therefore generation and task-oriented metrics is accelerated. The predictions are written to the `generated_predictions.txt` file in the output directory. To be able to compute dialogue-relevant task oriented metrics, one ought to run a postprocessing script that uses the `generated_predictions.txt`. Because the trainer erases all the columns that are not keys to the model `forward` method from the dataset, the metadata that informs us of what training examples the predictions are related to is lost. Therefore, we rely on the ordering of the `generated_predictions.txt` to match the order of the examples in the dataset. My question is: - Does `predictions` (`L675`) obey the order of the `dataset`? So if my dataset has 1m examples, will the 1m entries in the `predictions` list match the order of the dataset iterator? In my experience this depends on implementation* and the behaviour is not documented. *For example, in frameworks such as `ray` you have to explicitly enforce the order in which the results are returned and the predictions may be returned out of order - if a process finishes, it returns its results so it can be given more work by an external load balancer. ### Expected behavior ```shell Improved documentation about expected behaviour here. Happy to discuss where this should be added and contribute a small PR to clarify this important issue. ```
04-28-2022 14:31:38
04-28-2022 14:31:38
I'm not sure where the bug is. It sounds like you have a question, which should be asked on the [forums](https://discuss.huggingface.co/). `Trainer.predict` will return predictions in the same order as the underlying dataset, regardless of the setup you're using to get your predictions. I don't feel it requires documentation as it's the intended behavior of the method. There would be a warning if it was not the case.<|||||>Hi @sgugger, I apologise for raising this issue. I was expecting to find info about async behaviour in docs and did not. I just finished reading the code and I see that `L174` in `trainer_pt_utils.py` calls `torch.distributed.all_gather` with `async_op=False` so we preserve the order as you say. Do you think it would be worth expanding the docs? We could show how to run distributed inference for `run_summarization.py` or add one sentence in the part of the docs that tells us how to run distributed training with a short paragraph on how to do distributed inference with a note "the order of the underlying dataset will be preserved"? If you don't think this adds value, feel free to close the issue straight away. Thank you for your helpful answer.<|||||>If you feel the doc needs to be expanded, I'm happy to review a PR, I just told you why I thought it wasn't worth mentioning when I wrote the current doc of `Trainer.predict` ;-) But adding some lines on how to run distributed inference are more than welcome!<|||||>Ok, I'll add that to a TODO as this should be a quick one. Let's label this as WIP as I expect this would be a couple of weeks given my current workload.<|||||>Just an update - I did manage to successfully deploy training code written with the `Trainer` API on a `SLURM` cluster using `torchrun` ([here](https://pytorch.org/docs/stable/elastic/run.html?highlight=torchrun)). We can discuss where in the docs it would be best to show an example - I think it would help a lot of people. There are some posts on the forum I can update, to start with.
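Building on the answer above, a minimal sketch of mapping the predictions back to the test examples by position (the file names below are placeholders, not taken from the thread; only the order-preservation guarantee comes from the answer):

```python
from datasets import load_dataset

# Placeholder file names - adapt them to your own setup
test_ds = load_dataset("json", data_files={"test": "test.json"})["test"]

with open("output_dir/generated_predictions.txt") as f:
    predictions = [line.rstrip("\n") for line in f]

assert len(predictions) == len(test_ds)
# Trainer.predict preserves dataset order, so line i of the file belongs to example i
paired = list(zip(test_ds, predictions))
```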
transformers
16,991
closed
The current equivalent of transformers.models.bert.modeling_bert.gelu
### System Info ```shell - `transformers` version: 4.5.0 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I believe it's an issue relevant to version migration. The code I'm using contains ``` x = transformers.models.bert.modeling_bert.gelu(x) ``` which seems to be no longer usable. Similar problem was discussed in https://stackoverflow.com/questions/66133626 but with no good answer. ### Expected behavior Please let me know what is the current API that can be a replacement of `transformers.models.bert.modeling_bert.gelu`, or it's safe to directly use `x = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))` instead. Many thanks! PS: the link to the migration guide should be changed into https://huggingface.co/docs/transformers/migration. ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
04-28-2022 14:10:46
04-28-2022 14:10:46
All of those have been refactored in a `ACT2FN` dictionary which is used across the codebase. You can import it as such: ```py from transformers.activations import ACT2FN gelu_function = ACT2FN['gelu'] ``` Hope that helps!<|||||>Many thx. Closed : )
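As a quick sanity check of the replacement above (my own snippet, not from the thread), the callable retrieved from `ACT2FN` applies GELU element-wise and matches the erf-based formula quoted in the question:

```python
import torch
from transformers.activations import ACT2FN

gelu_function = ACT2FN["gelu"]
x = torch.randn(2, 4)
y = gelu_function(x)  # same shape as x, GELU applied element-wise
print(torch.allclose(y, x * 0.5 * (1.0 + torch.erf(x / 2 ** 0.5)), atol=1e-6))
```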
transformers
16,990
closed
[T5 Tokenizer] Model has no fixed position ids - there is no hardcode…
…d max length # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16986 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-28-2022 13:13:48
04-28-2022 13:13:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Oh, that's a serious change if a user forgot to set a `max_length`. I understand it fixes a bug, but still would like @LysandreJik 's take on it as well. Thanks for the PR in any case! Agree! We should at least put some :exclamation: mark in this PR stating that this change could lead to unexpected behavior OOM if `max_length` is not defined.<|||||>That is definitely a breaking change we want to avoid, IMO. This is likely to break user pipelines with OOM errors or a non consistent number of tokens generated. I'd advocate against this change, and would push to: - Document that while the limit is set to 512, T5 can handle longer lengths and encourage users to define their own max lengths - Document that this limit will be removed in v5 - Update the warning just for T5 (see below) <details> <summary>Updating the warning just for T5</summary> You can override this method, which is in `tokenization_utils_base.py`, in `tokenization_t5.py` and `tokenization_t5_fast.py` https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/tokenization_utils_base.py#L3379-L3397 I wouldn't recommend skipping the warning altogether as it still gives important information regarding why the text was eventually truncated or padded. But updating the message makes sense: ```diff def _eventual_warn_about_too_long_sequence(self, ids: List[int], max_length: Optional[int], verbose: bool): """ Depending on the input and internal state we might trigger a warning about a sequence that is too long for its corresponding model Args: ids (`List[str]`): The ids produced by the tokenization max_length (`int`, *optional*): The max_length desired (does not trigger a warning if it is set) verbose (`bool`): Whether or not to print more information and warnings. """ if max_length is None and len(ids) > self.model_max_length and verbose: if not self.deprecation_warnings.get("sequence-length-is-longer-than-the-specified-maximum", False): logger.warning( - "Token indices sequence length is longer than the specified maximum sequence length " - f"for this model ({len(ids)} > {self.model_max_length}). Running this sequence through the model " - "will result in indexing errors" + "The T5 model has no maximum length, but a maximum length is still set for backwards compatibility " + "purposes. To take advantage of the full capabilities of the model, we recommend setting a " + "max_length manually." ) self.deprecation_warnings["sequence-length-is-longer-than-the-specified-maximum"] = True ``` </details><|||||>Okey took some time to think about it - it's really not easy. I agree @LysandreJik that the previous change (while correct) is too strong as it might break quite some pipelines. To begin with, note that `model_max_length` or `max_length` is only relevant if `truncation=True` is set. So for all other cases this bug is not relevant. Now the problem is that by default T5 should **not** have a set maximum length. However it is completely reasonable for people to set their own maximum length. To me this means the following: If a user instantiates T5 Tokenizer with `model_max_length` or passes `max_length` when encoding/padding, then these values should **always** be the true max length values and in this case the (incorrectly) hard-coded max length values can be discarded. 
Only if a user does not pass `max_length` when encoding/padding and does not define `model_max_length` at init, then we should fall back to the (incorrect) hard-coded max length values until v5. In this PR there two things are changed the 2.) can be considered a small breaking change, but it's really a bug correction for me. 1. If T5 Tokenizer is instantiated without a custom `model_max_length` and one of the identifiers for which `model_max_length` is hardcoded is used, the following warning appears: ``` This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`. - Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding. - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding. - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value. ``` Previously no warning appeared. Note that this warning appears every time at init. However it can be disabled as described above and it's also good to warn the user about upcoming changes this way. 2. If T5 Tokenizer is instantiated with a `model_max_length`, this `model_max_length` always counts even if it's longer than the hardcoded ones. This means the following snippet: ```python #!/usr/bin/env python3 from transformers import T5TokenizerFast tok = T5TokenizerFast.from_pretrained("t5-base", model_max_length=600) out = tok(100 * "hello there is a", padding="longest", truncation=True).input_ids print(len(out)) ``` does **not** throw a warning (since the user defines `model_max_length`) and print a length of 600 (not 512). <- this behavior is different from how it was before. My rational on changing this is the following: - T5's hardcoded model max lengths are wrong, I'm fine with using those if no `model_max_length` is defined or no `max_length` is passed - **But**, if a user already passes a `model_max_length` <- then this should be the only source of truth. E.g. In the example above 600 should be tha max length and not 512. **To be crystal clear 2.) changes the behavior - e.g. run the code snippet before/after the PR, but it's really a bug correction here IMO** <|||||>Failure is unrelated
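A minimal sketch illustrating the precedence described above (my own example, not taken from the PR): an explicit `max_length` passed when encoding is used directly and is not capped at the hardcoded 512.

```python
from transformers import T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-base")
out = tok(100 * "hello there is a ", padding="max_length", truncation=True, max_length=600)
print(len(out.input_ids))  # 600: the explicit max_length is used, not the hardcoded 512
```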
transformers
16,989
closed
set eos_token_id to None to generate until max length
# What does this PR do? Update `check_encoder_decoder_model_generate` to generate until max length. Otherwise, this check ```python self.assertEqual(generated_output.shape, (input_ids.shape[0],) + (decoder_config.max_length,)) ``` might fail. ### Remark In `generate()`, we have https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/generation_utils.py#L1129-L1133 So I think the (original) logic about `Generate until max length` in `check_encoder_decoder_model_generate` should be updated too. The case won't really happen in the tests, but in general, `config` might still have `eos_token_id`. I also leave the corresponding flax tests untouched for now. This PR will fix ``` FAILED tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::Swin2BartModelTest::test_encoder_decoder_model_generate tests/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py:280: in check_encoder_decoder_model_generate self.assertEqual(generated_output.shape, (inputs.shape[0],) + (decoder_config.max_length,)) AssertionError: torch.Size([13, 2]) != (13, 20) ```
04-28-2022 12:42:07
04-28-2022 12:42:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,988
closed
Add Tensorflow Swin model
# What does this PR do? Adds a tensorflow implementation of the Swin architecture and associated tests. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Will tag specific members / contributors when moved from drafts.
04-28-2022 11:06:47
04-28-2022 11:06:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik @Rocketknight1 @NielsRogge I added you as reviewers to cover the transformers, vision and tensorflow aspects of this PR. Apologies if you're not the right person to review this - and please feel free to remove yourselves or add others :) <|||||>I think those are good choices! Reviewing now.
transformers
16,987
closed
Memory calculator for transformer models
### Feature request This feature request is quite high level. The feature is some tool, function, object, etc. that takes in information such as the model config, trainer arguments, and max sequence length and calculates the expected memory usage. It could for instance be used to produce a warning if the expected memory exceeds the currently available memory, and would let you select your hyperparameters with memory usage taken into account (instead of trying out which params raise memory errors and which do not). I mentioned this on the HF discord and was encouraged to make a feature request. ### Motivation When working with this library and the trainer API, I've been missing some type of tool that can calculate the expected memory consumption of your model training. `RuntimeError: CUDA error: out of memory` haunts us all and can perhaps be better understood if we're able to precompute the expected memory to see if the error is expected or not. It also makes it easier to select hyperparameters with memory constraints. ### Your contribution Should this be of interest: * how it will be integrated should probably be agreed upon first. * I'm willing to contribute to this in the summer if nobody has picked it up by then, should my help be wanted
04-28-2022 10:49:57
04-28-2022 10:49:57
Reminds me of your current work on accelerate @muellerzr!<|||||>On my to-do list (asap) is integrating that work from Accelerate. Should be in the next few weeks here. Basically how that one works is we retry the training loop, reducing the batch size until we escape the CUDA OOM (this was requested *many* times in our internal slack to integrate it here as well). Said implementation: https://github.com/huggingface/accelerate/blob/main/src/accelerate/memory_utils.py<|||||>Very interesting, I wasn't aware of this. Looking forward to the integration. Any thoughts on pre-calculating expected memory usage? Or any reason why this would be unfeasible or impractical?<|||||>Without lots of very specific code, currently it is unfeasible (though won't be soon!). The key is pytorch's `meta` device. It currently doesn't work on all ops, but once it does we should be able to track all the sizes of the intermediate activations without real memory usage, getting us there. Otherwise we currently *could* estimate it by just doing the size of the model * the right number based on the optimizer selected, but we'd still be missing all of those intermediate activation sizes. For now, the bs reducer is a good way to only add a few minutes (if not seconds) to get it going, hence why I went with that approach. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
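For readers who only want the batch-size-reducer idea, here is a stripped-down sketch of the retry loop described above. It is my own simplification, not the `accelerate` implementation linked in the comment, and `train_one_epoch` is a placeholder for your own training function:

```python
import torch

def run_with_fallback_batch_size(train_one_epoch, starting_batch_size=64):
    """Retry `train_one_epoch(batch_size)`, halving the batch size on CUDA OOM."""
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            return train_one_epoch(batch_size)
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise
            torch.cuda.empty_cache()
            batch_size //= 2
            print(f"CUDA OOM, retrying with batch_size={batch_size}")
    raise RuntimeError("No batch size small enough to fit in memory was found.")
```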
transformers
16,986
closed
Warning tells you you will get indexing errors in T5 for going beyond max length
### System Info ```shell - `transformers` version: 4.16.2 - Python version: 3.8.12 ``` ### Who can help? @patrickvonplaten @saul ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction To reproduce: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> inputs = tokenizer("foo " * 2000, return_tensors="pt") Outputs `Token indices sequence length is longer than the specified maximum sequence length for this model (4001 > 512). Running this sequence through the model will result in indexing errors` ``` ```python >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> model.generate(**inputs) tensor([[ 0, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575]]) ``` No indexing errors ### Expected behavior The warning is wrong for T5 since it uses relative positional embeddings. I would expect no warning, or otherwise, a warning about memory usage I suppose this issue should apply to all models that do no have fixed length postional encodings
04-28-2022 10:18:23
04-28-2022 10:18:23
Thanks a lot for the issue @marksverdhei . You're right T5 has no fixed max length - so this warning is confusing. The reason why lots of people associate T5 with a max length of 512 was that it was pretrained on a max length of 512, but is not limited to this length! It has shown to generalize well to longer sequences. Also see: https://github.com/huggingface/transformers/issues/5204<|||||>I think it is a bit confusing. As in the paper, "We use a maximum sequence length of 512". Note that this is number of tokens, not the words. This I guess corresponds to max_input_length = 512 parameter. This is the maximum number of tokens that the underlying model can take. You can not change it. But for longer text, you can do scripting to break it into 512 chunks, and feed them to the model. And I guess that is where max_source_length (length of text) is relevant. <|||||>> I think it is a bit confusing. As in the paper, "We use a maximum sequence length of 512". Note that this is number of tokens, not the words. This I guess corresponds to max_input_length = 512 parameter. This is the maximum number of tokens that the underlying model can take. You can not change it. > > But for longer text, you can do scripting to break it into 512 chunks, and feed them to the model. And I guess that is where max_source_length (length of text) is relevant. With T5 you can change max input length. Relative positional embeddings make it possible to process arbitrary lengths, which is what T5 uses, as opposed to classical positional embeddings such as in the original transformer architecture. It is just that when training, a length of 512 tokens is used because it is a trade-off between processing long-enough texts while not using too much time and memory.
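If you still prefer to keep inputs close to the 512-token pretraining length, as the second comment suggests, a simple sliding-window split over token ids could look like this (my own example; `chunk_size` and `stride` are arbitrary choices, not values from the thread):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

def chunk_token_ids(text, chunk_size=512, stride=64):
    ids = tokenizer(text, add_special_tokens=False).input_ids
    step = chunk_size - stride
    return [ids[i : i + chunk_size] for i in range(0, len(ids), step)]

chunks = chunk_token_ids("foo " * 2000)
print([len(c) for c in chunks])  # every chunk is at most 512 tokens, overlapping by `stride`
```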
transformers
16,985
closed
Beginning word ids
tokenizer = AutoTokenizer.from_pretrained("roberta-base") inputs = tokenizer('in an ideal situation, it works') print(inputs.word_ids()) Returns [None, 0, 1, 2, 3, 3, 3, 4, 5, 6, None] Is there a method to identify which token is the beginning of a word? For example, returning [None, 1, 1, 1, 1, 0, 0, 1, 1, 1, None], where 1 marks the start token of a word and 0 marks a token that is not the start of a word.
04-28-2022 09:33:05
04-28-2022 09:33:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
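For reference, one way to derive the requested word-start mask from `word_ids()` (my own sketch, keeping `None` for the special tokens):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("in an ideal situation, it works")
word_ids = inputs.word_ids()

# A token starts a word if its word id differs from the previous token's word id
is_word_start = [
    None if wid is None else int(i == 0 or word_ids[i - 1] != wid)
    for i, wid in enumerate(word_ids)
]
print(is_word_start)  # [None, 1, 1, 1, 1, 0, 0, 1, 1, 1, None]
```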
transformers
16,984
closed
Getting a fixed size embedding from the last hidden state.
I am trying to play with a bare vision model's last hidden state (like ViT, BEiT). How can I get a fixed-size representation, say a 1D vector, out of the last_hidden_state, which is of shape (channels, height, width)? Should I simply flatten it? Please advise.
04-28-2022 07:52:12
04-28-2022 07:52:12
Hi, First of all, please use the forum for these kind of questions, we'd like to keep Github issues for bugs/feature requests. Second, a last hidden state is typically of shape (batch_size, seq_len, hidden_size) for these kind of models, as they are Transformer-based. You feed a sequence of patches through a Transformer encoder, hence you end up with a vector for each of these patches at the end. You can permute it to turn it into a tensor of shape (batch_size, hidden_size, seq_len), and split the last dimension based on the patch size, to get a tensor of shape (batch_size, hidden_size, patch_size, patch_size). This gives you an image-like representation. In code: ``` from transformers import AutoModel import torch model = AutoModel.from_pretrained("microsoft/beit-base-patch16-224") pixel_values = torch.randn(1, 3, 224, 224) outputs = model(pixel_values) last_hidden_state = outputs.last_hidden_state[:,1:,:] # we discard the CLS token batch_size = last_hidden_state.shape[0] num_patches = model.config.image_size // model.config.patch_size image_like_representation = last_hidden_state.permute(0, 2, 1) image_like_representation = image_like_representation.view(batch_size, -1, num_patches, num_patches) ```<|||||>Rank Apology - I will use the forum going forward. But just to close this one out, maybe I am not explaining properly: I am using BEiT as the image encoder and BERT as the Text encoder. I am trying to get a fixed size 1D representation of the last_hidden_state of BEiT (Something like what CLIP obtains) to concatenate with a BERT embedding to feed into an MLP head. I could use the pooler_output of the image but it doesn’t seem to preserve certain spatial nuances hence I would like to use the last_hidden_state Can you help me? Moved to the forum as well - https://discuss.huggingface.co/t/how-to-get-a-fixed-size-embedding-from-the-last-hidden-state-of-vision-models/17275 <|||||>@PrithivirajDamodaran you can adapt code snippet from @NielsRogge as follows: ``` from transformers import AutoModel import torch model = AutoModel.from_pretrained("microsoft/beit-base-patch16-224") pixel_values = torch.randn(1, 3, 224, 224) outputs = model(pixel_values) img_embedding = outputs.last_hidden_state[0, 0, :] # CLS token of the last layer can be used as the image embedding ```<|||||>@nihit - To get the CLS token, I can directly get the ```pooler_output```, instead of slicing last_hidden_state. Because all HF bare models returns by default two keys ```pooler_output``` and ```last_hidden_state```. BTW pooler_output is nothing but the raw first entry (CLS) from the last_hidden_state but it was passed through a simple MLP pooler layer (linear + tanh). I am NOT interested in CLS embedding, I would like to have a fixed representation of the entire last layer itself i.e. last_hidden_state. I have figured this out.<|||||>@PrithivirajDamodaran - I noticed that sometimes the last_hidden_state has seq length dimension different from the number of input_ids from the pre-processor (for example layoutlm). Does anyone know why this happens?
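Since the thread does not settle on a single recipe, here is one common (but by no means the only) way to collapse `last_hidden_state` into a fixed-size 1D vector: mean pooling over the patch tokens (my own sketch, reusing the checkpoint from the snippet above):

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/beit-base-patch16-224")
pixel_values = torch.randn(1, 3, 224, 224)

last_hidden_state = model(pixel_values).last_hidden_state  # (batch, seq_len, hidden_size)
patch_tokens = last_hidden_state[:, 1:, :]                 # drop the CLS token
image_embedding = patch_tokens.mean(dim=1)                 # (batch, hidden_size)
print(image_embedding.shape)
```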
transformers
16,983
closed
how huggingface process uneven input tensors
### Feature request For IterableDataset, it seems there's no way to handle uneven input tensors in trainer.py. (Please correct me if I misunderstand this.) The PyTorch documentation suggests using Join: https://pytorch.org/tutorials/advanced/generic_join.html#what-is-join Secondly, the dataloader for IterableDataset doesn't have a sampler, which may be an issue when num_workers > 0. https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer.py#L672 ### Motivation Currently in trainer.py, the IterableDataset is wrapped into IterableDatasetShard in distributed training. It seems that this requires every process to have the same whole dataset and to distribute the samples in [IterableDatasetShard](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L678), which makes every batch have the same amount of data by [padding with the first batch's data](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L770). During training, since the data is guaranteed to be equal in every process, there is no need to use [Join](https://pytorch.org/tutorials/advanced/generic_join.html#what-is-join) to process uneven input datasets. Here's my case. I have a very large audio dataset of 1 million audio files, and each file requires the same processing step as [this example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py). If I follow the same steps as this example, the large training dataset not only requires a lot of processing time but also a lot of disk storage to store the features. Therefore I decided to use IterableDataset to do on-the-fly preprocessing. However, I need to maintain a buffer to do some shuffling, so the CPU memory increases greatly and every process does a lot of duplicate work that the other processes have already done. For example, let's say we have 2 processes to process 11 samples. Every process has to process all 11 samples instead of 5 + 6. So I decided to give up the IterableDatasetShard and use IterableDataset directly. Every process has its own part of the data to process. For example, process 0 has 5 samples and process 1 has 6 samples. However, this also means some processes will finish training earlier. But I see there's no join used in trainer.py. So I was wondering what the best practice is for training and inference in this use case. Thanks! ### Your contribution Not sure
04-28-2022 06:54:33
04-28-2022 06:54:33
Please use the [forum](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only. In this instance, I don't think the `Trainer` will help you as you want a very specific data processing, so you should use `Accelerate`. The dispatch feature it offers is done exactly for users with `IterableDataset` that don't want to process the data in each process.<|||||>> Please use the [forum](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only. In this instance, I don't think the `Trainer` will help you as you want a very specific data processing, so you should use `Accelerate`. The dispatch feature it offers is done exactly for users with `IterableDataset` that don't want to process the data in each process. Thank you. I solved this issue just by customizing the get_train_loader, doing padding in my own pipeline, removing the data collator, and adding a model.join() context. BTW, can the dispatch feature in Accelerate handle every process having a different number of batches in one epoch when dataloader.num_workers > 0, without duplicate data processing? Another thing I found is that [find_batch_size](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L105) cannot support [BatchFeature](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/feature_extraction_utils.py#L62). Not sure if this is an issue. <|||||>The last one is a bug, will fix that!
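For reference, a minimal sketch of the `Join` context manager mentioned above (my own illustration; it assumes a recent PyTorch, an already-initialized process group, and a model that returns a `loss` attribute, so the loop details are placeholders):

```python
import torch
from torch.distributed.algorithms.join import Join
from torch.nn.parallel import DistributedDataParallel as DDP

def train_with_uneven_shards(model, dataloader, optimizer):
    ddp_model = DDP(model)  # assumes torch.distributed.init_process_group() was already called
    ddp_model.train()
    # Ranks that run out of batches early keep shadowing the collectives of the other ranks
    with Join([ddp_model]):
        for batch in dataloader:
            loss = ddp_model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```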
transformers
16,982
closed
Exporting DeBerta using custom onnx configuration
### Feature request I am trying to export a DeBerta model, but since the current version of transformers[onnx] doesn't support the DeBerta architecture, I am trying to do it by implementing a custom ONNX configuration. Although I am able to provide the required inputs, I am not really getting the required output shape for the **Sequence Classification** task. I also tried to use the approach mentioned below, but to no avail ~~~ from collections import OrderedDict from typing import Mapping from pathlib import Path from transformers.onnx import export from transformers.onnx import OnnxConfig from transformers import AutoConfig, AutoModel, AutoTokenizer onnx_path = Path("C:/Users/Hp/zsc/onnx_deberta/model3.onnx") class DebertaConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ] ) config = AutoConfig.from_pretrained("Narsil/deberta-large-mnli-zero-cls") base_model = AutoModel.from_pretrained("Narsil/deberta-large-mnli-zero-cls") tokenizer = AutoTokenizer.from_pretrained("Narsil/deberta-large-mnli-zero-cls") onnx_config = DebertaConfig(config, task="sequence-classification") onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path) ~~~ The onnx_outputs shape is (1, 10, 1024) instead of (1, 3). Is there any way to achieve this, or am I doing something wrong?
04-28-2022 06:28:53
04-28-2022 06:28:53
cc @lewtun @michaelbenayoun <|||||>Hi @RaiAmanRai thanks for reporting this issue! Would you be able to share a reproducible code snippet that also shows the type of inputs you're feeding to the exported model (e.g. with ONNX Runtime)?<|||||>Hi @lewtun , thanks for stopping by. The above mentioned code snippet was used to export the model into .onnx format. The following code snippet was used to check the outputs of the model and its shape ~~~ import onnxruntime as ort import numpy as np ort_session = ort.InferenceSession('C:/Users/Hp/zsc/onnx_deberta/model3.onnx') inputs = tokenizer("Using BERT in ONNX and we are doing this as a test to check the output shape!", return_tensors="np", return_token_type_ids=False) inputs['attention_mask'] = inputs['attention_mask'].astype(np.int64) inputs['input_ids'] = inputs['input_ids'].astype(np.int64) outputs = ort_session.run(onnx_outputs, dict(inputs)) outputs[0].shape ~~~ Here, tokenizer is the same instance used above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun @michaelbenayoun can you guys please look into this issue. This has become a major issue in the development I am working on, and would request to resolve it. Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I try to fix the same issue. Is there any solution about this issue<|||||>Hi @forrestfaraday , @RaiAmanRai , The support for the ONNX export is now under [`optimum.exporters.onnx`](https://github.com/huggingface/optimum/releases/tag/v1.5.0), and we actually support the export of Deberta. All you need to do is installing `optimum`: ```bash pip install optimum ``` Then run the `optimum.exporters.onnx` CLI: ```bash python -m optimum.exporters.onnx --model Narsil/deberta-large-mnli-zero-cls deberta_onnx/ ```<|||||>**Thanks for the explanation. Could you please help me with this error. 
Because I tried a lot of time this pipeline with different deberta models.** `hg_checkpoint = "microsoft/deberta-v3-base" save_hg = "tmp/hg_onnx/"` **Load a model from transformers and export it to ONNX** `ort_model_hg = ORTModelForTokenClassification.from_pretrained(hg_checkpoint, from_transformers=True) tokenizer_hg = AutoTokenizer.from_pretrained(hg_checkpoint)` **Save the onnx model and tokenizer** `ort_model_hg.save_pretrained(save_hg) tokenizer_hg.save_pretrained(save_hg)` **Define the quantization methodology** `qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False) quantizer_hg = ORTQuantizer.from_pretrained(ort_model_hg)` **Apply dynamic quantization on the model** `quantizer_hg.quantize(save_dir=save_hg, quantization_config=qconfig) from optimum.onnxruntime import ORTModelForTokenClassification from transformers import pipeline, AutoTokenizer` `model_hg = ORTModelForTokenClassification.from_pretrained(save_hg, file_name="model_quantized.onnx") tokenizer_hg = AutoTokenizer.from_pretrained(save_hg) pipeline_hg = pipeline("token-classification", model=model_hg, tokenizer=tokenizer_hg, aggregation_strategy = 'first') results = pipeline_hg(text) results` **InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids** <|||||>I have no problems quantizing/optimizing the deberta model and then loading it. I am facing the below error when importing predict when using pipeline. **InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids**<|||||>Could you open a PR on the [optimum repo](https://github.com/huggingface/optimum/issues) please? We will try to figure it out there!
transformers
16,981
closed
Skip RoFormer ONNX test if rjieba not installed
# What does this PR do? This PR adds the `@require_rjieba` decorator to the slow ONNX tests to deal with the following error in our daily CI runs: ``` (line 164) ImportError: You need to install rjieba to use RoFormerTokenizer. See https://pypi.org/project/rjieba/ for installation. ``` ~~I wasn't sure if `rjieba` should actually be installed in the GitHub workflow, but it doesn't seem to be the case for the RoFormer tests and so I omitted that for now.~~ Edit: I've added `rjieba` to the `"tests"` extras and also tested that the slow ONNX test passes when this dep is installed: ``` RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "roformer" ```
04-28-2022 05:32:15
04-28-2022 05:32:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>So this test is then currently just skipped on our ONNX tests? Should we maybe not better add `rjieba` to the test package to the test RoFormer (I couldn't find it in the `setup.py` or in the Docker file) cc @LysandreJik @sgugger <|||||>It looks like RoFormer tokenization is completely untested yes, so this package should be added in the `"testing"` extra.<|||||>I agree we should include `rjieba` for the tests, unless there are reasons for not adding specific packages. ### Further remark If we could not add `rjieba`, it is not a good idea to add `@require_rjieba` for `test_pytorch_export`, otherwise this test won't be run for other models neither. In this case, I think we might need a specific way to skip this test for `RoFormer` (and others that require `rjieba`).<|||||>Thanks for the feedback - I'll add `rjieba` to our testing suite as well :)<|||||>Hey @sgugger @patrickvonplaten I'm hitting some peculiar issues with 2 of the slow tests of the RoFormer tokenizer. Would you mind taking a look and seeing if my decision to skip them is valid?<|||||>I'll let @patrickvonplaten decide as I know nothing on that model too :-)<|||||>There is a test dedicated to custom tokenizers with specific dependencies: https://github.com/huggingface/transformers/blob/main/.circleci/config.yml#L538 It installs `jieba` but not `rjieba`. Would it make sense to add it there? If you're testing for ONNX, it's very likely that it does not make sense as it's limited to tokenizer tests right now.<|||||>> There is a test dedicated to custom tokenizers with specific dependencies: https://github.com/huggingface/transformers/blob/main/.circleci/config.yml#L538 > > It installs `jieba` but not `rjieba`. Would it make sense to add it there? If you're testing for ONNX, it's very likely that it does not make sense as it's limited to tokenizer tests right now. Thanks for the tip! Done in [3cafcb2](https://github.com/huggingface/transformers/pull/16981/commits/3cafcb2e06bc7caf0eba2e03e817fedcf0cfe073)<|||||>Hey @patrickvonplaten @LysandreJik I think this PR is ready for a final pass :) The failing test is unrelated to the PR itself (a failing Pegasus generate test)
transformers
16,980
closed
Remove masked image modeling from BEIT ONNX export
# What does this PR do? This PR removes masked image modeling from the list of supported features in the ONNX exporter. As explained by @NielsRogge, BEiT cannot be loaded with the `AutoModelForMaskedImageModeling` class due to: > Well yeah that's because BEiT does masked image modeling by predicting visual tokens of a VQ-VAE, whereas the other ones predict pixel values (RGB) as in the [SimMIM paper](https://arxiv.org/abs/2111.09886). So I'm afraid BEiT cannot be added to this auto class. I've also added a note in the BEiT docs to help users who don't know these details. I've also checked that the slow tests pass for ONNX with ``` RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s ``` Edit: we should merge this after #16981 to ensure the RoFormer tests pass first
04-28-2022 05:16:16
04-28-2022 05:16:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, There's a reason I haven't added BEiT to the auto classes. It's because it can't be used with the run_mim.py script, because BEiT handles masked image modeling differently compared to the other ones (which do it similar to the way it's defined in SimMIM paper). So this may confuse users, maybe we should properly document it that BEiT is not the same as the other ones<|||||>Ah I see, but isn't a bit odd to exclude BEiT just because it isn't compatible with our example scripts? For instance, is there anything fundamentally wrong with loading `BeitForMaskedImageModeling` via the autoclass if I'm rolling my own masked image modeling code? If not, I'd prefer to keep BEIT in the autoclasses and put the warning inside the `run_mim.py` script if a user tries to run it with this architecture<|||||>Hmm maybe there is a fundamental issue with using BEiT in the autoclasses as I'm seeing the torch tests fail with: ``` self = BeitEmbeddings( (patch_embeddings): PatchEmbeddings( (projection): Conv2d(3, 32, kernel_size=(2, 2), stride=(2, 2)) ) (dropout): Dropout(p=0.1, inplace=False) ) pixel_values = tensor([[[[7.7614e-01, 1.7656e-01, 6.0460e-01, ..., 3.9106e-01, 5.2019e-01, 8.9339e-01], [2.7568...1, 9.9367e-01], [9.4963e-01, 1.6943e-01, 9.7946e-01, ..., 1.9085e-01, 1.9910e-01, 4.6059e-02]]]]) bool_masked_pos = tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]) def forward(self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None) -> torch.Tensor: embeddings = self.patch_embeddings(pixel_values) batch_size, seq_len, _ = embeddings.size() cls_tokens = self.cls_token.expand(batch_size, -1, -1) if bool_masked_pos is not None: > mask_tokens = self.mask_token.expand(batch_size, seq_len, -1) E AttributeError: 'NoneType' object has no attribute 'expand' ```<|||||>Well yeah that's because BEiT does masked image modeling by predicting visual tokens of a VQ-VAE, whereas the other ones predict pixel values (RGB) as in the [SimMIM paper](https://arxiv.org/abs/2111.09886). So I'm afraid BEiT cannot be added to this auto class.<|||||>OK thanks for the clarification. I'll remove this feature from the ONNX export and add a note to the BEiT docs :)
transformers
16,979
closed
Added translation of installation.mdx to Portuguese Issue #16824
# What does this PR do? Creates the folder pt in docs/source for translating documentation to Portuguese. Currently, only the installation.mdx file has been translated as of this PR. Fixes issue #16824 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests?
04-28-2022 04:57:25
04-28-2022 04:57:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>I'm still working on translating the remaining files, although there's already some work done on three files.<|||||>Maybe ~@gante~ @omarespejel ? :smile: <|||||>Obrigado @rzimmerdev! @sgugger, LGTM. Ready to merge and start the Portuguese docs 🤗 I removed the preprocessing doc from this PR because it was not ready yet.
transformers
16,978
closed
Data collator using in Parallel training & Disable to use DistributedDataParallel
### System Info ```shell - `transformers` version: 4.12.0.dev0 - Platform: Linux-4.15.0-45-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Both are tried and failed ``` ### Who can help? Library/Trainer: @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction code sample that reproduces the problem(with data): https://drive.google.com/file/d/1jFLV9Ir0Um3C6MTPjCgZMUKJtmasrQPT/view?usp=sharing terminal input: For DDP: CUDA_VISIBLE_DEVICES=5,6 python -m torch.distributed.launch base-Trainer.py --mode "Data" --train_data_path utils/dev-small.json --output_dir outputs/Data_only_load_data/ --do_train --per_device_train_batch_size 4 --save_steps 100000 For DataParallel: CUDA_VISIBLE_DEVICES=5,6 python base-Trainer.py --mode "Data" --train_data_path utils/dev-small.json --output_dir outputs/Data_only_load_data/ --do_train --per_device_train_batch_size 4 --save_steps 100000 some explaination to the task and files: 1. The task is MLM, with specified mask tokens, not randomly choosed. 2. tokenizer is basically a roberta-base-tokenizer, adding a special token [pron] 3. Dataset returns one sample of data each time, with type str or (str, str) 4. tokenization and label-creating are realized in collater() error messages: when using DataParallel: /home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. when using DDP: 1. set 2 gpu devices but only one gpu is used. 5. 
terminal outputs as follows: Traceback (most recent call last): File "base-Trainer.py", line 67, in <module> main() File "base-Trainer.py", line 62, in main trainer.train() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1383, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1475, in _maybe_log_save_evaluate tr_loss_scalar = self._nested_gather(tr_loss).mean().item() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 2385, in _nested_gather tensors = distributed_concat(tensors) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 168, in distributed_concat dist.all_gather(output_tensors, tensor) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1185, in all_gather work = _default_pg.allgather([tensor_list], [tensor]) RuntimeError: All tensor operands to scatter/gather must have the same size 0%|▎ | 500/183354 [02:03<12:32:13, 4.05it/s] Traceback (most recent call last): File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError: Command '['/home/caoyq/.conda/envs/torch_cp37/bin/python', '-u', 'base-Trainer.py', '--local_rank=0', '--mode', 'Data', '--train_data_path', 'utils/dev-small.json', '--output_dir', 'outputs/Data_only_load_data/', '--do_train', '--per_device_train_batch_size', '4', '--save_steps', '100000']' returned non-zero exit status 1. ### Expected behavior ```shell 1. I want to use DDP, not just DP 2. From the error messages given above, my data collator may have problems. It would be so nice of you if you could tell me what's wrong and how can I fixed it. If there exists a sample I can follow, please let me know. best wish ```
04-28-2022 03:31:34
04-28-2022 03:31:34
Setting nproc_per_node=2 enables the multi-gpu training by ddp for the first 500 steps. But right after the 500th step, the same problem occurs as follows: Traceback (most recent call last): File "base-Trainer.py", line 67, in <module> main() File "base-Trainer.py", line 62, in main trainer.train() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1383, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1475, in _maybe_log_save_evaluate tr_loss_scalar = self._nested_gather(tr_loss).mean().item() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 2385, in _nested_gather tensors = distributed_concat(tensors) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 168, in distributed_concat dist.all_gather(output_tensors, tensor) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1185, in all_gather work = _default_pg.allgather([tensor_list], [tensor]) RuntimeError: All tensor operands to scatter/gather must have the same size Traceback (most recent call last): File "base-Trainer.py", line 67, in <module> main() File "base-Trainer.py", line 62, in main trainer.train() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1383, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1475, in _maybe_log_save_evaluate tr_loss_scalar = self._nested_gather(tr_loss).mean().item() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 2385, in _nested_gather tensors = distributed_concat(tensors) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 168, in distributed_concat dist.all_gather(output_tensors, tensor) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1185, in all_gather work = _default_pg.allgather([tensor_list], [tensor]) RuntimeError: All tensor operands to scatter/gather must have the same size 1%|▉ | 500/91677 [02:39<8:03:56, 3.14it/s] Traceback (most recent call last): File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError: Command '['/home/caoyq/.conda/envs/torch_cp37/bin/python', '-u', 'base-Trainer.py', '--local_rank=1', '--mode', 'Data', '--train_data_path', 'utils/dev-small.json', '--output_dir', 'outputs/Data_only_load_data/', '--do_train', '--per_device_train_batch_size', '4', '--save_steps', '100000', '--log_on_each_node', '0']' returned non-zero exit status 1.<|||||>Please use the [forums](https://discuss.huggingface.co/) to debug your code (which you should provide if you want people to be able to help you) as we keep issues for 
feature requests and identified bugs in the library.<|||||>I met the same problem. Have you fixed it? @CaoYiqingT <|||||>@Jun-jie-Huang This problem happens when using the log service. I just closed the logging, and then the Trainer runs well. But this is only a temporary measure. I hope this helps you.<|||||>@CaoYiqingT Thanks for your quick response! Closing the log works for me. 👍 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
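The thread never identifies which logging backend was the culprit, so purely as an illustration (an assumption on my part, not a confirmed fix), third-party reporting and periodic logging can be switched off through `TrainingArguments` like this:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs/Data_only_load_data/",
    per_device_train_batch_size=4,
    report_to="none",       # turn off third-party logging integrations (wandb, tensorboard, ...)
    logging_strategy="no",  # skip the periodic loss logging whose gather appears in the traceback
)
```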
transformers
16,977
closed
Update README_zh-hans.md
null
04-28-2022 02:01:01
04-28-2022 02:01:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,976
closed
Bug: Finetuning large models resume checkpoint error
When finetuning a large model (e.g. Eleuther 6B), you shard the checkpoints upon saving [here](https://github.com/huggingface/transformers/blob/c79bbc3ba54a81dab2eac13d89f264ed64cb2460/src/transformers/modeling_utils.py#L193). However, upon resuming from the checkpoint (and when loading the best checkpoint after training), you check whether there is a valid checkpoint under the assumption that the weights are not sharded [here](https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/trainer.py#L1196). This causes an error upon resuming training.
04-27-2022 23:09:21
04-27-2022 23:09:21
Indeed, I saw that yesterday and am working on a fix.<|||||>Should be fixed by the PR mentioned above :-)<|||||>Thanks!!
transformers
16,975
closed
Trainer: TypeError: an integer is required (got type NoneType)
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @lys ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer from transformers import TrainingArguments import numpy as np from datasets import load_metric import torch class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = 
features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], download_mode="force_redownload" ) print(sentences) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() model_name = 'microsoft/xtremedistil-l6-h256-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=128) tokenized_datasets = sentences.map(tokenize_function, batched=True) full_train_dataset = tokenized_datasets["train"] full_eval_dataset = tokenized_datasets["test"] device = "cuda:0" if torch.cuda.is_available() else "cpu" print(device) model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) model = model.to(device) metric = load_metric("accuracy") def compute_metrics(eval_pred): print(eval_pred) logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) training_args = TrainingArguments("test_trainer", per_device_train_batch_size=128, num_train_epochs=24,learning_rate=3e-05, evaluation_strategy="epoch") from transformers import Trainer trainer = Trainer( model=model, args=training_args, train_dataset=full_train_dataset, eval_dataset=full_eval_dataset, compute_metrics=compute_metrics, ) trainer.train() ``` Stack trace: ``` ***** Running training ***** Num examples = 8256315 Num Epochs = 24 Instantaneous batch size per device = 128 Total train batch size (w. parallel, distributed & accumulation) = 128 Gradient Accumulation steps = 1 Total optimization steps = 1548072 [ 4942/1548072 50:35 < 263:23:43, 1.63 it/s, Epoch 0.08/24] Epoch | Training Loss | Validation Loss Saving model checkpoint to test_trainer/checkpoint-500 Configuration saved in test_trainer/checkpoint-500/config.json Model weights saved in test_trainer/checkpoint-500/pytorch_model.bin Saving model checkpoint to test_trainer/checkpoint-1000 Configuration saved in test_trainer/checkpoint-1000/config.json Model weights saved in test_trainer/checkpoint-1000/pytorch_model.bin Saving model checkpoint to test_trainer/checkpoint-1500 ... Saving model checkpoint to test_trainer/checkpoint-4500 Configuration saved in test_trainer/checkpoint-4500/config.json Model weights saved in test_trainer/checkpoint-4500/pytorch_model.bin --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-10-3435b262f1ae>](https://localhost:8080/#) in <module>() ----> 1 trainer.train() 5 frames [/usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in torch_default_data_collator(features) 113 label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"] 114 dtype = torch.long if isinstance(label, int) else torch.float --> 115 batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype) 116 elif "label_ids" in first and first["label_ids"] is not None: 117 if isinstance(first["label_ids"], torch.Tensor): TypeError: an integer is required (got type NoneType) ``` ### Expected behavior ```shell training complete successfully. ```
04-27-2022 21:17:48
04-27-2022 21:17:48
Could be this issue related to this **[SF](https://stackoverflow.com/questions/70699247/typeerror-an-integer-is-required-got-type-nonetype)** answer? The dataset looks like ``` Dataset({ features: ['label', 'text', 'input_ids', 'token_type_ids', 'attention_mask'], num_rows: 8256315 }) ``` and features ``` {'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'label': ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None), 'text': Value(dtype='string', id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)} ```<|||||>[UPDATE] I have changed the tokenize function like ```python def 
tokenize_function(batch): tokens = tokenizer(batch['text'], padding="max_length", truncation=True, max_length=128) tokens['label'] = features["label"].str2int(batch['label']) return tokens tokenized_datasets = sentences.map(tokenize_function, batched=True) ``` and removed the mapping as defined above, but now I'm facing a `None` label issue: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-39-3f04e6ec6f6e>](https://localhost:8080/#) in <module>() 14 tokens['label'] = features["label"].str2int(batch['label']) if batch["label"] is not None else None 15 return tokens ---> 16 tokenized_datasets = sentences.map(tokenize, batched=True) 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: 'None' ```<|||||>Solved filtering `None` rows ```python sentences = sentences.filter(lambda example: example['label'] is not None and example['text'] is not None) ``` and slightly changing the `tokenizer` ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli",
"laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], download_mode="force_redownload") sentences = sentences.filter(lambda example: example['label'] is not None and example['text'] is not None) sentences = sentences.shuffle() from transformers import AutoTokenizer model_name = 'microsoft/xtremedistil-l6-h256-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) def tokenize(batch): tokens = tokenizer(batch['text'], padding="max_length", truncation=True, max_length=128) tokens['label'] = features["label"].str2int(batch['label']) return tokens tokenized_datasets = sentences.map(tokenize, batched=True) full_train_dataset = tokenized_datasets["train"] full_eval_dataset = tokenized_datasets["test"] import torch device = "cuda:0" if torch.cuda.is_available() else "cpu" print(device) from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) model = model.to(device) import numpy as np from datasets import load_metric metric = load_metric("accuracy") def compute_metrics(eval_pred): print(eval_pred) logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) from transformers import TrainingArguments training_args = TrainingArguments("test_trainer", per_device_train_batch_size=128, num_train_epochs=24,learning_rate=3e-05, evaluation_strategy="epoch") from transformers import Trainer trainer = Trainer( model=model, args=training_args, train_dataset=full_train_dataset, eval_dataset=full_eval_dataset, compute_metrics=compute_metrics, ) ```
transformers
16,974
closed
TF: XLA bad words logits processor and list of processors
# What does this PR do? This PR converts to XLA-compatible the `bad_words` logits processor. As per the discussion below, I was unable to convert the `ngrams` one -- added an exception and a TODO. Also makes a change to the list of processors -- XLA raised issues when the processors had different arguments, so I had to add `cur_len` to all processors. After the change, the list wrapper is also compatible with XLA.
04-27-2022 20:18:48
04-27-2022 20:18:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 @patrickvonplaten I'm stuck on the ngram logits processor, so I'd like to request your suggestions regarding what to try out next :D The bad words logits processor is ready and XLA-compatible. Context: 1. Without XLA, it works well; 1. With XLA, yields incorrect outputs (it masks the wrong tokens in some cases). It is not a CPU/GPU thing -- it has the same output regardless of the hardware; 2. The XLA/non-XLA mismatch is at the output of `_calc_row_banned_ngram_tokens`, which gets the tokens that should be banned for each row; 3. All intermediary variables I was able to pull out had the same contents. However, if I try to pull out all ngrams, I get a core dumped on XLA 🤔 Things I've tried (without any symptom change): 1. The current implementation is a `tf.while_loop` with `tf.TensorArray`. On https://github.com/huggingface/transformers/pull/16974/commits/ddc89115e88a24e3fd79e210ef3f4b9e51ba54c7, we can see my original implementation with a `tf.map_fn` (which is closer to the original code). Both versions have the exact same symptoms described above, and return the same errors for the same inputs when XLA is on (!); 2. Pulling the initialization of the `tf.TensorArray` to the start of `__call__`, pass `ngram_size` as an argument, and use `tf.function` as a decorator to `__call__`. The two first changes are to attempt a retrace trigger, the last one to rule out problems associated with attempting to compile a class instance (as opposed to a function); 3. Using `tf.shape` instead of `tensor.shape`, as the former is more suited for symbolic tensors; 4. Using batches with a single row as input; 5. Looking for other ways to implement the sliding window on the inputs (i.e. getting the ngrams), with no success.<|||||>I'd be very much in favor of just not converting the `ngram` Processor. I don't think it's a necessary requirement to publish the new TF generate method. Let's maybe leave this as a hard second issue in case the community is very interested in this feature. I think it's now more important to think about how to advertise, document XLA TF generate well and not loose too much time on this. <|||||>Also not that many models use this processor (only know of BART and T5 for some summarization tasks) <|||||>Agree that it's not necessary to convert this one, but examining it, I suspect that there are some sneaky changes in output size depending on inputs, and XLA is struggling to deal with it. It seems very tough to convert to XLA, but if we decide we need it later let me know and I'll do my best to dig into it.<|||||>Great 👍 I'm going to revert that one, add a TODO pointing at this PR, add a few final tests for the list of logits processors with XLA, and will ping you back.<|||||>@Rocketknight1 @patrickvonplaten ready for review
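For context, a small, self-contained illustration of the `tf.while_loop` + `tf.TensorArray` pattern mentioned above (not the actual `bad_words` implementation, only the looping construct that is expected to compile under XLA):

```python
import tensorflow as tf


@tf.function(jit_compile=True)  # request XLA compilation
def row_sums(matrix):
    """Sum each row with an explicit while_loop + TensorArray, mirroring the processor's looping style."""
    num_rows = tf.shape(matrix)[0]
    ta = tf.TensorArray(dtype=matrix.dtype, size=num_rows)

    def body(i, ta):
        # Write one result per iteration; TensorArray must flow through the loop vars.
        return i + 1, ta.write(i, tf.reduce_sum(matrix[i]))

    _, ta = tf.while_loop(lambda i, _: i < num_rows, body, (0, ta))
    return ta.stack()


print(row_sums(tf.constant([[1.0, 2.0], [3.0, 4.0]])))  # [3. 7.]
```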
transformers
16,973
closed
Update check_models_are_tested to deal with Windows path
# What does this PR do? `TEST_FILES_WITH_NO_COMMON_TESTS` contains forward slash like `mt5/test_modeling_flax_mt5.py`. The condition `if test_file in TEST_FILES_WITH_NO_COMMON_TESTS:` in `check_models_are_tested` would give failures to fix on Windows, like ``` camembert\test_modeling_camembert.py should define `all_model_classes` to apply common tests ``` This PR uses `test_file.replace(os.sep, "/")` to make it work on Windows too 😄
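A minimal sketch of the normalization, with a trimmed-down stand-in for the real allow-list:

```python
import os

# Trimmed example of the allow-list, which stores paths with forward slashes.
TEST_FILES_WITH_NO_COMMON_TESTS = {"mt5/test_modeling_flax_mt5.py"}


def is_exempt(test_file: str) -> bool:
    # Normalize Windows back-slashes so the membership check works on any OS.
    return test_file.replace(os.sep, "/") in TEST_FILES_WITH_NO_COMMON_TESTS


print(is_exempt(os.path.join("mt5", "test_modeling_flax_mt5.py")))  # True on both Windows and Linux
```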
04-27-2022 19:30:07
04-27-2022 19:30:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,972
closed
Issue in reformer: Reformer doesn't depend on its key feature -- `LSHSelfAttention`
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` conda create -n reformer-issue python=3.8 -y pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html pip install -e . # install from source python check_reformer.py ``` Make file changes (very minimal changes) as my PR here: https://github.com/leo-liuzy/transformers/pull/2 Changes are located [here](https://github.com/leo-liuzy/transformers/pull/2/files#diff-4f979561f9762bfd9333c74331153c4ee974120a4cf3c28052a29ec7e2c15ed7R1482) I made my fork from huggingface main two days ago. I also play with removing `LocalSelfAttention` and the perplexity greatly improve especially with `long_inputs_lst` (in the file). When just using `LSHSelfAttention`, increase num_hash doesn't help. My question is: **could this be caused by an innocent bug in transferring from Reformer's official code? Or is this intrinsic to the reformer?** I know in reformer they had a 20-layer transformer trained with 20 LSHSelfAttention and it shows good performance; that's why it further confused me. ### Expected behavior ```shell With `weight = 0 if isinstance(self.attention.self_attention, LSHSelfAttention) else 1` No. 
hash: 1 Seq_len(43) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.614 ppl: 6.123 Seq_len(85) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 3.808 ppl: 14.006 Seq_len(135) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.230 ppl: 4.693 Seq_len(53) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.261 ppl: 4.792 Seq_len(47) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.646 ppl: 6.258 Seq_len(78) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.347 ppl: 5.087 Seq_len(26) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.712 ppl: 6.553 Seq_len(63) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 3.568 ppl: 11.858 Seq_len(147) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0 bpd: 2.983 ppl: 7.907 With `weight = 1 if isinstance(self.attention.self_attention, LSHSelfAttention) else 1` No. 
hash: 1 Seq_len(43) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.614 ppl: 6.123 Seq_len(85) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 3.808 ppl: 14.006 Seq_len(135) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.218 ppl: 4.651 Seq_len(53) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.261 ppl: 4.792 Seq_len(47) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.646 ppl: 6.258 Seq_len(78) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.347 ppl: 5.087 Seq_len(26) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.712 ppl: 6.553 Seq_len(63) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 3.568 ppl: 11.858 Seq_len(147) Using LSHAttn: <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 <class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1 bpd: 2.973 ppl: 7.850 ```
04-27-2022 18:34:04
04-27-2022 18:34:04
Hey @leo-liuzy, Sorry, what exactly is the issue here with Reformer? Is the training not working?<|||||>Hi @patrickvonplaten, I am evaluating the released model trained on Crime and Punishment (with examples randomly grabbed from the book). I found that if I exclude the LSHSelfAttention output from the perplexity computation, the perplexity doesn't change much. But if I remove LocalSelfAttention, the PPL goes up by a lot. So I wonder whether this is caused by a bug (even during training) in the codebase, or whether it is intrinsic to this specific Reformer model structure -- (`attn_layers = ["lsh", "local", "lsh", "local", "lsh", "local"]`)<|||||>I'm not really sure @leo-liuzy sadly - I've never removed the local layers when training the model. Maybe you can also try asking on https://discuss.huggingface.co/ :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,971
closed
AttributeError: 'DataParallel' object has no attribute 'save_pretrained'
### System Info ```shell torch==1.10.2+cu113 transformers==4.18.0 Python 3.6.9 Linux "18.04.6 LTS (Bionic Beaver)" ``` I am training a T5 transformer (T5ForConditionalGeneration.from_pretrained(model_params["MODEL"])) to generate text. The model works well when I train it on a single GPU. But when I want to parallelize the data across several GPUs by doing `model = nn.DataParallel(model)`, I can't save the model. The error is: > File "run.py", line 288, in T5Trainer > model.save_pretrained(path) > File "/home/USER_NAME/venv/pt_110/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__ > type(self).__name__, name)) > AttributeError: 'DataParallel' object has no attribute 'save_pretrained' ### Reproduction Wrap the model with `model = nn.DataParallel(model)`. ### Expected behavior ```shell The model should be saved without any issues. ```
04-27-2022 18:17:45
04-27-2022 18:17:45
`DataParallel` wraps the model. To access the underlying module, you can use the `module` attribute: ```py >>> from torch import nn >>> model = nn.DataParallel(model) >>> model.module.save_pretrained(<directory>) ```<|||||>> `DataParallel` wraps the model. To access the underlying module, you can use the `module` attribute: > > ```python > >>> from torch import nn > >>> model = nn.DataParallel(model) > >>> model.module.save_pretrained(<directory>) > ``` Thanks @LysandreJik!
transformers
16,970
closed
Fix check_all_models_are_tested
# What does this PR do? The block from L396 to L398 should be in the `else` block if I understand correctly. https://github.com/huggingface/transformers/blob/8d3f952adb8c98cec2ea1f59bb7acfbc08232381/utils/check_repo.py#L394-L398 Otherwise, when a model has no test file, I get errors like below, and the program stops immediately. (with `test_file = []` passed to `check_models_are_tested`) ```python File "/home/yih_dar_huggingface_co/transformers/utils/check_repo.py", line 362, in check_models_are_tested tested_models = find_tested_models(test_file) File "/home/yih_dar_huggingface_co/transformers/utils/check_repo.py", line 343, in find_tested_models with open(os.path.join(PATH_TO_TESTS, test_file), "r", encoding="utf-8", newline="\n") as f: File "/usr/lib/python3.9/posixpath.py", line 90, in join genericpath._check_arg_types('join', a, *p) File "/usr/lib/python3.9/genericpath.py", line 152, in _check_arg_types raise TypeError(f'{funcname}() argument must be str, bytes, or ' TypeError: join() argument must be str, bytes, or os.PathLike object, not 'list' ```
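A minimal sketch of the intended control flow, with simplified stand-ins for the real constants and helpers:

```python
import os

PATH_TO_TESTS = "tests"  # simplified stand-in for the real constant


def find_tested_models(test_file):
    """Stub for the real helper that extracts `all_model_classes` from a test file."""
    with open(os.path.join(PATH_TO_TESTS, test_file), encoding="utf-8") as f:
        return "all_model_classes" in f.read()


def check_model_is_tested(module_name, test_files, failures):
    if len(test_files) == 0:
        failures.append(f"{module_name} does not have its corresponding test file.")
    elif len(test_files) > 1:
        failures.append(f"{module_name} has several test files.")
    else:
        # Only look inside the test file when exactly one exists; with the original
        # indentation this branch ran even for an empty list and crashed in os.path.join.
        if not find_tested_models(test_files[0]):
            failures.append(f"{test_files[0]} should define `all_model_classes`.")
```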
04-27-2022 18:04:35
04-27-2022 18:04:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,969
closed
Fix doc notebooks links
# What does this PR do? Notebooks for the documentation have moved under the `en` folder, this PR fixes all the links we have.
04-27-2022 17:25:31
04-27-2022 17:25:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,968
closed
Fixup no_trainer save logic
# Fix save logic in all `no_trainer` examples ## What does this add? This PR fixes a bug pointed out in https://github.com/huggingface/accelerate/issues/322, where the save and load logic was wrong in how it skipped over the steps in the training loop. This PR fixes it and changes the internals slightly so that a saved checkpoint is named correctly (before, it always started at `epoch_0`, even if we resumed from epoch 1).
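A minimal sketch of the corrected skip-on-resume logic, with illustrative names rather than the exact variables used in the scripts:

```python
def training_loop(train_batches, num_train_epochs, starting_epoch=0, resume_step=None):
    """Toy loop showing where the skip has to happen when resuming from a checkpoint."""
    completed_steps = 0
    for epoch in range(starting_epoch, num_train_epochs):
        for step, batch in enumerate(train_batches):
            # Skip batches that were already trained on, but only in the epoch
            # the checkpoint was saved in, and keep the step counter consistent.
            if resume_step is not None and epoch == starting_epoch and step < resume_step:
                completed_steps += 1
                continue
            completed_steps += 1  # a real loop would run forward/backward here
    return completed_steps


assert training_loop(range(10), num_train_epochs=2, starting_epoch=1, resume_step=4) == 10
```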
04-27-2022 15:52:48
04-27-2022 15:52:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,967
closed
cannot import name 'RegNetModel' from 'transformers'
### System Info ```shell python 3.8 transformers 4.18.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import RegNetModel ### Expected behavior ```shell how to import RegNetModel ? ```
04-27-2022 15:03:18
04-27-2022 15:03:18
RegNet is currently only available from the main branch; it will be included in the next release. You can install it as follows: `pip install git+https://github.com/huggingface/transformers.git`
transformers
16,966
closed
Fix add-new-model-like when model doesn't support all frameworks
# What does this PR do? This fixes the `transformers-cli add-new-model-like` command when the model used as a template is not implemented in all frameworks.
04-27-2022 14:59:31
04-27-2022 14:59:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,965
closed
Move test model folders new
# What does this PR do?
04-27-2022 14:32:39
04-27-2022 14:32:39
transformers
16,964
closed
Update custom_models.mdx
BertModelForSequenceClassification -> [BertForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/bert#transformers.BertForSequenceClassification) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-27-2022 14:30:39
04-27-2022 14:30:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,963
closed
Fix `distributed_concat` with scalar tensor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> If a scalar tensor is passed to `distributed_concat`, the output tensors are correctly converted to one element vectors. However, this is not done for the tensor itself, which causes an exception to be thrown in `dist.all_gather` due to a tensor length mismatch. This PR fixes that. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
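A minimal sketch of the fix described above, with nested-container handling omitted; it assumes an already initialized `torch.distributed` process group:

```python
import torch
import torch.distributed as dist


def distributed_concat_sketch(tensor: torch.Tensor) -> torch.Tensor:
    # all_gather requires identically shaped tensors with at least one dimension
    # on every rank, so promote 0-d (scalar) tensors first -- previously only the
    # output buffers were promoted, not `tensor` itself.
    if len(tensor.shape) == 0:
        tensor = tensor[None]
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    return torch.cat(output_tensors, dim=0)
```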
04-27-2022 13:46:34
04-27-2022 13:46:34
cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
16,962
closed
Can't reproduce training of wav2vec2-large from documentation
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Pretraining a wav2vec-large model using the documentation under [examples/speech-pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) does not work. Running the following code (copy-pasted from the README) gives an error due to `model_path_or_dir` not found: ``` accelerate launch run_wav2vec2_pretraining_no_trainer.py \ --dataset_name=librispeech_asr \ --dataset_config_names clean clean other \ --dataset_split_names train.100 train.360 train.500 \ --output_dir=./test \ --max_train_steps=200000 \ --num_warmup_steps=32000 \ --gradient_accumulation_steps=8 \ --learning_rate=0.001 \ --weight_decay=0.01 \ --max_duration_in_seconds=20.0 \ --min_duration_in_seconds=2.0 \ --model_name_or_path=./ --logging_steps=1 \ --saving_steps=10000 \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=4 \ --adam_beta1=0.9 \ --adam_beta2=0.98 \ --adam_epsilon=1e-06 \ --gradient_checkpointing \ ``` I tried using ´facebook/wav2vec-large-lv60' in `model_name_or_path` but receive the following error: ``` Traceback (most recent call last): File "run_wav2vec2_pretraining_no_trainer.py", line 730, in <module> main() File "run_wav2vec2_pretraining_no_trainer.py", line 572, in main for step, batch in enumerate(train_dataloader): File "/home/ucloud/.local/lib/python3.8/site-packages/accelerate/data_loader.py", line 303, in __iter__ for batch in super().__iter__(): File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__ data = self._next_data() File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 570, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "run_wav2vec2_pretraining_no_trainer.py", line 326, in __call__ sampled_negative_indices = _sample_negative_indices( File "/home/ucloud/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 336, in _sample_negative_indices sampled_indices = np.random.randint(0, high, size=(high + 1, num_negatives)) File "mtrand.pyx", line 748, in numpy.random.mtrand.RandomState.randint File "_bounded_integers.pyx", line 1247, in numpy.random._bounded_integers._rand_int64 ValueError: high <= 0 ``` The demo script trains without issue. Using the parameters from the demo script and changing `model_name_or_path` from 'patrickvonplaten/wav2vec2-base-v2` to ´facebook/wav2vec-large-lv60´ gives the above error. Training on a single T4 GPU (benchmarking purposes) ### Expected behavior ```shell Wav2vec-large pretraining to run. ```
04-27-2022 13:03:20
04-27-2022 13:03:20
Hey @HLasse, Could you increase this parameter: https://huggingface.co/facebook/wav2vec2-large-lv60/blob/main/config.json#L62 to `0.5` and see if it works then? It seems like given the sequence length you are not sampling enough negative targets. Also it'll be really hard / impossible to do a full pretraining on a single T4 GPU<|||||>That works, thanks! > Also it'll be really hard / impossible to do a full pretraining on a single T4 GPU I know - this was mainly to get an estimate of training time on different hardware setups. Danish wav2vec models coming up soon! :)
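A hedged sketch of the suggested override, assuming the parameter behind the link above is `mask_time_prob` (an assumption, since the link only points at a line of the config file):

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

# Assumption: the config attribute referenced above is `mask_time_prob`.
config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-large-lv60")
config.mask_time_prob = 0.5

# Freshly initialized weights, as expected for pretraining from scratch.
model = Wav2Vec2ForPreTraining(config)
```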
transformers
16,961
closed
Add parameter --config_overrides for run_mlm_wwm.py
## WHY - I noticed that the parameter `--config_overrides` is only available in `run_clm.py`, `run_plm.py` and `run_mlm.py` in `examples/pytorch/language-modeling`, but not available in `run_mlm_wwm.py` in `examples/research_projects/mlm_wwm/run_mlm_wwm.py`. - However, I want to train a wwm model from scratch too, so we need this parameter. ## WHAT - Added the parameter `--config_overrides` in `run_mlm_wwm.py`.
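For reference, the trainer-based language-modeling scripts apply the flag roughly like this (a sketch; the checkpoint name is only an example):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-chinese")

# `--config_overrides` receives a comma-separated "key=value" string; values are
# cast to the type of the existing config attribute.
config.update_from_string("hidden_size=256,num_hidden_layers=4")
assert config.hidden_size == 256 and config.num_hidden_layers == 4
```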
04-27-2022 09:54:08
04-27-2022 09:54:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @wlhgtc @LowinLi Please review this PR if you have time thank you.<|||||>Thanks for your PR. Note that we don't maintain research projects, they are pinned to work with a specific version of Transformers. You will need approval from the original author of the script to have this merged :-)<|||||>@sgugger Thank you for reply. However I can not find the earliest history of `run_mlm_wwm.py`... Is @wlhgtc the original author?<|||||>@conan1024hao LGTM And @sgugger can you help merge this PR ? <|||||>Sure thing!
transformers
16,960
closed
Word limit with mBART-50 translation
I'm using the facebook/mbart-large-50-many-to-many-mmt model to translate French texts to English, but it seems the translation is limited to the first 110 words of the input text. Can you confirm, and is there a way to fix this? Thanks in advance.
04-27-2022 09:27:36
04-27-2022 09:27:36
Hey @phayat! Could you please post a code snippet so we could reproduce the issue? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @patil-suraj @phayat, have you found the solution?
transformers
16,959
closed
Add -e flag to some GH workflow yml files
# What does this PR do? Two of the current GitHub Actions workflow YAML files use `pip install [.dev]`. This installs `transformers` in `/home/runner/venv/lib/python3.6/site-packages`, and this is cached. In future job runs, the cache is restored, and that `transformers` version is used - instead of the latest commit, i.e. we want to use `/home/runner/work/transformers/transformers/src`. Without this PR, I have trouble after updating `add_new_model.py` (to change test model folders from `tests/` to `tests/models/`) because the `add_new_model.py` from `/home/runner/venv/lib/python3.6/site-packages` would be used, which would put the test template models under `tests/`. This PR makes sure the latest `transformers` is used by using `pip install -e [.dev]`, and builds a new cache with it. - Add -e flag to some GH workflow yml files - change cache key in order to make the change effective - add a check on `transformers` location
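A minimal sketch of what such a location check could look like (an assumption on my part; the actual workflow step may be a shell one-liner instead):

```python
import os
import sys

import transformers

# On GitHub Actions, GITHUB_WORKSPACE points at the checked-out repository.
expected = os.path.join(os.environ.get("GITHUB_WORKSPACE", ""), "src")
# transformers.__file__ is .../src/transformers/__init__.py -> strip two levels to get .../src
actual = os.path.dirname(os.path.dirname(transformers.__file__))

if actual != expected:
    sys.exit(f"transformers is from {actual} but it should be from {expected}. A fix is required. Stop testing.")
```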
04-27-2022 08:33:25
04-27-2022 08:33:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>The added check will produce something like (if the transformers is not from the expected location) <img width="777" alt="Screenshot 2022-04-27 131320" src="https://user-images.githubusercontent.com/2521628/165506295-901fd17d-17f6-4fb7-8eec-f96b87d63b03.png"> <|||||>after the new cache is built the first time and job completes (pass the check I added too), the next run when we have ``` Cache restored from key: v3-tests_model_like-ce386d6c28d7afcca58dc875de2ef1b7477e8246a0bfdb6ff4de0eb222eafef2 ``` the check failed with ``` transformers is from but it shoud be from /home/runner/work/transformers/transformers/src. A fix is required. Stop testing. ``` which means `pip show transformers` gives empty location! I will try to make it work - I really like to have this test. ~~(But things should work now if we just remove this test)~~ <|||||>Confirmed this (with `-e`) currently not working for the subsequentially run (i.e. cache loaded). I still think there might be some workaround, let me try. Set to draft for now ### currently error ``` Traceback (most recent call last): File "/home/runner/venv/bin/transformers-cli", line 33, in <module> sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')()) File "/home/runner/venv/bin/transformers-cli", line [22](https://github.com/huggingface/transformers/runs/6197315688?check_suite_focus=true#step:7:22), in importlib_load_entry_point for entry_point in distribution(dist_name).entry_points File "/home/runner/venv/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 815, in distribution return Distribution.from_name(distribution_name) File "/home/runner/venv/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 430, in from_name raise PackageNotFoundError(name) importlib_metadata.PackageNotFoundError: No package metadata was found for transformers Error: Process completed with exit code 1. ```
transformers
16,958
closed
Misc. fixes for Pytorch QA examples:
Thank you for the great library! This fixes a number of issues with the PyTorch QA examples. All numbers are either the same or went up. However, there are still some issues which I wasn't able to fix (in one example). Please see the notes and benchmark results below.

# What does this PR do?

1. Fixes evaluation errors popping up when you train/eval on SQuAD v2 (one was newly encountered and one that was previously reported in "Running SQuAD 1.0 sample command raises IndexError" #15401 but not completely fixed).
2. Removes boolean arguments that don't use `store_true`. Please don't use these: **ANY** non-empty string is converted to `True` in this case. This is clearly an **undesired** behavior, which creates a LOT of confusion (see the short sketch below).
3. All no-trainer test scripts now save metric values in the same way (with the right `eval_` prefix), which is consistent with the trainer-based versions.
4. Adds the forgotten `model.eval()` in the no-trainer versions. This improved some results, but not everything (see the discussion at the end, and the sketch below).

Please see the F1 scores and the discussion below.

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. This is a **reduced** PR [as discussed here](https://github.com/huggingface/transformers/pull/16926#issuecomment-1108479241).
- [ ] Did you make sure to update the documentation with your changes? **I believe the examples aren't covered by the documentation.**
- [X] Did you write any new necessary tests? **I trained SQuAD and SQuAD v2 models and compared results (see the discussion below)**, but I am not sure if running more QA tests automatically will be feasible. Do note that the existing "unit test" is very crude and does not permit detecting small regressions in model quality.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this is of most interest to @sgugger, who reviewed a prior version of this PR.

## Comparing old and new performance + some potential issues

Some remaining issues:

1. Despite the fixes & improvements, there is still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed. The metric is F1; the exact-match scores follow the same pattern:

|                                   | previous | new  |
|-----------------------------------|:--------:|:----:|
| squad v1                          | 88.4     | 88.4 |
| squad v1 (no trainer)             | 86.7     | 88.5 |
| squad v2                          | N/A      | 75.2 |
| squad v2 (no trainer)             | N/A      | 77.1 |
| squad v1 (beam search)            | 92.1     | 92.1 |
| squad v1 (beam search no trainer) | 90.2     | 91.0 |
| squad v2 (beam search)            | 83.2     | 83.2 |
| squad v2 (beam search no trainer) | 4.9      | 50.1 |
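To make the boolean-argument point in item 2 concrete, here is a minimal, self-contained sketch of the pitfall; the argument names are made up for illustration and are not necessarily the ones used in the QA scripts.

```python
import argparse

parser = argparse.ArgumentParser()

# Problematic pattern: bool("False") is True, so ANY non-empty string
# passed on the command line turns this flag on.
parser.add_argument("--use_v2_bad", type=bool, default=False)

# Preferred pattern: the flag is False unless it is explicitly present.
parser.add_argument("--use_v2", action="store_true")

args = parser.parse_args(["--use_v2_bad", "False"])
print(args.use_v2_bad)  # True, almost certainly not what the caller intended
print(args.use_v2)      # False
```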
04-27-2022 08:17:45
04-27-2022 08:17:45
_The documentation is not available anymore as the PR was closed or merged._
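Item 4 of the PR description above (the forgotten `model.eval()`) matters because layers such as dropout behave differently in training and evaluation mode, which can quietly depress evaluation metrics. A rough sketch of the intended pattern, with placeholder names and the usual Hugging Face `outputs.logits` convention assumed, might look like this:

```python
import torch

@torch.no_grad()
def evaluate(model, dataloader):
    # Switch off dropout (and put any batch-norm layers in inference mode)
    # before computing metrics; forgetting this line hurts eval scores.
    model.eval()
    all_logits = []
    for batch in dataloader:
        outputs = model(**batch)
        all_logits.append(outputs.logits)
    # Restore training mode afterwards so the next epoch trains normally.
    model.train()
    return torch.cat(all_logits)
```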
transformers
16,956
closed
How to train over VERY LARGE dataset?
### System Info

```shell
I am running into this issue while using the transformers Trainer. The Trainer takes a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there is nothing I can do except use an IterableDataset, which loads samples of data separately and results in low efficiency. I wonder if there are any tricks like sharding in the Hugging Face Trainer. Looking forward to your reply. @sgugger
```

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

None

### Expected behavior

```shell
Some trick like fairseq's sharding of very large datasets: https://fairseq.readthedocs.io/en/latest/getting_started.html
```
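One commonly used workaround (not taken from this thread, so treat it as a suggestion rather than an official answer) is to stream the data with 🤗 Datasets so that shards are read lazily instead of being materialized in RAM. In the sketch below, the file pattern, column name, and tokenizer checkpoint are placeholders.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# streaming=True returns an iterable dataset that reads the JSON shards lazily,
# so the full corpus never has to fit into memory at once.
raw_dataset = load_dataset(
    "json", data_files="data/shard-*.jsonl", split="train", streaming=True
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# map() is applied on the fly as examples are streamed.
tokenized = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])
```

An iterable dataset like this can then be passed to Trainer as `train_dataset`, with `max_steps` set instead of `num_train_epochs` since its length is unknown; depending on the datasets version, calling `.with_format("torch")` on it first may also be needed.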
04-27-2022 07:47:53
04-27-2022 07:47:53
transformers
16,955
closed
config.json not found!
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` ### Who can help? @sgugger I am training a NER model following tutorial: ```python from transformers import TrainingArguments args = TrainingArguments( "saved_models_bert-finetuned-ner-100examples-with-aug", learning_rate=2e-5, num_train_epochs=100, weight_decay=0.01, per_device_train_batch_size = 32, per_device_eval_batch_size = 32, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end = True, metric_for_best_model = 'f1' ) from transformers import Trainer trainer = Trainer( model=model, args=args, train_dataset=new_training_dataset, eval_dataset=tokenized_datasets["validation"].select(range(100)), data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=tokenizer, ) trainer.train() ``` Then I got this error: ```shell --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Input In [28], in <cell line: 14>() 1 from transformers import Trainer 3 trainer = Trainer( 4 model=model, 5 args=args, (...) 11 tokenizer=tokenizer, 12 ) ---> 14 trainer.train() File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1512, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1509 self.control.should_training_stop = True 1511 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control) -> 1512 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1514 if DebugOption.TPU_METRICS_DEBUG in self.args.debug: 1515 if is_torch_tpu_available(): 1516 # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1628, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 1625 self._report_to_hp_search(trial, epoch, metrics) 1627 if self.control.should_save: -> 1628 self._save_checkpoint(model, trial, metrics=metrics) 1629 self.control = self.callback_handler.on_save(self.args, self.state, self.control) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1700, in Trainer._save_checkpoint(self, model, trial, metrics) 1697 self.store_flos() 1699 output_dir = os.path.join(run_dir, checkpoint_folder) -> 1700 self.save_model(output_dir, _internal_call=True) 1701 if self.deepspeed: 1702 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed 1703 # config `stage3_gather_16bit_weights_on_model_save` is True 1704 self.deepspeed.save_checkpoint(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2128, in Trainer.save_model(self, output_dir, _internal_call) 2125 self.deepspeed.save_checkpoint(output_dir) 2127 elif self.args.should_save: -> 2128 self._save(output_dir) 2130 # Push to the Hub when `save_model` is called by the user. 
2131 if self.args.push_to_hub and not _internal_call: File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2180, in Trainer._save(self, output_dir, state_dict) 2178 torch.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME)) 2179 else: -> 2180 self.model.save_pretrained(output_dir, state_dict=state_dict) 2181 if self.tokenizer is not None: 2182 self.tokenizer.save_pretrained(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py:1352, in PreTrainedModel.save_pretrained(self, save_directory, save_config, state_dict, save_function, push_to_hub, max_shard_size, **kwargs) 1350 # Save the config 1351 if save_config: -> 1352 model_to_save.config.save_pretrained(save_directory) 1354 # Save the model 1355 if state_dict is None: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:440, in PretrainedConfig.save_pretrained(self, save_directory, push_to_hub, **kwargs) 437 # If we save using the predefined names, we can load using `from_pretrained` 438 output_config_file = os.path.join(save_directory, CONFIG_NAME) --> 440 self.to_json_file(output_config_file, use_diff=True) 441 logger.info(f"Configuration saved in {output_config_file}") 443 if push_to_hub: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:805, in PretrainedConfig.to_json_file(self, json_file_path, use_diff) 794 def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True): 795 """ 796 Save this instance to a JSON file. 797 (...) 803 is serialized to JSON file. 804 """ --> 805 with open(json_file_path, "w", encoding="utf-8") as writer: 806 writer.write(self.to_json_string(use_diff=use_diff)) FileNotFoundError: [Errno 2] No such file or directory: 'saved_models_bert-finetuned-ner-100examples-with-aug/checkpoint-6/config.json' ``` This is so wired! From my understanding, the config.json file should be written, so such error shouldn't occur. ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am not sure if this can be reproduced in another machine. btw, I am using A100. ### Expected behavior ```shell A normal training... ```
04-27-2022 06:55:42
04-27-2022 06:55:42
Sorry to open an issue here; I have already asked this question in the forum but haven't received any response.<|||||>This means the folders were not created. Are you sure you point to a location where the Python script can write?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
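One quick way to act on the suggestion above (this sketch is not part of the original thread) is to verify, before launching training, that the output directory can actually be created and written to by the training process; the directory name below is the one from the report.

```python
import os

output_dir = "saved_models_bert-finetuned-ner-100examples-with-aug"

# Create the directory up front and make sure the process can write into it;
# if this fails, Trainer's checkpoint saving would fail in a similar way.
os.makedirs(output_dir, exist_ok=True)
probe = os.path.join(output_dir, ".write_probe")
with open(probe, "w", encoding="utf-8") as f:
    f.write("ok")
os.remove(probe)
print(f"{os.path.abspath(output_dir)} exists and is writable")
```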
transformers
16,954
closed
Initialization
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
04-27-2022 03:08:54
04-27-2022 03:08:54