repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 17,757 | closed | Problem during the training with the parameter train_dataset. (Dict/Tensor problem) | Hi there, I'm following the [tutorial](https://huggingface.co/blog/fine-tune-vit), trying to fine-tune the net on Stanford Dog Dataset.
I am facing this problem: once I try to do `trainer.train()`,
this error appears:
```/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
---Running training
Num examples = 4160
Num Epochs = 4
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 1040
ValueError Traceback (most recent call last)
[<ipython-input-32-0f10542a3dd8>](https://86s6jsm55e-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220615-060045-RC00_455067423#) in <module>()
1 # RESULTS
2
----> 3 train_results = trainer.train()
4 trainer.save_model()
5 trainer.log_metrics("train", train_results.metrics)
13 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/vit/feature_extraction_vit.py](https://86s6jsm55e-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220615-060045-RC00_455067423#) in __call__(self, images, return_tensors, **kwargs)
125 if not valid_images:
126 raise ValueError(
--> 127 "Images must of type `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` (single example), "
128 "`List[PIL.Image.Image]`, `List[np.ndarray]` or `List[torch.Tensor]` (batch of examples)."
129 ) ```
The error is the following:
**```ValueError: Images must of type `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` (single example), `List[PIL.Image.Image]`, `List[np.ndarray]` or `List[torch.Tensor]` (batch of examples).```**
I think the problem is in the definition of `dataset['train']`, which isn't an iterable or something like that: do you have any recommendation? I tried literally every kind of type change but still cannot train it!!!
The data that I have are np.arrays with dim (256, 256, 3) that I'm processing with:
```python
def transform(arr_x=x_te, arr_y=y_te):
    inputs = extractor([x for x in arr_x], return_tensors='pt')
    inputs['labels'] = [y for y in arr_y]
    return inputs  # <class 'transformers.feature_extraction_utils.BatchFeature'>
```
After that I create:
```
dd = datasets.DatasetDict({"train": Dataset.from_dict({'pixel_values': arr_transf_te['pixel_values'], 'labels':arr_transf_te['labels'] })})
```
And then I do the transform:
```
dataset = dd.with_transform(transform)
```
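(For reference, a rough sketch of the kind of batch-level function `with_transform` expects — it calls the function with a dict of columns for each sampled batch. Since `pixel_values` were already extracted above, the sketch only converts the stored values back to tensors; treat the column names as assumptions taken from my snippets, not a verified fix:)
```python
import torch

def transform_batch(batch):
    # `batch` is a dict of lists for the sampled rows, e.g. {'pixel_values': [...], 'labels': [...]}
    return {
        'pixel_values': torch.tensor(batch['pixel_values']),
        'labels': torch.tensor(batch['labels']),
    }

dataset = dd.with_transform(transform_batch)
```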
Once I try to:
```
trainer = Trainer(
model=model,
args=training_args,
data_collator=collate_fn,
compute_metrics=compute_metrics,
train_dataset = dataset['train'] , # type datasets.arrow_dataset.Dataset
tokenizer = extractor,
)
```
In:
```
train_results = trainer.train()
```
I get the error that I show you above
This is my [project](https://colab.research.google.com/drive/1CueCyVjyh6sRJF2gcs0WdF0u33rtzmLI?usp=sharing): if you can look at it would be AMAZING! | 06-17-2022 14:43:33 | 06-17-2022 14:43:33 | @NielsRogge have you seen this error? This comes from the ViT fine-tuning tutorial.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,756 | closed | GPT-NeoX missing Tokenizer | ### System Info
```shell
- `transformers` version: 4.21.0.dev0
- Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, deepspeed
```
### Who can help?
@patil-suraj @SaulLu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I try to load the model using the following script, it hangs and tells you that the tokenizer GPTNeoXTokenizer does not exist.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
This does not happen when using the "Fast" version.
### Expected behavior
```shell
Model should work, tokenizer should load.
```
| 06-17-2022 13:37:47 | 06-17-2022 13:37:47 | Error I am receiving:
```
ImportError: cannot import name 'GPTNeoXTokenizer' from 'transformers' (/opt/conda/lib/python3.8/site-packages/transformers/__init__.py)
```<|||||>Just tried your code sample and it works fine on my side. Are you sure you have `tokenizers` installed in your env? GPT-Neo-X does not have a slow tokenizer, so it requires this library.<|||||>> Just tried your code sample and it works fine on my side. Are you sure you have `tokenizers` installed in your env? GPT-Neo-X does not have a slow tokenizer, so it requires this library.
Just checked:
tokenizers 0.12.1<|||||>Only the fast tokenizer is available for GPT-NeoX-20B.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, @mrseeker did you resolve your issue? I have same problem, trying to get working version of transformers...<|||||>I have the exact same issue.
```
│ /usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:699 in │
│ from_pretrained │
│ │
│ 696 │ │ │ │ tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) │
│ 697 │ │ │ │
│ 698 │ │ │ if tokenizer_class is None: │
│ ❱ 699 │ │ │ │ raise ValueError( │
│ 700 │ │ │ │ │ f"Tokenizer class {tokenizer_class_candidate} does not exist or is n │
│ 701 │ │ │ │ ) │
│ 702 │ │ │ return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *input
```<|||||>For those having an issue with this, the resolution is as follows:
Use AutoTokenizer**Fast**, AutoTokenizer is not supported by NeoX.<|||||>if you use fastchat,modify fastchat/model/mode_adapter.py like this
```python
def load_model(self, model_path: str, from_pretrained_kwargs: dict):
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, low_cpu_mem_usage=True, trust_remote_code=True, **from_pretrained_kwargs
    )
    return model, tokenizer
```
will fix this issue |
transformers | 17,755 | closed | CLI: use hub's `create_commit` | # What does this PR do?
This PR changes the method to open PRs in `pt-to-tf` to the permanent method defined by the hub -- `create_commit`. It also updates the commit description (it now supports line breaks 🎉 ) and adds a flag to add extra description (so I can programmatically tag the right HF maintainer in certain repos)
We can see an example PR [here](https://huggingface.co/joaogante/test_text/discussions/9) -- @Rocketknight1 confirms that the notification got to him!
After this PR gets merged, we can announce `pt-to-tf`, as it no longer depends on unreleased functionality 🚀 | 06-17-2022 13:11:49 | 06-17-2022 13:11:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Some CI runs have hub 0.7.0 cached, figuring how to best update it |
transformers | 17,754 | closed | Text classification pipeline outputs differ with 4.20 | ### System Info
```shell
transformers 4.20.0
```
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
# 4.19.4
```python
from transformers import pipeline
nlp = pipeline("text-classification")
nlp("happy", return_all_scores=True)
```
Output:
```
[[{'label': 'NEGATIVE', 'score': 0.0001246821484528482}, {'label': 'POSITIVE', 'score': 0.9998753070831299}]]
```
# 4.20.0
```python
from transformers import pipeline
nlp = pipeline("text-classification")
nlp("happy", return_all_scores=True)
```
Output:
```
[{'label': 'NEGATIVE', 'score': 0.0001246821484528482}, {'label': 'POSITIVE', 'score': 0.9998753070831299}]
```
Running with top_k=None also produces a single list (only difference is labels are ordered by score desc).
```python
from transformers import pipeline
nlp = pipeline("text-classification")
nlp("happy", top_k=None)
```
Output:
```
[{'label': 'POSITIVE', 'score': 0.9998753070831299}, {'label': 'NEGATIVE', 'score': 0.0001246821484528482}]
```
4.19.4 returns a list of lists when return_all_scores=True. 4.20.0 only produces a single list.
It looks like this logic changed the outputs in the `__call__` method.
```python
if isinstance(args[0], str) and isinstance(result, dict):
# This pipeline is odd, and return a list when single item is run
return [result]
else:
return result
```
Previously, it was this:
```python
if isinstance(args[0], str):
# This pipeline is odd, and return a list when single item is run
return [result]
else:
return result
```
### Expected behavior
```shell
When passing a single text element, the pipeline up to 4.20 would return a list. If this change was expected, I can work with it but figured it was worth bringing to your attention in case it wasn't intentional.
```
| 06-17-2022 12:53:44 | 06-17-2022 12:53:44 | Just wanted to check in on this and see if it's considered an issue or the new way the pipeline works. <|||||>@davidmezzetti Didn't see this issue, but I didn't see the regression.
Should have been fixed here https://github.com/huggingface/transformers/pull/17906. Sorry it had time to ship with `4.20`. It will be reverted back in the next release.
We are really keen to not break anything format wise while we are in `v4`. But for `v5` harmonizing return types of pipelines is definitely on the agenda I want to push (some return lists, lists of lists, but we're not super consistent across pipelines.).
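In the meantime, a small defensive sketch (not an official recommendation) that tolerates both shapes:
```python
result = nlp("happy", return_all_scores=True)
# 4.20.0 returns a flat list of dicts for a single string; 4.19.x returns a list of lists.
# Normalize to the 4.19-style nesting either way:
if result and isinstance(result[0], dict):
    result = [result]
```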
<|||||>Thank you for responding and the update!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,753 | closed | Attempt to change Push CI to workflow_run | # What does this PR do?
This is a fix for #17692 :
- Use the correct properties to get the information (branch/commit-SHA, etc.) for both `push` and `workflow_run` event type
- Even if we consider only the `workflow_run` event triggered by a push to `main` branch, we still need to use `github.event.workflow_run.head_sha` to get the correct SHA (otherwise, in the case where 2 PRs merged into `main` in a very short time period, the first one will get the latest commit SHA)
- Currently, the push CI could still be triggered by `push` event if the branch is **NOT** `main`. The main purpose is for testing a particular branch. This is why I need to consider both event type.
**NOTE**: I have verified the change extensively in my own (dummy) repo. However, the part regarding the actual CI tests + the part of slack report are not verified. **The part regarding preparing the necessary information for slack report is verified.** Since the `workflow_run` could be launched only when the PR is merged into `main`, I hope there is no other unexpected issue in this PR 🙏.
| 06-17-2022 12:38:42 | 06-17-2022 12:38:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I will merge this tomorrow (so unlikely to have other PRs merged), and revert it if anything breaks (hopefully not, I am running out of ideas 😄 )<|||||>This works well, one good example to check is
[TF: BART compatible with XLA generation](https://github.com/huggingface/transformers/commit/132402d752044301b37e54405832738b16f49df6) |
transformers | 17,752 | closed | `Trainer` has a weird way of determining whether a TPU device is present | ### System Info
```shell
transformer > 4.15.0
Vertex AI Notebook with Pytorch 1.11 using A100
```
### Who can help?
I am very unlucky to have encountered this issue, where a TPU device is assumed to be present on the machine when in fact it isn't.
It prompted the error `RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:274 : Missing XLA configuration` and hinted to me that something related to TPU is causing the error.
After some debugging, I realized that
https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/src/transformers/utils/import_utils.py#L395
is simply checking whether `torch_xla` is installed, as opposed to actually checking whether a TPU device is present. I managed to get it to work by simply removing the `torch_xla` package. Still, I also find it bizarre that there is no way to manually turn off TPU training. I hope that the library can be made to actually check for the presence of the TPU.
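(Roughly the kind of check I have in mind — just a sketch, and whether probing `xm.xla_device()` is the right way to detect a configured TPU is an assumption on my part:)
```python
import importlib.util

def is_torch_tpu_available():
    if importlib.util.find_spec("torch_xla") is None:
        return False
    try:
        import torch_xla.core.xla_model as xm
        # actually probe for an XLA device instead of only checking that the package imports
        return xm.xla_device() is not None
    except RuntimeError:  # e.g. "Missing XLA configuration" when no TPU is set up
        return False
```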
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Launch an A100 notebook on GCP Vertex AI and train any model using `Trainer`.
### Expected behavior
```shell
No error.
```
| 06-17-2022 12:14:29 | 06-17-2022 12:14:29 | I'm not sure what the bug is: when you have `torch_xla` installed, the `Trainer` will use it (the terminology uses TPU internally but it works on GPU and CPU as well). Could you give us a reproducer of your error?<|||||>@sgugger
Thank you for the response! Below is an example how I reproduced the problem
The only difference is the presence of the `torch_xla` package, installed via
`!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl`
Models, datasets, nor the versions of the installed packages matter.
## Error
```
!pip list | grep torch
pytorch-lightning 1.5.7
torch 1.11.0
torch-xla 1.11
torchmetrics 0.6.2
torchvision 0.10.0+cu111
```
### Error message
`RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration`
In certain scenarios, which I don't know how to reproduce right now, it also gives a error resembling `package 'xm' is not found`.
## No error
The template code gives a known error as expected`TypeError: forward() got an unexpected keyword argument 'labels'` . It's ok to ignore this because this is a piece of crude code.
```
!pip uninstall torch_xla -y
!pip list | grep torch
Found existing installation: torch-xla 1.11
Uninstalling torch-xla-1.11:
Successfully uninstalled torch-xla-1.11
pytorch-lightning 1.5.7
torch 1.11.0
torchmetrics 0.6.2
torchvision 0.10.0+cu111
```
## Training
``` python
from datasets import load_dataset
from transformers import (
AutoModel, AutoTokenizer, TrainingArguments, Trainer
)
import gc
import torch
model_name = "distilroberta-base"
ds = load_dataset('rotten_tomatoes', split='train')
default_train_args = {
"learning_rate": 6e-5,
"per_device_train_batch_size": 64,
"per_device_eval_batch_size": 128,
"num_train_epochs": 7,
"weight_decay": 1e-6,
"evaluation_strategy": "steps",
"eval_steps": 50,
"save_strategy": "epoch",
"remove_unused_columns": False,
}
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
trainer = Trainer(model=model, args=TrainingArguments("./", **default_train_args), train_dataset=ds)
trainer.train()
```<|||||>I have reproduced this error in multiple environments, here is one of the example, a Vertex AI Notebook.
```
Environment version
M82
Machine type
n1-standard-16 (16 vCPUs, 60 GB RAM)
GPU
NVIDIA Tesla T4 x 1
```<|||||>Should be fixed via https://github.com/huggingface/transformers/pull/17802<|||||>Thank you all so much!<|||||>I get the same error while training a SWIN transformer. It's probably related to the pytorch version. I had a older GCP VM with PyTorch 1.9 and its working fine in it
But with newer Vertex VMs (Pytorch 1.11) it's giving me the same error.
Uninstalling torch-xla does the work. <|||||>> I get the same error while training a SWIN transformer. It's probably related to the pytorch version. I had a older GCP VM with PyTorch 1.9 and its working fine in it
> But with newer Vertex VMs (Pytorch 1.11) it's giving me the same error.
>
> Uninstalling torch-xla does the work.
Yes, exactly the same scenario. The main reason is that Google's Pytorch 1.11 image is now pre-installed with `torch_xla` in addition to `Trainer`'s unconventional way of checking TPU devices.<|||||>Can you try with installing transformers from git to see if the problem still exists?
E.g.:
`pip install git+https://github.com/huggingface/transformers`<|||||>@muellerzr
Hi, I ran the same code on the previously mentioned notebook instance with both the dev version `transformers` and `torch_xla` installed, but it stuck indefinitely instead of prompting the expected error `TypeError: forward() got an unexpected keyword argument 'labels'`. The only way I can pause this is restarting/shutdown the notebook.
<img width="869" alt="image" src="https://user-images.githubusercontent.com/42510606/176066277-a902bb69-a2a2-466a-ba65-0eabb154c428.png">
|
transformers | 17,751 | closed | Use multiple workers for DataLoader at prediction step for Trainer | # What does this PR do?
Fixes #17749 by adding a parameter in the DataLoader init call of the test data_loader, so that we can use multiple workers for data preparation during prediction time.
@sgugger, I tag you because this is a Trainer related PR | 06-17-2022 11:23:26 | 06-17-2022 11:23:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,750 | closed | Improve performance docs | # What does this PR do?
As discussed with @stas00 this PR does the following things:
- adds files for missing sections so contributors can better find the place to add content.
- adds a disclaimer at the beginning stating that a lot of general training information is in the single GPU training section
- fixes the link of CPU inference
Looking at the ToC I was wondering whether we should add subsections like it is done for the tasks to make the main ToC a bit slimmer. Otherwise we have the main performance docs (the entry point) there plus all the other sections (~8-10). What do you think?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 06-17-2022 10:28:44 | 06-17-2022 10:28:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sounds good @sgugger, I added the "Coming soon" and restructured the ToC.<|||||>Communications are difficult.
what I proposed is to add normal documents - not "Coming Soon"
inside those documents they should all explicitly point the user to read the first document that is already filled out and that we will expand this other currently mostly empty doc with the missing material.
In other words e.g,, perf_infer_gpu_many.mdx should have:
1. read perf_train_gpu_one.mdx first
2. this document will be completed soon.
If the docs remain "Coming Soon" nobody will read them and will miss out on the already rich performance docs we have.
And the impetus for this change was that we went from a complete solution - all performance notes in one doc, to very incomplete solution, making it look like we only have advise for those with 1 gpu and doing training.
The original discussion back in winter was that all performance docs will be filled out, but it was dropped after the first document and no signs of new docs coming any time soon. So this proposal was my attempt to rescue the situation.<|||||>Ok I added references to each document and a small text explaining what will come there. Is that what you had in mind @stas00? |
transformers | 17,749 | closed | Test DataLoader never uses multiple workers | ### Feature request / Bug Fix
I realized that in the `get_test_dataloader` method of the `Trainer` class, for datasets that are not instances of `torch.utils.data.IterableDataset`, the `num_workers` argument is not given to the `DataLoader` init call.
https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L936-L943
Whereas it is the case just above if `test_dataset` is an instance of `torch.utils.data.IterableDataset`:
https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L925-L931
Moreover, the DataLoader outputted in `get_eval_dataloader` is initialized with the num_workers argument
https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L888-L896
I know this is not really a feature request, but I do not think it is a bug either, sorry if I have posted in the wrong place. I read that when talking about the trainer class, @sgugger was generally the one to ping.
### Motivation
Adding a line with `num_workers=self.args.dataloader_num_workers,` in the `DataLoader` init call of the `get_test_dataloader` method could speed up the prediction steps by using multiple workers to load the data.
https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L936-L943
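(For illustration, a sketch of what the changed call could look like — the surrounding argument names mirror the eval dataloader referenced above, so treat the exact signature as an assumption:)
```python
# inside Trainer.get_test_dataloader(), for the non-iterable dataset branch
return DataLoader(
    test_dataset,
    sampler=test_sampler,
    batch_size=self.args.eval_batch_size,
    collate_fn=self.data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.dataloader_num_workers,  # the proposed addition
    pin_memory=self.args.dataloader_pin_memory,
)
```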
### Your contribution
I have submitted #17751 | 06-17-2022 10:00:13 | 06-17-2022 10:00:13 | |
transformers | 17,748 | closed | Add easy extensibility of `logits_processor` to `generate` | ### Feature request
It is easy to change the behavior of `generate` with the config.
However, `logits_processor` is not read from the config, but only received [as a parameter](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L874).
Passing it as a parameter is hard to accomplish because both [`Trainer`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2798) and [`Seq2SeqTrainer`](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/seq2seq_trainer.py#L223) don't tunnel this parameter.
I guess my feature request is to either have it easily read from the config, or easily sent as a parameter to the different `Trainer` and `Seq2SeqTrainer` methods, such as `predict`.
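(For context, a small sketch of the gap — the `generate` call uses the existing API and assumes an already-loaded `model` and `input_ids`, while the commented `trainer.predict` line is the hypothetical extension being requested, not something that exists today:)
```python
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

processors = LogitsProcessorList(
    [MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id)]
)

# Works today, but only when calling generate yourself:
outputs = model.generate(input_ids, logits_processor=processors)

# What this request asks for (hypothetical API):
# predictions = trainer.predict(test_dataset, logits_processor=processors)
```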
### Motivation
Generation of text is known to have many problems, as well described in this article https://huggingface.co/blog/how-to-generate .
As we advance towards different types of textual inputs which are not simply natural language texts, such as semi-structured texts (e.g., linearized graphs), the ability to research new beam search ideas for different use cases is paramount.
We run into this need in two of my latest research projects. The problem is that this is not always the main contribution, and if it becomes too complicated (e.g., stop using `Trainer` abstractions), researchers might not follow through.
### Your contribution
I could submit a PR, but would like to first discuss its design. | 06-17-2022 09:10:10 | 06-17-2022 09:10:10 | Hey @eranhirs 👋 On the `generate` end, if I recall correctly (I can't find the discussion), we want to steer away from controlling generation from the `config` file -- cc @patrickvonplaten.
Maybe we can pass generation kwargs to `Seq2SeqTrainer` -- WDYT @sgugger?<|||||>Yes, we could add all generation kwargs to `predict` and `evaluate` in the `Seq2SeqTrainer`.<|||||>@eranhirs would you like to open a PR? :D <|||||>Perfect, thanks! Yes I will open a PR 👍 <|||||>Could we also pass generate arguments to [Seq2SeqTrainerArguments](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/training_args_seq2seq.py#L28) directly? This would be very cool so that we could use things like `logits_processor` in the validation step!
cc @gante @sgugger <|||||>`Seq2SeqTrainingArguments` is only there to provide an easy way to have arguments in the CLI, and you can't pass instances of logits processors as CLI arguments, so I don't see what adding this here would add.<|||||>> `Seq2SeqTrainingArguments` is only there to provide an easy way to have arguments in the CLI, and you can't pass instances of logits processors as CLI arguments, so I don't see what adding this here would add.
What about adding the argument to `Seq2SeqTrainer` then? When `predict_with_generate=True`, we should be able to pass all the `generate` arguments we want, right?<|||||>Or you could just pass them along when you call `evaluate` and `predict`. There are already 94 arguments in `Seq2SeqTrainingArguments`.<|||||>What if I want to early stop with the metric being calculated with `logits_processor`, for example?<|||||>You should then use Accelerate to be able to customize the training loop to your needs :-) |
transformers | 17,747 | closed | Problem with GPU | ### System Info
Hello, I'm using Windows 11, with an RTX 2080 and Python 3.9. The package configuration is based on PyTorch 1.11 + transformers 4.19.0, and when I try to use the transformer in my code with the GPU I receive an error that the data I pass can't be converted to a numpy array. I have posted the problem on the forum at this link: https://discuss.huggingface.co/t/vit-problem-with-gpu-usage-require-image-to-be-numpy/18678/2
I think that is a bug, because the input of the network is on the GPU and I do not understand why it is necessary to convert it to a numpy array.
How can I solve this bug?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use images from ShapeNet.
The images are extracted from a saved tensor.
### Expected behavior
```shell
I expect that when the GPU is used there are no problems converting the input to the GPU.
```
| 06-17-2022 06:56:41 | 06-17-2022 06:56:41 | You could try to upgrade the **numpy** version with `pip install numpy --upgrade`<|||||>I have updated the numpy package to the latest one (1.23.0) but It not work. I don't understand where is the problem.<|||||>Are you using the correct version of pytorch that suits your machine? I have no other idea why it might not work...<|||||>I using torch 1.11 for windows with gpu support<|||||>I have no clue then<|||||>Hi @marcomameli1992
Please don't put the issue description inside **\`\`\`shell ... \`\`\`** block, as this makes it hard to read 🙏 .
Regarding the issue, it would be much easier if you can provide a **minimal** code snippet to reproduce the issue. Currently, we don't really know what goes wrong without detailed information.
<|||||>Dear I use the code here on [github](https://github.com/marcomameli1992/p2m) I make it public for simplicity.
The dataset that I use is from that [link](https://drive.google.com/file/d/1Z8gt4HdPujBNFABYrthhau9VZW10WWYe/view?usp=sharing)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @marcomameli1992 Sorry for this late reply.
In order for us to help (if you still need help), it would be very helpful to provide a **minimal** code snippet that reproduces the issue, which could be run directly.
With only a link to a GitHub repository page, we don't really know how to use it, and it also makes the debugging more difficult.
<|||||>With a very quick look, it looks like you create `pool = FeaturePooling(im)` which doesn't transform the image. Only when `pred_points = model_gcn(graph, pool)` which invokes
https://github.com/marcomameli1992/p2m/blob/7e64071ce2a701044cab58f3c0e7877562157ab1/model/mesh_network.py#L31
will perform the data transformation. I believe this is not the usual good practice. You could try to perform the data transformation (feature extraction) - on CPU (with `numpy` or `torch`), then feed the extracted features into a model (after put them on GPU). |
transformers | 17,746 | closed | Is there Any difference of performance when finetuning bert use the huggingface or the google official code? | Hi,
I tend to finetune BERT with a simple text classification task.
However, I got different results when using the huggingface library (torch 1.8.1+cu111) and google's official code (tf 1.15).
I wonder if there is any optimization in huggingface for fine-tuning bert?
By the way, I believe that I use the same hyper-parameters. But I got the higher performance using the huggingface library | 06-17-2022 03:57:31 | 06-17-2022 03:57:31 | Hi @Doragd 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>all right. Thanks anyway. @gante |
transformers | 17,745 | closed | GPT-NEOX RuntimeError | Hi, when I ran the model GPT-NEOX, I got the "RuntimeError: batch1 dim 2 must match batch2 dim1" in modeling_gpt_neox.py, line 212.
So I tried to debug and fix this problem, and I found the code "present = None if use_cache else (key, value)" in modeling_gpt_neox.py, line 146.
Is that logic wrong? And should the correct code be "present = None if not use_cache else (key, value)"?
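(Equivalently, the fix I'm proposing can be written as the small sketch below — same logic, just spelled positively:)
```python
# keep the (key, value) cache entry only when use_cache is requested
present = (key, value) if use_cache else None
```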
| 06-17-2022 03:54:29 | 06-17-2022 03:54:29 | Hey @yupei9 - great catch! I think you're 100% right - do you want to open a PR to fix it? Also cc @sgugger <|||||>Ahah sorry, I had a PR ready already since I wanted to test if the fix worked. |
transformers | 17,744 | closed | Fix `top_k_top_p_filtering` having unintended behavior | # What does this PR do?
- Fix `top_k_top_p_filtering` not passing `filter_value` to
`TopPLogitsWarper` causing any top-p filtered logits to be -inf
instead of specified value
- Add corresponding test
@patrickvonplaten
| 06-17-2022 03:21:00 | 06-17-2022 03:21:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for the fix @unifyh! |
transformers | 17,743 | closed | Bump notebook from 6.4.10 to 6.4.12 in /examples/research_projects/lxmert | Bumps [notebook](http://jupyter.org) from 6.4.10 to 6.4.12.
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-17-2022 00:13:22 | 06-17-2022 00:13:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,742 | closed | Bump notebook from 6.4.10 to 6.4.12 in /examples/research_projects/visual_bert | Bumps [notebook](http://jupyter.org) from 6.4.10 to 6.4.12.
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-17-2022 00:11:48 | 06-17-2022 00:11:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,741 | closed | InvalidGitRepositoryError while running distillation train example | ### System Info
```shell
(akalia) akalia@data-workstation-akalia1-gpu-data-10:~/pretrained_rembert/distillation$ python train.py --student_type distilbert --student_config training_configs/distilbert-base-uncased.json --teacher_type bert --teacher_name bert-base-uncased --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm --freeze_pos_embs --dump_path serialization_dir/my_first_training --data_file data/binarized_text.bert-base-uncased.pickle --token_counts data/token_counts.bert-base-uncased.pickle --force
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Initializing GPUs
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Number of nodes: 1
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Node ID : 0
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Local rank : 0
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - World size : 1
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - GPUs per node : 1
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Master : True
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Multi-node : False
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Multi-GPU : False
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Hostname : data-workstation-akalia1-gpu-data-10.dm.vpc
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Experiment will be dumped and logged in serialization_dir/my_first_training
06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Param: Namespace(force=True, dump_path='serialization_dir/my_first_training', data_file='data/binarized_text.bert-base-uncased.pickle', student_type='distilbert', student_config='training_configs/distilbert-base-uncased.json', student_pretrained_weights=None, teacher_type='bert', teacher_name='bert-base-uncased', temperature=2.0, alpha_ce=5.0, alpha_mlm=2.0, alpha_clm=0.0, alpha_mse=0.0, alpha_cos=1.0, mlm=True, mlm_mask_prop=0.15, word_mask=0.8, word_keep=0.1, word_rand=0.1, mlm_smoothing=0.7, token_counts='data/token_counts.bert-base-uncased.pickle', restrict_ce_to_mask=False, freeze_pos_embs=True, freeze_token_type_embds=False, n_epoch=3, batch_size=5, group_by_size=True, gradient_accumulation_steps=50, warmup_prop=0.05, weight_decay=0.0, learning_rate=0.0005, adam_epsilon=1e-06, max_grad_norm=5.0, initializer_range=0.02, fp16=False, fp16_opt_level='O1', n_gpu=1, local_rank=0, seed=56, log_interval=500, checkpoint_interval=4000, n_nodes=1, node_id=0, global_rank=0, world_size=1, n_gpu_per_node=1, multi_gpu=False, is_master=True, multi_node=False)
Traceback (most recent call last):
File "/home/akalia/pretrained_rembert/distillation/train.py", line 324, in <module>
main()
File "/home/akalia/pretrained_rembert/distillation/train.py", line 245, in main
git_log(args.dump_path)
File "/home/akalia/pretrained_rembert/distillation/utils.py", line 40, in git_log
repo = git.Repo(search_parent_directories=True)
File "/home/akalia/anaconda3/envs/akalia/lib/python3.9/site-packages/git/repo/base.py", line 224, in __init__
self.working_dir: Optional[PathLike] = self._working_tree_dir or self.common_dir
File "/home/akalia/anaconda3/envs/akalia/lib/python3.9/site-packages/git/repo/base.py", line 307, in common_dir
raise InvalidGitRepositoryError()
git.exc.InvalidGitRepositoryError
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
python train.py \
--student_type distilbert \
--student_config training_configs/distilbert-base-uncased.json \
--teacher_type bert \
--teacher_name bert-base-uncased \
--alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm \
--freeze_pos_embs \
--dump_path serialization_dir/my_first_training \
--data_file data/binarized_text.bert-base-uncased.pickle \
--token_counts data/token_counts.bert-base-uncased.pickle \
--force
```
### Expected behavior
```shell
A distilled model will be generated
```
| 06-16-2022 22:05:13 | 06-16-2022 22:05:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, did you resolve this error?<|||||>> Hi, did you resolve this error?
I haven't. I am still waiting for a reply on this.<|||||>>
OK, I tried to upgrade the version of gitpython and gitdb2, but it doesn't work.<|||||>Hello, has this issue been resolved? |
transformers | 17,740 | closed | Add UL2 (just docs) | # What does this PR do?
Adds docs for UL2: https://huggingface.co/google/ul2 -> important model that deserves its own doc page IMO
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-16-2022 20:02:01 | 06-16-2022 20:02:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Shouldn't we also add a conversion script? Or isn't this required?<|||||>> Shouldn't we also add a conversion script? Or isn't this required?
It should be the same as: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py
cc @DanielHesslow <|||||>Yeah I based the conversion off of that script and mostly added a bunch of hacks to work around limitations of my local system. At some point there should probably be a stable conversion script from t5x, but for now the script above is good enough.<|||||>> Thanks for adding this! I'm curious why the empty module? Does one of the script complain if we don't add it?
Copied it more or less from dialogpt after I noticed one check repo test was failing. Re-iterated and it looks like the only thing that is required is that the name is in `configuration_auto.py` - thanks for double-checking here! |
transformers | 17,739 | open | [WIP] DETR TF implementation | # What does this PR do?
Add TF implementation of DETR model
Dependent on the TF implementation of ResNets being merged in to provide a backbone: https://github.com/huggingface/transformers/pull/17427
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Please tag fewer than 3 people. | 06-16-2022 19:55:05 | 06-16-2022 19:55:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17739). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,738 | closed | deprecate is_torch_bf16_available | This is a follow up to https://github.com/huggingface/transformers/pull/17734 where @pacman100 discovered that the IPEX PR made `is_torch_bf16_available` ambiguous, as it went from gpu-only checks to cpu or gpu which is undefined behavior.
So this PR deprecates this function in favor of the very specific `is_torch_bf16_gpu_available` and `is_torch_bf16_cpu_available` that were added in https://github.com/huggingface/transformers/pull/17734
@sgugger | 06-16-2022 18:01:41 | 06-16-2022 18:01:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>does it look good now, Sylvain? <|||||>Yes, all good :-) Thanks again! |
transformers | 17,737 | open | CLI: detect and store weights as float16 | # What does this PR do?
Updates the `pt-to-tf` CLI to detect and store weights as float16. Battle-tested with OPT. | 06-16-2022 16:28:03 | 06-16-2022 16:28:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17737). All of your documentation changes will be reflected on that endpoint.<|||||>> Will TensorFlow automatically load stored FP16 weights in FP32 like PyTorch does?
No, if we inspect the variables after loading, they are FP16. However, as soon as we pass some input, we can see that the outputs of the internal computations are FP32, despite the weights being FP16.
The model error (vs PT) is exactly the same, before and after adding these lines :)
The documentation is not clear, but I believe `tf.keras.backend.set_floatx` sets the precision of the internal computations. For instance, if we don't reset to float32 (the default) after storing as float16, the error is much much larger (~1e3 times larger).<|||||>Mmmm, in this case I would avoid storing the weights in FP16 on the Hub before adding something in `from_pretrained` that will convert them back to FP32 like it's done for PyTorch.<|||||>> Mmmm, in this case I would avoid storing the weights in FP16 on the Hub before adding something in from_pretrained that will convert them back to FP32 like it's done for PyTorch.
👍 I can give it a go (and leave this PR open meanwhile) |
transformers | 17,736 | closed | importing 'LongT5Model' from 'transformers' | Hello Team,
I am getting the following error when I am trying to import the new LongT5Model into my notebook:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/tmp/ipykernel_55187/343105862.py in <cell line: 3>()
1 #from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
2
----> 3 from transformers import AutoTokenizer, LongT5Model
4
5 tokenizer = AutoTokenizer.from_pretrained("google/longt5-tglobal-base")
ImportError: cannot import name 'LongT5Model' from 'transformers' (/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/__init__.py)
Transformers version:
4.19.4
Python version:
3.8 | 06-16-2022 15:21:42 | 06-16-2022 15:21:42 | Hi,
LongT5 is only available in Transformers v4.20.<|||||>I guess it got released today. Thank you @NielsRogge you are the GOAT.<|||||>@NielsRogge do I still need to use the prefix:
```python
if model_checkpoint in ["t5-small", "t5-base", "t5-larg", "t5-3b", "t5-11b"]:
    prefix = "summarize: "
else:
    prefix = ""
```
for summarization with the LongT5 model or not?
Thanks,
Jorge<|||||>@NielsRogge Also I think the model name in the long-t5 README is wrong (https://huggingface.co/google/long-t5-tglobal-large). It says
```
tokenizer = AutoTokenizer.from_pretrained("google/longt5-tglobal-large")
model = LongT5Model.from_pretrained("google/longt5-tglobal-large")
```
But I think it should be
```
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-large")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-large")
```
<|||||>Feel free to open an issue/PR on the repo on the hub!<|||||>The code examples were fixed. Closing this issue!<|||||>@jorgeutd Hi!
Have you figured out the answer to this question? In the official document, it says that LongT5 does not use a prefix. How do we use it in different downstream tasks? Thanks. |
transformers | 17,735 | closed | The messy code generated by opt125m. | ### System Info
```shell
transformers version is 4.19.3
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
transformers_opt = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
transformers_opt.eval()
text = "The trophy doesn’t fit in the suitcase because "
text_generator = TextGenerationPipeline(transformers_opt, tokenizer)
out = text_generator(text, max_length=300, do_sample=True, top_p=0.9)
print(f"transformers model out is {out}")
```
### Expected behavior
```shell
I want to get normal output.
```
| 06-16-2022 15:20:29 | 06-16-2022 15:20:29 | Hi @920232796 👋 `facebook/opt-125m` is a relatively small model, so it's normal that its outputs are not great (especially with `do_sample=True`). I'd suggest trying `facebook/opt-350m`.
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>Thank you for your patient reply.<|||||>@920232796 actually we have found a problem in the OPT files -- https://github.com/huggingface/transformers/pull/17785
It may improve the quality of the generation, but the comment above remains true :)<|||||>Thank you very much! |
transformers | 17,734 | closed | Refine Bf16 test for deepspeed | # What does this PR do?
This PR refines the `is_torch_bf16_available` test into two separate ones for GPU and CPU, as the DeepSpeed tests require the GPU bfloat16. | 06-16-2022 14:16:03 | 06-16-2022 14:16:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,733 | closed | Layoutlmv2 tesseractconfig | # What does this PR do?
Gives users the option to set the config parameter used by Tesseract when performing feature extraction, e.g. to change psm levels while performing transcription by passing in '--psm 10' to the config parameter when invoking image_to_data.
It has been shown that changing the psm values greatly influences the end result of LayoutLMV2/XLM/V3, and the specific psm value differs depending on the document formatting. Refer to: [PSM](https://github.com/tesseract-ocr/tesseract/issues/434)
```python
pytesseract.image_to_data(image, lang=lang, output_type="dict", config="--psm 10")
```
Users can now set the tesseract config parameter during Processor initialization, like so:
```python
# LayoutLMV2
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", ocr_lang="eng", tesseract_config="--psm 5")
# LayoutLMV3
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", ocr_lang="eng", tesseract_config="--psm 5")
```
## Before submitting
- [❌] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [✔️] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [❌] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [✔️] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [❌] Did you write any new necessary tests? | 06-16-2022 14:10:12 | 06-16-2022 14:10:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge kindly review. Thanks!<|||||>> LGTM! Could you add it to LayoutLMv3's feature extractor as well?
Sounds good! I'll ping you when it's done. Do I need to do anything to merge this to huggingface:main? You'll trigger the merge is that right?<|||||>>You'll trigger the merge is that right?
Yes, indeed.<|||||>Hi @kelvinAI, could you add it to LayoutLMv3 as well in this PR?
Thanks!<|||||>> Hi @kelvinAI, could you add it to LayoutLMv3 as well in this PR?
>
> Thanks!
Done! @NielsRogge <|||||>@NielsRogge pls review.
Thanks!<|||||>Hi @kelvinAI, could you apply the suggestions such that I can merge your PR?
Thanks! <|||||>@NielsRogge done! :) |
transformers | 17,732 | closed | Issue with trainer.py Line#1022,1025,1035,1043,1051,1059,1061 | ### System Info
```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.18.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
os.environ['COMET_MODE'] = 'DISABLED'
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=build_compute_metrics_fn(data_args.task))
if training_args.do_train:
trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
trainer.save_model()
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-feef8f1bc697> in <module>
1 if training_args.do_train:
----> 2 trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
3 trainer.save_model()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1262 self.lr_scheduler = lr_scheduler
1263 elif not delay_optimizer_creation:
-> 1264 self.create_optimizer_and_scheduler(num_training_steps=max_steps)
1265
1266 self.state = TrainerState()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in create_optimizer_and_scheduler(self, num_training_steps)
827 `create_scheduler`) in a subclass.
828 """
--> 829 self.create_optimizer()
830 self.create_scheduler(num_training_steps=num_training_steps, optimizer=self.optimizer)
831
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in create_optimizer(self)
851 ]
852
--> 853 optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(self.args)
854
855 if self.sharded_ddp == ShardedDDPOption.SIMPLE:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in get_optimizer_cls_and_kwargs(args)
912 raise ValueError("Trainer tried to instantiate apex FusedAdam but apex is not installed!")
913 else:
--> 914 raise ValueError(f"Trainer cannot instantiate unsupported optimizer: {args.optim}")
915 return optimizer_cls, optimizer_kwargs
916
**ValueError: Trainer cannot instantiate unsupported optimizer: adamw_hf**
### Expected behavior
```shell
if args.optim == OptimizerNames.ADAFACTOR:
optimizer_cls = Adafactor
optimizer_kwargs.update({"scale_parameter": False, "relative_step": False})
elif args.optim == OptimizerNames.ADAMW_HF:
from .optimization import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
elif args.optim == OptimizerNames.ADAMW_TORCH:
from torch.optim import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
elif args.optim == OptimizerNames.ADAMW_TORCH_XLA:
try:
from torch_xla.amp.syncfree import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer failed to import syncfree AdamW from torch_xla.")
elif args.optim == OptimizerNames.ADAMW_APEX_FUSED:
try:
from apex.optimizers import FusedAdam
optimizer_cls = FusedAdam
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer tried to instantiate apex FusedAdam but apex is not installed!")
elif args.optim == OptimizerNames.ADAMW_BNB:
try:
from bitsandbytes.optim import Adam8bit
optimizer_cls = Adam8bit
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer tried to instantiate bnb Adam8bit but bnb is not installed!")
elif args.optim == OptimizerNames.SGD:
optimizer_cls = torch.optim.SGD
elif args.optim == OptimizerNames.ADAGRAD:
optimizer_cls = torch.optim.Adagrad
else:
raise ValueError(f"Trainer cannot instantiate unsupported optimizer: {args.optim}")
return optimizer_cls, optimizer_kwargs
We can use OptimizerNames.ADAMW_HF.name == args.optim
instead of OptimizerNames.ADAMW_HF == args.optim
```
| 06-16-2022 12:17:18 | 06-16-2022 12:17:18 | Please include a *complete* reproducer for the bug you are raising. Your code sample does not include many things, in particular the `TrainingArguments`.<|||||>```python
import logging
import os
from statistics import mean, stdev
import sys
from typing import Callable, Dict
import numpy as np
from pprint import pformat
from scipy.special import softmax
import torch
from transformers import (
AutoTokenizer,
AutoConfig,
HfArgumentParser,
Trainer,
EvalPrediction,
set_seed
)
from utils import calc_classification_metrics, calc_regression_metrics, create_dir_if_not_exists
from data import load_data_from_folder
from model import TabularConfig, AutoModelWithTabular
from multimodal_args import MultiModalDataArguments, ModelArguments, MultiModalTrainingArguments
from transformers.debug_utils import DebugOption
from transformers.training_args import OptimizerNames
print(MultiModalDataArguments)
parser = HfArgumentParser((ModelArguments,MultiModalDataArguments,MultiModalTrainingArguments))
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath('config.json'))
# training_args.debug = [DebugOption.UNDERFLOW_OVERFLOW]
# training_args.optim = OptimizerNames.ADAMW_HF
stream_handler = logging.StreamHandler(sys.stdout)
file_handler = logging.FileHandler(filename=os.path.join(training_args.output_dir, 'eval_log.txt'),
mode='w+')
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
set_seed(training_args.seed)
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
create_dir_if_not_exists(training_args.output_dir)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
train_dataset, val_dataset, test_dataset = load_data_from_folder(
data_args.data_path,
data_args.text_cols,
tokenizer,
label_col=data_args.label_col,
label_list=data_args.label_list,
categorical_cols=data_args.cat_cols,
numerical_cols=data_args.num_cols,
categorical_encode_type=data_args.categorical_encoding,
numerical_transformer_method=data_args.numerical_encoding,
sep_text_token_str=tokenizer.sep_token,
do_train = training_args.do_train,
do_eval = training_args.do_eval,
do_predict = training_args.do_predict
)
train_datasets = [train_dataset]
val_datasets = [val_dataset]
test_datasets = [test_dataset]
def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:
def compute_metrics_fn(p: EvalPrediction):
if task_name == "classification":
preds_labels = np.argmax(p.predictions, axis=1)
if p.predictions.shape[-1] == 2:
pred_scores = softmax(p.predictions, axis=1)[:, 1]
else:
pred_scores = softmax(p.predictions, axis=1)
return calc_classification_metrics(pred_scores, preds_labels,
p.label_ids)
elif task_name == "regression":
preds = np.squeeze(p.predictions)
return calc_regression_metrics(preds, p.label_ids)
else:
return {}
return compute_metrics_fn
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
tabular_config = TabularConfig(num_labels=len(data_args.label_list),
cat_feat_dim=test_dataset.cat_feats.shape[
1] if test_dataset.cat_feats is not None else 0,
numerical_feat_dim=test_dataset.numerical_feats.shape[
1] if test_dataset.numerical_feats is not None else 0,
**vars(data_args))
config.tabular_config = tabular_config
model = AutoModelWithTabular.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir
)
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
os.environ['COMET_MODE'] = 'DISABLED'
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=build_compute_metrics_fn(data_args.task))
if training_args.do_train:
trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
trainer.save_model()
```
Config
```json
{"text_cols" : [],
"num_cols" : [],
"label_col" : "label",
"label_list" : [],
"model_name_or_path" : "bert-base-uncased",
"data_path" : "input",
"combine_feat_method" : "gating",
"task" : "classification",
"create_folds" : false,
"num_classes" : 3,
"numerical_transformer_method" : "min_max",
"output_dir" : "run/output",
"logging_dir" : "run/log",
"overwrite_output_dir" : true,
"do_train" : true,
"do_eval" : true,
"do_predict" : true,
"per_device_train_batch_size" : 256,
"per_device_eval_batch_size" : 256,
"num_train_epochs" : 10,
"evaluate_during_training" : true,
"logging_steps" : 25,
"eval_steps" : 50,
"save_steps" : 50,
"log_level" : "INFO",
"report_to" : []
}
```
The training_args.optim = "adamw_hf" ( default choice )<|||||>Same as in the other issue, this is not a reproducer. We have no idea how you defined the class `MultiModalTrainingArguments` in particular, which is probably where the problem lies.
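For what it's worth, here is a rough illustration (an assumption about your setup, not a diagnosis of your exact code) of why a custom arguments class is the usual culprit: the stock `TrainingArguments` converts the raw string into the enum during `__post_init__` (roughly `self.optim = OptimizerNames(self.optim)`), so the `Trainer` comparisons only work on the enum:
```python
from transformers.training_args import OptimizerNames

optim = "adamw_hf"                       # what a custom args class may leave you with
optim = OptimizerNames(optim)            # what the stock post-init roughly does
print(optim == OptimizerNames.ADAMW_HF)  # True, so the adamw_hf branch is taken
```
If your `MultiModalTrainingArguments` overrides `__post_init__` (or redefines `optim`) and keeps it as a plain string, none of the enum comparisons match and you fall through to the `ValueError`.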
Please use the [forums](https://discuss.huggingface.co/) to get help from the community to debug your code (and find a small reproducer of the bug if it is indeed a bug in the library).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,731 | closed | Improve vision models | # What does this PR do?
This PR improves the vision models by:
- removing `to_2tuple`
- sanity checking whether the channel dimension of the pixel values provided to the model matches `config.num_channels` (see the sketch after this list)
- replacing hardcoded 3 with `config.num_channels` for `xxxForMaskedImageModeling` models (fixes #17727)
- replacing hardcoded 3 by `config.num_channels` in Flax models (ViT, BEiT)
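As a minimal sketch of the sanity check mentioned above (illustrative only; the exact wording and placement differ per model):
```python
# inside the patch embedding forward, before projecting pixel_values
num_channels = pixel_values.shape[1]
if num_channels != self.num_channels:
    raise ValueError(
        "Make sure that the channel dimension of the pixel values matches the one set in the configuration."
    )
```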
To do:
- [x] ViT
- [x] BEiT
- [x] DeiT
- [x] Swin
- [x] PoolFormer
- [x] DPT
- [x] YOLOS
- [x] ViLT
- [x] GLPN
- [x] DPT
- [x] Data2VecVision
- [x] MaskFormer
- [x] ViTMAE
- [x] TF and Flax implementations
- [x] Corresponding test files
- [x] add more Copied from statements (e.g. DropPath) | 06-16-2022 12:16:03 | 06-16-2022 12:16:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,730 | closed | Fix tf shared embedding | # What does this PR do?
A hack was used to properly import the shared embedding weights, but it can be removed (removing it also is convenient for the sharding PR)
Found this while testing #17713. In HF's `save_pretrained` and `load_pretrained` the layer name is changed using ` name = "/".join(weight_name.split("/")[1:])`. This was breaking for OPT as the layer name was ` 'decoder.embed_tokens/model.decoder.embed_tokens/weight:0'` instead of `'tfopt_model/model/decoder/embed_tokens/weight:0'`. The naming is strange, so a scope hack had to be used. The hack comes from `BART`.
<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,729 | closed | Issue with trainer.py class line #1460, 2643 and 1745. | ### System Info
```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.18.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
os.environ['COMET_MODE'] = 'DISABLED'
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=build_compute_metrics_fn(data_args.task))
trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
trainer.save_model()
```
Error :
TypeError Traceback (most recent call last)
<ipython-input-13-feef8f1bc697> in <module>
1 if training_args.do_train:
----> 2 trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
3 trainer.save_model()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1239 )
1240
-> 1241 if DebugOption.UNDERFLOW_OVERFLOW in self.args.debug:
1242 if self.args.n_gpu > 1:
1243 # nn.DataParallel(model) replicates the model, creating new variables and module
**TypeError: 'in <string>' requires string as left operand, not DebugOption**
### Expected behavior
```shell
The solution can be to use
DebugOption.UNDERFLOW_OVERFLOW.value in self.args.debug:
instead of
DebugOption.UNDERFLOW_OVERFLOW in self.args.debug:
```
| 06-16-2022 11:19:27 | 06-16-2022 11:19:27 | Please include a *complete* reproducer for the bug you are raising. Your code sample does not include many things, in particular the `TrainingArguments`.<|||||>```python
import logging
import os
from statistics import mean, stdev
import sys
from typing import Callable, Dict
import numpy as np
from pprint import pformat
from scipy.special import softmax
import torch
from transformers import (
AutoTokenizer,
AutoConfig,
HfArgumentParser,
Trainer,
EvalPrediction,
set_seed
)
from utils import calc_classification_metrics, calc_regression_metrics, create_dir_if_not_exists
from data import load_data_from_folder
from model import TabularConfig, AutoModelWithTabular
from multimodal_args import MultiModalDataArguments, ModelArguments, MultiModalTrainingArguments
from transformers.debug_utils import DebugOption
from transformers.training_args import OptimizerNames
print(MultiModalDataArguments)
parser = HfArgumentParser((ModelArguments,MultiModalDataArguments,MultiModalTrainingArguments))
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath('config.json'))
# training_args.debug = [DebugOption.UNDERFLOW_OVERFLOW]
# training_args.optim = OptimizerNames.ADAMW_HF
stream_handler = logging.StreamHandler(sys.stdout)
file_handler = logging.FileHandler(filename=os.path.join(training_args.output_dir, 'eval_log.txt'),
mode='w+')
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
set_seed(training_args.seed)
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
create_dir_if_not_exists(training_args.output_dir)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
train_dataset, val_dataset, test_dataset = load_data_from_folder(
data_args.data_path,
data_args.text_cols,
tokenizer,
label_col=data_args.label_col,
label_list=data_args.label_list,
categorical_cols=data_args.cat_cols,
numerical_cols=data_args.num_cols,
categorical_encode_type=data_args.categorical_encoding,
numerical_transformer_method=data_args.numerical_encoding,
sep_text_token_str=tokenizer.sep_token,
do_train = training_args.do_train,
do_eval = training_args.do_eval,
do_predict = training_args.do_predict
)
train_datasets = [train_dataset]
val_datasets = [val_dataset]
test_datasets = [test_dataset]
def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:
def compute_metrics_fn(p: EvalPrediction):
if task_name == "classification":
preds_labels = np.argmax(p.predictions, axis=1)
if p.predictions.shape[-1] == 2:
pred_scores = softmax(p.predictions, axis=1)[:, 1]
else:
pred_scores = softmax(p.predictions, axis=1)
return calc_classification_metrics(pred_scores, preds_labels,
p.label_ids)
elif task_name == "regression":
preds = np.squeeze(p.predictions)
return calc_regression_metrics(preds, p.label_ids)
else:
return {}
return compute_metrics_fn
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
tabular_config = TabularConfig(num_labels=len(data_args.label_list),
cat_feat_dim=test_dataset.cat_feats.shape[
1] if test_dataset.cat_feats is not None else 0,
numerical_feat_dim=test_dataset.numerical_feats.shape[
1] if test_dataset.numerical_feats is not None else 0,
**vars(data_args))
config.tabular_config = tabular_config
model = AutoModelWithTabular.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir
)
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
os.environ['COMET_MODE'] = 'DISABLED'
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO,
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=build_compute_metrics_fn(data_args.task))
if training_args.do_train:
trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
trainer.save_model()
```
Config File
```json
{"text_cols" : [],
"num_cols" : [],
"label_col" : "label",
"label_list" : [],
"model_name_or_path" : "bert-base-uncased",
"data_path" : "input",
"combine_feat_method" : "gating",
"task" : "classification",
"create_folds" : false,
"num_classes" : 3,
"numerical_transformer_method" : "min_max",
"output_dir" : "run/output",
"logging_dir" : "run/log",
"overwrite_output_dir" : true,
"do_train" : true,
"do_eval" : true,
"do_predict" : true,
"per_device_train_batch_size" : 256,
"per_device_eval_batch_size" : 256,
"num_train_epochs" : 10,
"evaluate_during_training" : true,
"logging_steps" : 25,
"eval_steps" : 50,
"save_steps" : 50,
"log_level" : "INFO",
"report_to" : []
}
```
The training_args.debug = "" (default value)<|||||>This is not a reproducer, as it relies on modules you have defined in your environment. We have no idea how you defined the class `MultiModalTrainingArguments` in particular, which is probably where the problem lies.
Please use the [forums](https://discuss.huggingface.co/) to get help from the community to debug your code (and find a small reproducer of the bug if it is indeed a bug in the library).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,728 | closed | CI Tests are failing in "run_tests_pipelines_tf" | ### System Info
```shell
CI setup for `run_tests_pipelines_tf`:
run_tests_pipelines_tf:
working_directory: ~/transformers
docker:
- image: circleci/python:3.7
environment:
OMP_NUM_THREADS: 1
RUN_PIPELINE_TESTS: yes
TRANSFORMERS_IS_CI: yes
resource_class: xlarge
parallelism: 1
steps:
- checkout
- restore_cache:
keys:
- v0.4-tf-{{ checksum "setup.py" }}
- v0.4-{{ checksum "setup.py" }}
- run: pip install --upgrade pip
- run: pip install .[sklearn,tf-cpu,testing,sentencepiece]
- run: pip install tensorflow_probability
- save_cache:
key: v0.4-tf-{{ checksum "setup.py" }}
paths:
- '~/.cache/pip'
- run: python utils/tests_fetcher.py | tee test_preparation.txt
- store_artifacts:
path: ~/transformers/test_preparation.txt
- run: |
if [ -f test_list.txt ]; then
python -m pytest -n 8 --max-worker-restart=0 --dist=loadfile -rA -s --make-reports=tests_pipelines_tf $(cat test_list.txt) -m is_pipeline_test | tee tests_output.txt
fi
- store_artifacts:
path: ~/transformers/tests_output.txt
- store_artifacts:
path: ~/transformers/reports
```
### Who can help?
@ydshieh , @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following CI tests are failing in "run_tests_pipelines_tf" for PR #17623, even though no changes were made to TF or speech functionality:
```
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_chunking_fast
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_return_timestamps_ctc_fast
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_small_model_pt
===== 3 failed, 680 passed, 578 skipped, 270 warnings in 222.67s (0:03:42) =====
```
```=================================== FAILURES ===================================
__________ AutomaticSpeechRecognitionPipelineTests.test_chunking_fast __________
[gw1] linux -- Python 3.7.12 /usr/local/bin/python
self = Audio(sampling_rate=16000, mono=True, decode=True, id=None)
value = '/home/circleci/.cache/huggingface/datasets/downloads/extracted/06793a6d1707e1987473fd67ba38f9156c15e5a8d2956a4fff1d9690877b20a8/dev_clean/1272/128104/1272-128104-0000.flac'
def encode_example(self, value: Union[str, dict]) -> dict:
"""Encode example into a format for Arrow.
Args:
value (:obj:`str` or :obj:`dict`): Data passed as input to Audio feature.
Returns:
:obj:`dict`
"""
try:
> import soundfile as sf # soundfile is a dependency of librosa, needed to decode audio files.
E ModuleNotFoundError: No module named 'soundfile'
../.local/lib/python3.7/site-packages/datasets/features/audio.py:83: ModuleNotFoundError
```
```
_________ AutomaticSpeechRecognitionPipelineTests.test_small_model_pt __________
[gw3] linux -- Python 3.7.12 /usr/local/bin/python
self = <tests.pipelines.test_pipelines_automatic_speech_recognition.AutomaticSpeechRecognitionPipelineTests testMethod=test_small_model_pt>
@require_torch
def test_small_model_pt(self):
speech_recognizer = pipeline(
task="automatic-speech-recognition",
model="facebook/s2t-small-mustc-en-fr-st",
tokenizer="facebook/s2t-small-mustc-en-fr-st",
> framework="pt",
)
tests/pipelines/test_pipelines_automatic_speech_recognition.py:136:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/pipelines/__init__.py:638: in pipeline
feature_extractor, revision=revision, _from_pipeline=task, **model_kwargs
src/transformers/models/auto/feature_extraction_auto.py:326: in from_pretrained
return feature_extractor_class.from_dict(config_dict, **kwargs)
src/transformers/utils/import_utils.py:809: in __getattr__
requires_backends(cls, cls._backends)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <class 'transformers.utils.dummy_speech_objects.Speech2TextFeatureExtractor'>
backends = ['speech']
def requires_backends(obj, backends):
if not isinstance(backends, (list, tuple)):
backends = [backends]
name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
checks = (BACKENDS_MAPPING[backend] for backend in backends)
failed = [msg.format(name) for available, msg in checks if not available()]
if failed:
> raise ImportError("".join(failed))
E ImportError:
E Speech2TextFeatureExtractor requires the torchaudio library but it was not found in your environment. You can install it with pip:
E `pip install torchaudio
```
### Expected behavior
```shell
No tests fail
```
| 06-16-2022 07:48:00 | 06-16-2022 07:48:00 | |
transformers | 17,727 | closed | SimMIM output num_channels should not be hardcoded | ### Feature request
In all 3 SimMIM models, the channel dimension of the reconstructed image is hardcoded to 3. This should be configurable via `num_channels`:
```
deit/modeling_deit.py
swin/modeling_swin.py
vit/modeling_vit.py
nn.Conv2d(in_channels=config.hidden_size, out_channels=config.encoder_stride**2 * 3, kernel_size=1)
```
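For illustration, the fix I have in mind would be along these lines (untested sketch):
```python
nn.Conv2d(
    in_channels=config.hidden_size,
    out_channels=config.encoder_stride**2 * config.num_channels,
    kernel_size=1,
)
```
(and similarly using `config.num_channels` wherever the reconstruction is reshaped or the loss is computed.)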
@NielsRogge
### Motivation
I'm training a grayscale model, but the reconstructed image has a different number of channels than the input image.
### Your contribution
None | 06-16-2022 06:19:06 | 06-16-2022 06:19:06 | |
transformers | 17,726 | closed | Input Packing | ### Feature request
Sequence packing when tokenizing inputs.
Most modern large language models pack together multiple sequences to saturate their large context windows. Otherwise, they risk wasted computation on excess padding. For example, T5, GPT-3, and PaLM all implement input packing.
"During training we always train on sequences of the full nctx = 2048 token context window, packing multiple
documents into a single sequence when documents are shorter than 2048, in order to increase computational efficiency.
Sequences with multiple documents are not masked in any special way but instead documents within a sequence
are delimited with a special end of text token, giving the language model the information necessary to infer that
context separated by the end of text token is unrelated. This allows for efficient training without need for any special
sequence-specific masking." [1]
"We use a maximum sequence length of 512 and a batch size of 128 sequences. Whenever possible, we “pack” multiple sequences into each entry of the batch..." [2]
I would suggest a change to tokenizers. When tokenizing multiple sequences, sufficiently small inputs are automatically packed together with a special delimiter token separating them. The simplest approach would be a greedy method, where inputs under something like 70% of the context window are added to a queue. Or when truncating longer inputs, the remaining chunk is added to the queue.
When the queue grows to be larger than the size of the window, the first $n_{window}$ tokens are flushed and added as another input.
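To make the idea concrete, here is a rough (untested) sketch of the greedy packer; `delimiter_id` and the 70% threshold are placeholders:
```python
def pack_sequences(tokenized_inputs, window, delimiter_id, threshold=0.7):
    """Greedily pack short token-id sequences into full context windows."""
    packed, queue = [], []
    for ids in tokenized_inputs:
        if len(ids) >= threshold * window:
            packed.append(ids[:window])    # long enough: keep (truncated) as-is
            queue.extend(ids[window:])     # leftover chunk goes to the queue
        else:
            queue.extend(ids + [delimiter_id])
        while len(queue) >= window:        # flush full windows from the queue
            packed.append(queue[:window])
            queue = queue[window:]
    if queue:
        packed.append(queue)               # final partial window (to be padded)
    return packed
```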
A more aggressive strategy would be something like this - https://arxiv.org/pdf/2107.02027.pdf
In addition, it would be useful to generate and supply optional attention masks to the model that prevent it from attending across different packed sequences.
[1] - Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901.
[2] - Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 11.
### Motivation
I have been dealing with a dataset with a very high variance in its sequence lengths. I implemented something like this for myself to pack inputs together and thought it could be quite useful as a general feature.
### Your contribution
I could probably make the greedy packer and support the custom masks if I have time in the next few weeks. | 06-16-2022 04:58:15 | 06-16-2022 04:58:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Sanger2000 this is cool - surprised you didn't get much interest. Would you be willing to expand on your approach here so that others can pick it up even if it doesn't get incorporated into the library?
Specifically:
1. what is the delimiter token you used
2. how did you modify the masks/outputs/model to handle the packed inputs?
If you have any pointers to existing code or other details, please share them here.
#6661 is another issue that asks about packing |
transformers | 17,725 | closed | Fix bug in the example of VisualBertForPreTraining | VisualBert uses the bert-base-uncased tokenizer; therefore, instead of {mask}, the mask token should be [MASK] :)
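For illustration (just the shape of the fix, not the exact docstring text), the masked sentence in the example should look like:
```python
text = "The capital of France is [MASK]."  # [MASK] is bert-base-uncased's mask token
```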
- Documentation: @sgugger | 06-16-2022 04:54:25 | 06-16-2022 04:54:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,724 | closed | Inference benchmarks of Torchdynamo + FX2TRT(now in Torch-TensorRT) | Hey HF folks:
I propose this PR as a preview of performance results for our SOTA inference engine combining torchdynamo + fx2trt. This PR is based on the sound and great work in #17240 (cc @Chillee @jansel). Since the importance of torchdynamo is already emphasized in #17240, I will skip it and focus on the inference efforts we propose by combining torchdynamo + fx2trt.
# A short introduction about fx2trt
It is a library developed by the Meta team (cc @yinghai) to lower FX graphs to TensorRT on GPU and take advantage of its various optimization paths. The library itself was recently merged into [Torch-TensorRT](https://github.com/pytorch/TensorRT) as one of its two lowering paths to TensorRT on GPU.
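As a rough illustration of the usage pattern (not the benchmark script itself; the backend name and exact API here are simplified assumptions and may differ from the final integration):
```python
import torch
import torchdynamo
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval().cuda()
inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt").to("cuda")

# torchdynamo captures the graph and hands it to a TensorRT-lowering backend
with torch.no_grad(), torchdynamo.optimize("fx2trt"):
    outputs = model(**inputs)
```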
# Results
This script primarily comes from a great effort from @anijain2305 and I added some inference implementation logic there.
Some highlights about the results:
1. Compared with the FX integration, it is important that torchdynamo can handle the corner cases that FX could not, such as control flow or set-item operations. That is the main reason we could extend our implementation to these 12 models. Actually, more models could probably be run, but I just followed the same model pool from #17204
2. The experiments are run across different batch sizes from 1 to 4 and 8. The speedup is compared against the eager model. The trend is that the speedup is quite high at a small batch size like 1 but decreases as the batch size grows. The reason could be that torch eager mode is inefficient at handling small computation kernels, while for large kernels TRT and eager mode call similar kernel implementations, so their perf gap shrinks at larger batch sizes.
3. TensorRT has an accuracy degradation problem in float16 mode because of its optimization techniques. I changed the accuracy validation standard to [Cosine Similarity](https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html), which is another important way to evaluate accuracy against eager mode. For float32 mode, the absolute difference is used as the standard.
Run on A100:
```
To run for fp32 mode:
$ python hf_dynamo.py --run-dynamo-fx2trt-fp32 --use-eval-mode
To run for fp16 mode:
$ python hf_dynamo.py --run-dynamo-fx2trt-fp16 --use-eval-mode
```
cc @stas00
=== Final results for fp16 ===
**Batch size = 1**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float16 | True | eager | 0.010 | 0.458 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.240 | 2.930 | 1.906 |
| AlbertForMaskedLM | torch.float16 | True | eager | 0.012 | 0.417 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.004 | 0.051 | 2.775 | 8.257 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.013 | 0.630 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.316 | 2.314 | 1.991 |
| T5ForConditionalGeneration | torch.float16 | True | eager | 0.021 | 0.504 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.024 | 0.162 | 0.898 | 3.115 |
| DistilBertForMaskedLM | torch.float16 | True | eager | 0.006 | 0.273 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.158 | 2.119 | 1.722 |
| RobertaForMaskedLM | torch.float16 | True | eager | 0.011 | 0.506 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.290 | 1.986 | 1.743 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.006 | 0.446 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.220 | 1.909 | 2.026 |
| ElectraForMaskedLM | torch.float16 | True | eager | 0.010 | 0.458 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.241 | 3.044 | 1.900 |
| ConvBertForMaskedLM | torch.float16 | True | eager | 0.020 | 0.459 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.021 | 0.231 | 0.944 | 1.985 |
| MobileBertForMaskedLM | torch.float16 | True | eager | 0.040 | 0.297 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.128 | 6.693 | 2.312 |
| CamembertForMaskedLM | torch.float16 | True | eager | 0.011 | 0.460 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.005 | 0.241 | 2.075 | 1.906 |
| LayoutLMForMaskedLM | torch.float16 | True | eager | 0.011 | 0.466 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.004 | 0.247 | 3.031 | 1.885 |
**Batch size = 4**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float16 | True | eager | 0.012 | 1.185 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.326 | 1.445 | 3.631 |
| AlbertForMaskedLM | torch.float16 | True | eager | 0.013 | 1.455 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.005 | 0.137 | 2.508 | 10.610 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.014 | 1.826 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.515 | 1.705 | 3.549 |
| T5ForConditionalGeneration | torch.float16 | True | eager | 0.025 | 1.684 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.024 | 0.291 | 1.055 | 5.778 |
| DistilBertForMaskedLM | torch.float16 | True | eager | 0.007 | 0.687 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.253 | 1.227 | 2.721 |
| RobertaForMaskedLM | torch.float16 | True | eager | 0.015 | 1.288 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.012 | 0.427 | 1.246 | 3.012 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.008 | 1.120 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.394 | 1.430 | 2.842 |
| ElectraForMaskedLM | torch.float16 | True | eager | 0.013 | 1.186 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.329 | 1.517 | 3.605 |
| ConvBertForMaskedLM | torch.float16 | True | eager | 0.021 | 1.243 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.020 | 0.321 | 1.077 | 3.876 |
| MobileBertForMaskedLM | torch.float16 | True | eager | 0.043 | 0.893 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.221 | 4.923 | 4.046 |
| CamembertForMaskedLM | torch.float16 | True | eager | 0.014 | 1.189 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.011 | 0.331 | 1.302 | 3.597 |
| LayoutLMForMaskedLM | torch.float16 | True | eager | 0.012 | 1.189 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.330 | 1.443 | 3.599 |
**Batch size = 8**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float16 | True | eager | 0.015 | 2.156 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.014 | 0.439 | 1.014 | 4.916 |
| AlbertForMaskedLM | torch.float16 | True | eager | 0.017 | 2.840 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.250 | 1.892 | 11.361 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.021 | 3.393 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.015 | 0.780 | 1.376 | 4.349 |
| T5ForConditionalGeneration | torch.float16 | True | eager | 0.025 | 3.243 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.027 | 0.465 | 0.929 | 6.977 |
| DistilBertForMaskedLM | torch.float16 | True | eager | 0.009 | 1.237 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.371 | 0.904 | 3.334 |
| RobertaForMaskedLM | torch.float16 | True | eager | 0.018 | 2.335 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.019 | 0.618 | 0.958 | 3.781 |
| GPT2LMHeadModel | torch.float16 | True | eager | 0.013 | 2.001 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.010 | 0.626 | 1.285 | 3.197 |
| ElectraForMaskedLM | torch.float16 | True | eager | 0.015 | 2.156 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.014 | 0.444 | 1.023 | 4.857 |
| ConvBertForMaskedLM | torch.float16 | True | eager | 0.022 | 2.273 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.024 | 0.439 | 0.937 | 5.180 |
| MobileBertForMaskedLM | torch.float16 | True | eager | 0.042 | 1.685 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.013 | 0.343 | 3.265 | 4.909 |
| CamembertForMaskedLM | torch.float16 | True | eager | 0.016 | 2.171 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.017 | 0.453 | 0.950 | 4.792 |
| LayoutLMForMaskedLM | torch.float16 | True | eager | 0.015 | 2.163 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.015 | 0.445 | 1.010 | 4.859 |
=== Final results for fp32 ===
**Batch size = 1**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float32 | True | eager | 0.010 | 0.905 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.468 | 2.371 | 1.936 |
| AlbertForMaskedLM | torch.float32 | True | eager | 0.011 | 0.839 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.100 | 2.563 | 8.401 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.011 | 1.237 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.005 | 0.616 | 2.163 | 2.009 |
| T5ForConditionalGeneration | torch.float32 | True | eager | 0.015 | 0.648 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.008 | 0.318 | 1.879 | 2.038 |
| DistilBertForMaskedLM | torch.float32 | True | eager | 0.006 | 0.534 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.003 | 0.313 | 2.080 | 1.705 |
| RobertaForMaskedLM | torch.float32 | True | eager | 0.011 | 0.998 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.006 | 0.563 | 1.707 | 1.773 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.006 | 0.879 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.003 | 0.429 | 1.987 | 2.050 |
| ElectraForMaskedLM | torch.float32 | True | eager | 0.010 | 0.902 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.471 | 2.478 | 1.914 |
| ConvBertForMaskedLM | torch.float32 | True | eager | 0.019 | 0.926 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.017 | 0.462 | 1.088 | 2.004 |
| MobileBertForMaskedLM | torch.float32 | True | eager | 0.039 | 0.593 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.257 | 5.388 | 2.305 |
| CamembertForMaskedLM | torch.float32 | True | eager | 0.011 | 0.904 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.006 | 0.474 | 1.833 | 1.908 |
| LayoutLMForMaskedLM | torch.float32 | True | eager | 0.011 | 0.918 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.005 | 0.483 | 2.391 | 1.900 |
**Batch size = 4**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float32 | True | eager | 0.015 | 2.361 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.643 | 1.334 | 3.672 |
| AlbertForMaskedLM | torch.float32 | True | eager | 0.015 | 2.906 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.009 | 0.271 | 1.740 | 10.734 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.016 | 3.631 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.012 | 1.012 | 1.412 | 3.589 |
| T5ForConditionalGeneration | torch.float32 | True | eager | 0.017 | 1.982 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.579 | 1.472 | 3.424 |
| DistilBertForMaskedLM | torch.float32 | True | eager | 0.007 | 1.364 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.498 | 1.109 | 2.741 |
| RobertaForMaskedLM | torch.float32 | True | eager | 0.014 | 2.568 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.014 | 0.850 | 1.028 | 3.022 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.009 | 2.227 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.777 | 1.336 | 2.866 |
| ElectraForMaskedLM | torch.float32 | True | eager | 0.013 | 2.360 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.648 | 1.182 | 3.640 |
| ConvBertForMaskedLM | torch.float32 | True | eager | 0.020 | 2.475 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.639 | 1.054 | 3.874 |
| MobileBertForMaskedLM | torch.float32 | True | eager | 0.041 | 1.783 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.441 | 3.610 | 4.047 |
| CamembertForMaskedLM | torch.float32 | True | eager | 0.013 | 2.375 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.013 | 0.658 | 1.010 | 3.610 |
| LayoutLMForMaskedLM | torch.float32 | True | eager | 0.013 | 2.373 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.655 | 1.182 | 3.620 |
**Batch size = 8**
| model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression |
|:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | torch.float32 | True | eager | 0.024 | 4.309 | 1.000 | 1.000 |
| BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.874 | 1.184 | 4.928 |
| AlbertForMaskedLM | torch.float32 | True | eager | 0.029 | 5.679 | 1.000 | 1.000 |
| AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.016 | 0.500 | 1.787 | 11.367 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.031 | 6.769 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.022 | 1.548 | 1.436 | 4.374 |
| T5ForConditionalGeneration | torch.float32 | True | eager | 0.020 | 3.725 | 1.000 | 1.000 |
| T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.924 | 1.018 | 4.033 |
| DistilBertForMaskedLM | torch.float32 | True | eager | 0.014 | 2.470 | 1.000 | 1.000 |
| DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.012 | 0.743 | 1.115 | 3.325 |
| RobertaForMaskedLM | torch.float32 | True | eager | 0.026 | 4.670 | 1.000 | 1.000 |
| RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.024 | 1.236 | 1.070 | 3.779 |
| GPT2LMHeadModel | torch.float32 | True | eager | 0.018 | 3.993 | 1.000 | 1.000 |
| GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.013 | 1.243 | 1.356 | 3.211 |
| ElectraForMaskedLM | torch.float32 | True | eager | 0.024 | 4.315 | 1.000 | 1.000 |
| ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.892 | 1.210 | 4.837 |
| ConvBertForMaskedLM | torch.float32 | True | eager | 0.030 | 4.525 | 1.000 | 1.000 |
| ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.031 | 0.882 | 0.991 | 5.133 |
| MobileBertForMaskedLM | torch.float32 | True | eager | 0.044 | 3.371 | 1.000 | 1.000 |
| MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.685 | 2.338 | 4.921 |
| CamembertForMaskedLM | torch.float32 | True | eager | 0.024 | 4.338 | 1.000 | 1.000 |
| CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.022 | 0.903 | 1.098 | 4.802 |
| LayoutLMForMaskedLM | torch.float32 | True | eager | 0.024 | 4.322 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.888 | 1.185 | 4.869 |
| 06-16-2022 00:12:26 | 06-16-2022 00:12:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17724). All of your documentation changes will be reflected on that endpoint.<|||||>This looks excellent, @frank-wei, thank you for sharing the extensive benchmarks.
So the next step is to document how the users can deploy the proposed solution so that they don't need to try to fish it out from the benchmark code. Does it make sense?
wrt the benchmark script in this PR, I'm not sure where it'd best belong. Perhaps somewhere in your repo as I'm sure it's going to evolve and then we could link to it from our documentation? how does that sound?
also cc: @sgugger <|||||>This should just be a simple addition to the existing TorchDynamo integration with NVFuser (cc: @anijain2305).<|||||>> This should just be a simple addition to the existing TorchDynamo integration with NVFuser (cc: @anijain2305).
If we do that, as I proposed originally looking into the 8 ball, it should go under the same cmd arg and have a new value - as we aren't going to add a new cmd arg for each variation.
I'd like us to discuss the proposed integration API modifications before implementing them, to save everybody's time.
As I suggested, probably the best future-proofing is to have the value composed of possibly multiple "keys" key1:key2:...:keyn - so that multiple combos could be supported down the road.<|||||>@stas00 and @Chillee any context about the design of the integration API? Could the one API work both for inference (fx2trt) and training (AOT)?<|||||>It's the one that was added recently to integrate torchdynamo with the nvfuser backend:
https://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L467-L469
you can see the PR here: https://github.com/huggingface/transformers/pull/17308<|||||>Since fx2trt needs some preprocessing time to trace the model and create the TRT engine, it is not suitable for inference inside the training process.
Does HF have pure inference scenarios where fx2trt could be leveraged?<|||||>> Does HF have pure inference scenarios where fx2trt could be leveraged?
Of course it has. Everything else besides training is inference
https://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L123-L127
https://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L267-L272
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@frank-wei have you tried `PEGASUS` ? It seemed to be not supported yet. I filled an issue with details: https://github.com/pytorch/torchdynamo/issues/777<|||||>> @frank-wei have you tried `PEGASUS` ? It seemed to be not supported yet. I filled an issue with details: [pytorch/torchdynamo#777](https://github.com/pytorch/torchdynamo/issues/777)
@philschmid , I did not try `PEGASUS` before.
The problem in your case is that the torch_tensorrt FX-path nightly version is out of sync with the PyTorch nightly. I am updating it today. |
transformers | 17,723 | closed | Sort the model doc Toc Alphabetically | # What does this PR do?
This PR sorts the ToC a bit, since it currently has some G models in the middle of the Os. | 06-15-2022 19:44:15 | 06-15-2022 19:44:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,722 | closed | normalize keys_to_ignore | as discussed at https://github.com/huggingface/transformers/issues/16719#issuecomment-1156599137 this PR normalizes `_keys_to_ignore_on*` so that dots are not backslash-escaped unless the entry is an actual regex with regex patterns. This is just for consistency and easier troubleshooting, since it is sometimes unclear whether `\` is needed when different modeling files use different styles.
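A quick illustration of the two styles in question (a made-up entry, just to show the normalization):
```python
# before: the dot is escaped even though it only ever matches a literal key
_keys_to_ignore_on_load_missing = [r"lm_head\.weight"]

# after: no backslash; `.` matching any character is harmless here
_keys_to_ignore_on_load_missing = [r"lm_head.weight"]
```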
@sgugger
| 06-15-2022 18:25:07 | 06-15-2022 18:25:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,721 | closed | [tests] workaround for relative dataset path | `datasets==2.3.1` introduced a bug where it fails to load a dataset via a path that contains `..`.
This PR works around it by avoiding this situation in the tests.
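Roughly, the workaround is to hand `datasets` an already-resolved path instead of one containing `..` (an illustrative sketch, not the exact test code):
```python
from pathlib import Path
from datasets import load_dataset

# instead of load_dataset("json", data_files="../fixtures/sample.json")
data_file = str((Path(__file__).parent / ".." / "fixtures" / "sample.json").resolve())
ds = load_dataset("json", data_files=data_file)
```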
@sgugger, @ydshieh | 06-15-2022 17:03:14 | 06-15-2022 17:03:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>the just released `datasets==2.3.2` fixed the bug, so closing this PR. |
transformers | 17,720 | closed | CLI: Add flag to push TF weights directly into main | # What does this PR do?
Adds the `--push` flag to the `pt-to-tf` CLI, enabling pushes straight to main (assuming the user has the right permissions).
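For example, a conversion pushed straight into the main branch of a repo you have write access to would look something like this (illustrative invocation; the repo id is a placeholder):
```bash
transformers-cli pt-to-tf --model-name <your-model-repo> --push
```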
Why am I adding this flag? A few users mentioned that they would be interested in having TF weights silently pushed straight into their repos. With this flag, I can start building a local midnight cronjob to automate the conversions (starting with pushes for users that requested it, then PR opening on users interested in PRs, then 1 PR per user if not whitelisted), which can eventually be moved into a separate machine to continuously feed the TF ecosystem 🔥 | 06-15-2022 16:47:48 | 06-15-2022 16:47:48 | Yes yes yes! Maybe users could just set a flag, like "auto-convert my weights" and then we could have a machine that tracks their repos, looks for weights files that have changed and auto-updates versions for the other frameworks?<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,719 | closed | [example] image classification example requires newer datasets version | # What does this PR do?
The DeepSpeed + Transformers integration tests are showing issues with this example. We install `datasets >= 1.8.0` in the requirements.txt for this example. However, running the example requires `datasets.Image()`, which wasn't introduced in datasets until 1.17.0 (as far as I can tell). This PR bumps up the min version to one that works with the example.
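Concretely, the change is just a bump of the pin in the example's `requirements.txt`, along these lines (exact minimum version as needed by `datasets.Image()`):
```
datasets>=1.17.0
```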
Example output showing the error: https://github.com/microsoft/DeepSpeed/runs/6869668978?check_suite_focus=true
We also are seeing issues with some of the examples with the latest datasets release but will file a different issue for that.
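For reference, a minimal sketch of the `datasets.Image()` feature the example relies on (the file path below is only a placeholder; decoding happens when the column is accessed):
```python
from datasets import Dataset, Image

ds = Dataset.from_dict({"image": ["path/to/img.png"]}).cast_column("image", Image())
print(ds.features)  # {'image': Image(decode=True, id=None)}
```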
/cc @mrwyattii
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00, @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-15-2022 16:41:12 | 06-15-2022 16:41:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,718 | closed | `max_length` and `stopping_criteria` in generate() | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
Hi,
I don't get the logic behind `max_length` and `stopping_criteria` for `generate(self,...)` function for encoder-decoder models.
Accordingly, when you pass `max_length` you get the deprecated warning, which is ok - however, it recommends using the `StoppingCriteriaList` object with the `MaxLengthCriteria`.
Now the real problem happens:
The `generate()` function uses the following code:
```python
# 5. Prepare `max_length` depending on other stopping criteria
# if `max_new_tokens` is passed, but not `max_length` -> set `max_length = max_new_tokens`
if max_length is None and max_new_tokens is not None:
max_length = max_new_tokens + input_ids_seq_length
elif max_length is not None and max_new_tokens is not None:
# Both are set, this is odd, raise a warning
warnings.warn(
"Both `max_length` and `max_new_tokens` have been set "
f"but they serve the same purpose. `max_length` {max_length} "
f"will take priority over `max_new_tokens` {max_new_tokens}.",
UserWarning,
)
# default to config if still None
max_length = max_length if max_length is not None else self.config.max_length
```
As you can see, `max_length` is going to have a value no matter what (even if you pass `max_length=None`, the value is set to `self.config.max_length`, which is equal to 20 for T5, and this is extremely bad for users who are not aware of it... in older versions this wasn't the behavior of the `generate()` function).
Now, if you pass `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(100))` you get an Exception (because max_length is not None and you init the default StoppingCriteriaList with MaxLengthCriteria(max_length==self.config.max_length)).
If you pass `max_length=100` you get the warning.
If you don't pass `max_length ` you still get the warning (because `max_length = max_length if max_length is not None else self.config.max_length`)
So how exactly someone is supposed to set the max length?
Should I change self.config.max_length? This is not a good practice...
Of course, I can pass `max_length` for `generate()`, however, the warning says not to do that...
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
call model.generate(max_length=...) and model.generate(stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=100)]), ...)
### Expected behavior
```shell
change the warning,
change the logic behind default StoppingCriteriaList
change how you infer max_length
make sure users are aware of max_length=model.configs.max_length when max_length is None
```
| 06-15-2022 16:14:15 | 06-15-2022 16:14:15 | Hey @nitaytech 👋 The behavior surrounding `max_length` can't be changed, as it would not be backwards compatible. But we also don't like its behavior for the reasons you described, hence the plans to deprecate (and the warning).
As per [this recent discussion](https://github.com/huggingface/transformers/issues/17414#issuecomment-1148836312), we have decided to give preference to the `max_new_tokens` argument, as it is clearer for all types of models. We are working on updating warnings and documentations to make it clear it is the correct way to control the maximum length of the generated text :)
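For reference, a minimal sketch of that usage (the checkpoint below is only for illustration):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
# max_new_tokens only bounds the newly generated tokens, so there is no clash with
# config.max_length and no need to build a StoppingCriteriaList by hand
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```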
Meanwhile, can you confirm that `max_new_tokens` works properly in your case? <|||||>@nitaytech [this comment](https://github.com/huggingface/transformers/pull/17196#issuecomment-1159147143) might also be relevant to your pain points.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>related: #18018<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./kirah/fcv_s2t", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=7,
gradient_checkpointing=True,
fp16=False,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
max_new_tokens=1000, # <======== Before max_length=1000,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
__init__() got an unexpected keyword argument 'max_new_tokens'
<|||||>Hey @jhoanmartinez -- Seq2SeqTrainingArguments doesn't have `max_new_tokens` yet |
transformers | 17,717 | closed | Revert "Change push CI to run on workflow_run event" | Reverts huggingface/transformers#17692
Really sorry, but the `notification_service.py` script has an error:
```
Traceback (most recent call last):
File "utils/notification_service.py", line 766, in <module>
ci_author = ci_details["author"]["login"]
KeyError: 'author'
```
as the GH event is no longer coupled with a commit, and we lose some information about commit/author.
I need to change more things.
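One defensive pattern (just a sketch, not the actual fix) for the missing `author` field:
```python
def get_ci_author(ci_details: dict) -> str:
    # fall back gracefully when the workflow_run payload carries no commit/author info
    return (ci_details.get("author") or {}).get("login", "unknown")

print(get_ci_author({}))  # -> "unknown"
```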
(On my own repo for testing, there is no `notification_service.py`. And this change of `workflow_run` can only be verified when being merged to `main`) | 06-15-2022 15:51:09 | 06-15-2022 15:51:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>merged to avoid further test failures |
transformers | 17,716 | closed | Prepare transformers for v0.8.0 huggingface-hub release | Updates the staging endpoint to use `hub-ci` instead of `moon-staging`.
This should be merged only once v0.8.0 is released. | 06-15-2022 15:31:47 | 06-15-2022 15:31:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,715 | closed | Make datasets<=2.2.2 for a quick fix for test failures | # What does this PR do?
Currently, we have a lot of test failures (scheduled CI) with
```
FileNotFoundError: Unable to find '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json' at /transformers
```
which is caused by the release of `datasets 2.3.x`.
This PR changes to "datasets<=2.2.2" temporarily to avoid the test failures. | 06-15-2022 15:15:21 | 06-15-2022 15:15:21 | A full error message is
```
2022-06-15T03:19:39.4149310Z =================================== FAILURES ===================================
2022-06-15T03:19:39.4149596Z _______________________ TestTrainerExt.test_run_seq2seq ________________________
2022-06-15T03:19:39.4149773Z
2022-06-15T03:19:39.4150008Z self = <test_trainer_ext.TestTrainerExt testMethod=test_run_seq2seq>
2022-06-15T03:19:39.4150218Z
2022-06-15T03:19:39.4150293Z @slow
2022-06-15T03:19:39.4150745Z def test_run_seq2seq(self):
2022-06-15T03:19:39.4151132Z > output_dir = self.run_trainer(
2022-06-15T03:19:39.4151468Z eval_steps=2,
2022-06-15T03:19:39.4151820Z max_len=128,
2022-06-15T03:19:39.4152199Z model_name=MARIAN_MODEL,
2022-06-15T03:19:39.4153137Z learning_rate=3e-4,
2022-06-15T03:19:39.4153380Z num_train_epochs=10,
2022-06-15T03:19:39.4154587Z distributed=False,
2022-06-15T03:19:39.4155199Z )
2022-06-15T03:19:39.4155329Z
2022-06-15T03:19:39.4155456Z tests/extended/test_trainer_ext.py:180:
2022-06-15T03:19:39.4155723Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-06-15T03:19:39.4156015Z tests/extended/test_trainer_ext.py:375: in run_trainer
2022-06-15T03:19:39.4156267Z main()
2022-06-15T03:19:39.4156535Z examples/pytorch/translation/run_translation.py:346: in main
2022-06-15T03:19:39.4156923Z raw_datasets = load_dataset(
2022-06-15T03:19:39.4157383Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1656: in load_dataset
2022-06-15T03:19:39.4157724Z builder_instance = load_dataset_builder(
2022-06-15T03:19:39.4158171Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1439: in load_dataset_builder
2022-06-15T03:19:39.4158514Z dataset_module = dataset_module_factory(
2022-06-15T03:19:39.4159311Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1097: in dataset_module_factory
2022-06-15T03:19:39.4159770Z return PackagedDatasetModuleFactory(
2022-06-15T03:19:39.4160536Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:743: in get_module
2022-06-15T03:19:39.4161066Z data_files = DataFilesDict.from_local_or_remote(
2022-06-15T03:19:39.4161777Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:588: in from_local_or_remote
2022-06-15T03:19:39.4162298Z DataFilesList.from_local_or_remote(
2022-06-15T03:19:39.4163081Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:556: in from_local_or_remote
2022-06-15T03:19:39.4163523Z data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
2022-06-15T03:19:39.4164078Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:194: in resolve_patterns_locally_or_by_urls
2022-06-15T03:19:39.4164508Z for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
2022-06-15T03:19:39.4164839Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-06-15T03:19:39.4164997Z
2022-06-15T03:19:39.4165147Z base_path = '/transformers'
2022-06-15T03:19:39.4165579Z pattern = '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json'
2022-06-15T03:19:39.4165899Z allowed_extensions = None
2022-06-15T03:19:39.4166043Z
2022-06-15T03:19:39.4166160Z def _resolve_single_pattern_locally(
2022-06-15T03:19:39.4166478Z base_path: str, pattern: str, allowed_extensions: Optional[List[str]] = None
2022-06-15T03:19:39.4166811Z ) -> List[Path]:
2022-06-15T03:19:39.4167017Z """
2022-06-15T03:19:39.4167301Z Return the absolute paths to all the files that match the given patterns.
2022-06-15T03:19:39.4167622Z It also supports absolute paths in patterns.
2022-06-15T03:19:39.4167923Z If an URL is passed, it is returned as is.
2022-06-15T03:19:39.4168165Z """
2022-06-15T03:19:39.4168412Z pattern = os.path.join(base_path, pattern)
2022-06-15T03:19:39.4168678Z data_files_ignore = FILES_TO_IGNORE
2022-06-15T03:19:39.4168931Z fs = LocalFileSystem()
2022-06-15T03:19:39.4169275Z glob_iter = [PurePath(filepath) for filepath in fs.glob(pattern) if fs.isfile(filepath)]
2022-06-15T03:19:39.4169591Z matched_paths = [
2022-06-15T03:19:39.4169981Z Path(filepath).resolve()
2022-06-15T03:19:39.4170218Z for filepath in glob_iter
2022-06-15T03:19:39.4170568Z if filepath.name not in data_files_ignore and not any(part.startswith((".", "__")) for part in filepath.parts)
2022-06-15T03:19:39.4170869Z ]
2022-06-15T03:19:39.4171068Z if allowed_extensions is not None:
2022-06-15T03:19:39.4171286Z out = [
2022-06-15T03:19:39.4171477Z filepath
2022-06-15T03:19:39.4171706Z for filepath in matched_paths
2022-06-15T03:19:39.4172173Z if any(suffix[1:] in allowed_extensions for suffix in filepath.suffixes)
2022-06-15T03:19:39.4172436Z ]
2022-06-15T03:19:39.4172649Z if len(out) < len(matched_paths):
2022-06-15T03:19:39.4173032Z invalid_matched_files = list(set(matched_paths) - set(out))
2022-06-15T03:19:39.4173296Z logger.info(
2022-06-15T03:19:39.4173809Z f"Some files matched the pattern '{pattern}' at {Path(base_path).resolve()} but don't have valid data file extensions: {invalid_matched_files}"
2022-06-15T03:19:39.4174147Z )
2022-06-15T03:19:39.4174332Z else:
2022-06-15T03:19:39.4174527Z out = matched_paths
2022-06-15T03:19:39.4174772Z if not out and not contains_wildcards(pattern):
2022-06-15T03:19:39.4175185Z error_msg = f"Unable to find '{pattern}' at {Path(base_path).resolve()}"
2022-06-15T03:19:39.4175489Z if allowed_extensions is not None:
2022-06-15T03:19:39.4175807Z error_msg += f" with any supported extension {list(allowed_extensions)}"
2022-06-15T03:19:39.4176181Z > raise FileNotFoundError(error_msg)
2022-06-15T03:19:39.4176703Z E FileNotFoundError: Unable to find '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json' at /transformers
2022-06-15T03:19:39.4176964Z
2022-06-15T03:19:39.4177238Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:144: FileNotFoundError
```<|||||>This shouldn't be merged before the release branch is cut, to avoid the pin being in the release.<|||||>@lhoestq
Are you already aware of this issue (regarding `load_dataset`)? Otherwise I can try to make a simple reproducible example. Thank you :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Do you think it is worth changing Dockerfile (for testing) to install datasets 2.2.2 for now? And discard this PR maybe ?<|||||>superseded by https://github.com/huggingface/transformers/pull/17721
and https://github.com/huggingface/datasets/pull/4505 |
transformers | 17,714 | closed | SegFormer feature extractor `do_normalize=False` | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
# [...]
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/mit-b1',
do_resize=False,
do_normalize=False)
# [...]
# call the extractor
# minimal example
img = Image.fromarray(np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8))
msk = Image.fromarray(np.random.randint(1, 10, (100, 100), dtype=np.uint8))
feature_extractor(images=img, segmentation_maps=msk, return_tensors="pt")
```
This will raise:
```pythonoutput
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in convert_to_tensors(self, tensor_type)
167 if not is_tensor(value):
--> 168 tensor = as_tensor(value)
169
4 frames
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in as_tensor(value)
149 value = np.array(value)
--> 150 return torch.tensor(value)
151
RuntimeError: Could not infer dtype of Image
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-70-9cd4a6ca6f4b> in <module>()
2 msk = Image.fromarray(np.array((100,100,1), dtype=np.uint8))
3
----> 4 feature_extractor(images=img, segmentation_maps=msk, return_tensors="pt")
/usr/local/lib/python3.7/dist-packages/transformers/models/segformer/feature_extraction_segformer.py in __call__(self, images, segmentation_maps, return_tensors, **kwargs)
208 data["labels"] = labels
209
--> 210 encoded_inputs = BatchFeature(data=data, tensor_type=return_tensors)
211
212 return encoded_inputs
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in __init__(self, data, tensor_type)
77 def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None):
78 super().__init__(data)
---> 79 self.convert_to_tensors(tensor_type=tensor_type)
80
81 def __getitem__(self, item: str) -> Union[Any]:
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in convert_to_tensors(self, tensor_type)
173 raise ValueError("Unable to create tensor returning overflowing values of different lengths. ")
174 raise ValueError(
--> 175 "Unable to create tensor, you should probably activate padding "
176 "with 'padding=True' to have batched tensors with the same length."
177 )
ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.
```
### Expected behavior
I was expecting that no error occurs in the conversion to tensor if I don't perform the normalization.
This example is for the SegFormer model, but I think DETR has the same issue (#16715).
A workaround is to use:
```python
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/mit-b1',
do_resize=False,
do_normalize=True,
image_mean= [0., 0., 0.],
                                                             image_std = [1., 1., 1.])
```
I don't know if `convert_to_tensors` is to work with `PIL.Image`, maybe just need to add this conversion as a default step in the extractor:
```python
images = [self.to_numpy_array(image) if isinstance(image, Image.Image) else image for image in images]
```
| 06-15-2022 13:32:38 | 06-15-2022 13:32:38 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This hasn't been fixed yet <|||||>cc @alaradirik @amyeroberts as well :)<|||||>Hi @johnnv1,
thanks for reporting, we're aware of this issue with feature extractors (see #15055 for a detailed description) and are planning to take it into account when updating the preprocessing pipeline for our vision models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,713 | closed | TF Sharded | # What does this PR do?
Introduces the sharding of TF models, following the PyTorch implementation.
A simple working example is the following:
```python
from transformers import TFOPTModel

save_directory = "opt-350m"
model = TFOPTModel.from_pretrained("facebook/opt-350m")
# any checkpoint larger than `max_shard_size` is split into several files plus an index file
model.save_pretrained(save_directory, max_shard_size="1GB")
# `from_pretrained` detects the index file and transparently reloads all the shards
tf_model = TFOPTModel.from_pretrained(save_directory)
```
| 06-15-2022 13:20:43 | 06-15-2022 13:20:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Okay so the tfopt_for_causal_lm/tfopt_model
prefix from the tfopt_for_causal_lm/model/decoder/embed_positions/weight:0 in the index json comes from the actual name of the layer (so tf side). This also creates the hack that we sometime need when some layer is shared : for OPT we have the following : 'decoder.embed_tokens/model.decoder.embed_tokens/weight:0' which thus becomes model.decoder.embed_tokens/weight:0 . Most interesting part is that the ‘decoder.embed_tokens’ comes from https://github.com/ArthurZucker/transformers/blob/e950ff48a91840e30966abaf86bdb02dc16fcdab/src/transformers/models/opt/modeling_tf_opt.py#L499-L511 (the load weight prefix hack using load_weight_prefix) I am sure that there is something to do about that so I will detail that and dig a bit further<|||||>Looks very nice to me!
Only did a very high-level review. Defering to @gante and @sgugger here :-) |
transformers | 17,712 | closed | Fix Automatic Download of Pretrained Weights in DETR | # What does this PR do?
Fixes #15764 | 06-15-2022 13:00:04 | 06-15-2022 13:00:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge <|||||>@NielsRogge Any Update here? |
transformers | 17,711 | closed | Make attention_mask axes dynamic when exporting onnx | # What does this PR do?
Make `attention_mask` axes dynamic when exporting onnx
## Who can review?
[@fatcat-z](https://github.com/fatcat-z)
| 06-15-2022 11:29:20 | 06-15-2022 11:29:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17711). All of your documentation changes will be reflected on that endpoint.<|||||>cc @michaelbenayoun @lewtun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @unbuilt just checking if you were able to test that the script runs correctly with your change and the default settings? If yes, this looks good to merge IMO :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,710 | closed | [ViTMAE] Fix docstrings and variable names | # What does this PR do?
Fixes #17473
Fixes #17665
This PR improves the docstrings and variable names of the patchify, unpatchify and forward_loss methods of ViTMAE. This way, also the number of channels isn't hardcoded anymore.
cc @sayakpaul | 06-15-2022 10:24:09 | 06-15-2022 10:24:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,709 | closed | [Wav2Vec2Conformer] Official release | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add link to paper and improve readme
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-15-2022 08:14:09 | 06-15-2022 08:14:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Wait, how was the repo-consistency check passing without this? It should have given a big red cross.<|||||>> Wait, how was the repo-consistency check passing without this? It should have given a big red cross.
What is the problem here exactly? <|||||>There is a check in the CI that shouldn't let models be present without being in the README, I was wondering why it was not failing but found the reason. `Wav2Vec2-Conformer` is whitelisted for this test. Could you remove it from `MODELS_NOT_IN_README` in `utils/check_copies.py` in this PR?<|||||>I see - will do! |
transformers | 17,708 | closed | Fix duplicate code at T5Model | # What does this PR do?
Unlike `T5ForConditionalGeneration`, it doesn't seem to be necessary at `T5Model`.
@patrickvonplaten
| 06-15-2022 06:28:33 | 06-15-2022 06:28:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @lkm2835,
could you give a bit more feedback on why this is not necessary? I don't exactly see why this is necessary at the moment.<|||||>Oh, sorry @patrickvonplaten
1411-1412 and 1414-1415 are the same code.
```
1411 if self.model_parallel:
1412 torch.cuda.set_device(self.decoder.first_device)
1413 # Set device for model parallelism
1414 if self.model_parallel:
1415 torch.cuda.set_device(self.decoder.first_device)
1416 hidden_states = hidden_states.to(self.decoder.first_device)
...
```
https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1411
<|||||>In T5ForConditionalGeneration,
```
1619 if self.model_parallel:
1620 torch.cuda.set_device(self.decoder.first_device)
1622 if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
1623 # get decoder inputs from shifting lm labels to the right
1624 decoder_input_ids = self._shift_right(labels)
1626 # Set device for model parallelism
1627 if self.model_parallel:
1628 torch.cuda.set_device(self.decoder.first_device)
1629 hidden_states = hidden_states.to(self.decoder.first_device)
```
1619-1620 and 1627-1628 are the same code. But if 1622-1624 needs `set_device`, then 1619-1620 is necessary.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1619 |
transformers | 17,707 | closed | DensePhrase: StopIteration: Caught StopIteration in replica 0 on device 0. | ### System Info
```shell
densephrase == 1.0
- `transformers` version: 2.9.0
- Platform: Linux-4.14.0_1-0-0-39-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.13
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
I installed DensePhrases exactly as described in its README, [here](https://github.com/princeton-nlp/DensePhrases),
and when I run `make draft MODEL_NAME=test`, it raises the error as follows.

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
after installed densephrase(https://github.com/princeton-nlp/DensePhrases), just run:
make draft MODEL_NAME=test
### Expected behavior
```shell
no error
```
| 06-15-2022 06:28:22 | 06-15-2022 06:28:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,706 | closed | QuestionAnsweringPipeline returns full context in Japanese | ### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-5.10.0-13-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.1.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
```
### Who can help?
@Narsil @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
QuestionAnsweringPipeline (almost always) returns full `context` in Japanese, for example:
```py
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, QuestionAnsweringPipeline
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model = AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
qap = QuestionAnsweringPipeline(tokenizer=tokenizer, model=model)
print(qap(question="国語", context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
returns `{'score': 0.9999955892562866, 'start': 0, 'end': 30, 'answer': '全学年にわたって小学校の国語の教科書に挿し絵が用いられている'}`. On the other hand, directly with `torch.argmax`
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model = AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
question = "国語"
context = "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs = tokenizer(question,context, return_tensors="pt", return_offsets_mapping=True)
offsets = inputs.pop("offset_mapping").tolist()[0]
outputs = model(**inputs)
start, end = torch.argmax(outputs.start_logits), torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
the model returns the answer "教科書" correctly.
### Expected behavior
```shell
Return the right answer "教科書" instead of full context.
```
| 06-15-2022 04:37:25 | 06-15-2022 04:37:25 | I suspect that "encoding" in Japanese models do not work at https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L452
but I'm vague how to fix.<|||||>Hi @KoichiYasuoka 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗
(Since the issue is about the quality of the output, it's probably model-related, and not a bug per se. In any case, if you suspect it is due to a bug in `transformers`, please add more information here)<|||||>Hi @KoichiYasuoka ,
This seems to be linked to the pipeline's attempt to align on "words". The problem is that this Japanese tokenizer never cuts on "words", so the whole context is a single word and the realignment just forgets all about the actual answer, which is a bit sad.
I created a PR to include a new parameter to disable this so it can work on your use case (I personally think it should be the default but we cannot change this because of backward compatibility)
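Once that PR lands, the call would look roughly like this (just a sketch):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
print(
    qa(
        question="国語",
        context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている",
        align_to_words=False,  # skip the word-level realignment that collapses the span here
    )
)
```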
<|||||>Thank you @Narsil for creating new PR with `align_to_words=False` option. Well, can I use the option in the `widget` of [deberta-base-japanese-aozora-ud-head](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora-ud-head) page?<|||||>Hi, the PR is not merged yet, and it will take a few days before it lands on the API (API doesn't run master).
Afterwards, while being undocumented and thus maybe deactivated at anytime (though we rarely do this), you could send `align_to_words: false` within the `parameters` part of your query to the API.
Unfortunately the widget itself will not use parameters.
Does that answer your question ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,705 | closed | mBART generate random strings in end of sentence | ### System Info
```shell
transformers==4.19.2
```
### Who can help?
@patil-suraj@patrickvonplaten, @Narsil, @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Since this happens in my personal project and the code is too custom, I just paste the results and the `generate` call. As shown below, sometimes mBART doesn't stop generating the sentence properly and instead produces some strange tokens.
```
if "25" in self.hparams.model_name_or_path:
generated_tokens = self.model.generate(input_ids=batch["input_ids"],
attention_mask=batch["attention_mask"],
decoder_start_token_id=self.tokenizer.lang_code_to_id[self.hparams.tgt_lang],
do_sample=True,
temperature=self.hparams.temperature,
num_return_sequences=self.hparams.n_hypothesis,
max_length=MAX_LENGTH)
else:
generated_tokens = self.model.generate(input_ids=batch["input_ids"],
attention_mask=batch["attention_mask"],
forced_bos_token_id=self.tokenizer.lang_code_to_id[self.hparams.tgt_lang],
do_sample=True,
temperature=self.hparams.temperature,
num_return_sequences=self.hparams.n_hypothesis,
max_length=MAX_LENGTH)
```
<img width="867" alt="屏幕快照 2022-06-15 上午10 40 25" src="https://user-images.githubusercontent.com/38279341/173725168-c867fc87-6f0f-4205-a76e-bf70b5a946d6.png">
<img width="412" alt="屏幕快照 2022-06-15 上午10 44 52" src="https://user-images.githubusercontent.com/38279341/173725722-271fb4d4-ccdc-47b0-be5c-31296c596fb2.png">
### Expected behavior
```shell
mBART all ways generate valid sentence.
```
| 06-15-2022 02:47:04 | 06-15-2022 02:47:04 | |
transformers | 17,704 | closed | Cannot run run_qa.py due to "ImportError: cannot import name 'send_example_telemetry'" | ### System Info
```shell
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-180-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to run the [run_qa.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py) using the corresponding command provided in the instructions. However, I face the following issue:
Traceback (most recent call last):
File "run_qa.py", line 45, in <module>
from transformers.utils import check_min_version, send_example_telemetry
ImportError: cannot import name 'send_example_telemetry'
### Expected behavior
```shell
I would expect to see the following values, as mentioned in the instructions.
f1 = 88.52
exact_match = 81.22
```
| 06-14-2022 17:11:15 | 06-14-2022 17:11:15 | Hi, did you solve this problem, I have the same problem now<|||||>@maria364
Hi, did you solve this problem, I have the same problem now TAT<|||||>Hi, I am experiencing the same problem when trying to run "run_mlm.py".<|||||>Hi, @maria364 @xueqianyi @DidiDerDenker Could you try the latest version of `transformers`? This should fix the issue I believe.
<|||||>>
Thanks so much!And that makes sense:
`pip install git+https://github.com/huggingface/transformers` |
transformers | 17,703 | open | Add Flax implementation for BLOOM | ### Model description
I'm interested in adding an implementation of BLOOM in Flax.
The implementation shouldn't be too bad since the pytorch implementation can serve as a guide and a way to check correctness.
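For context, a rough sketch of the usual PyTorch-vs-Flax equivalence check — the `FlaxBloomModel` class and the checkpoint name below assume the port is finished; neither exists in the library at the time of this issue:
```python
import numpy as np
import torch

from transformers import BloomModel, FlaxBloomModel

ckpt = "bigscience/bloom-560m"  # any BLOOM checkpoint with PyTorch weights
pt_model = BloomModel.from_pretrained(ckpt)
fx_model = FlaxBloomModel.from_pretrained(ckpt, from_pt=True)  # convert the PT weights on the fly

input_ids = np.array([[1, 2, 3, 4, 5]], dtype="i4")
with torch.no_grad():
    pt_hidden = pt_model(torch.tensor(input_ids)).last_hidden_state.numpy()
fx_hidden = np.asarray(fx_model(input_ids).last_hidden_state)

# loose tolerance, as in the existing PT/Flax cross-framework tests
assert np.allclose(pt_hidden, fx_hidden, atol=4e-2)
```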
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
@younesbelkada @stas00 @patrickvonplaten
If someone is already planning to work on this then no worries, but if not I will start on this as soon as I have time! | 06-14-2022 15:51:59 | 06-14-2022 15:51:59 | Hi!
Thank you very much for the contribution!
On my side it's a green light since I am not working on it, and it is not on my plans for now. Therefore, I'll be happy to review it! Let us know if you want to work on that :)<|||||>Thanks! I will open a WIP PR soon and tag you there once I do.<|||||>Very cool idea - think this can also be a flagship project where we can showcase how to fine-tune BLOOM with Flax cc @patil-suraj @sanchit-gandhi <|||||>Awesome! Would be very happy to help with it :) <|||||>Great idea! Would also be interested in getting involved, this would be a super cool model addition!<|||||>Thanks everyone for the interest! I'd love to collaborate with you all.
I'm hoping to push a rough draft of modeling code by the end of the weekend (earlier if I have time), and will tag you all when I open the PR with that. Does that sound alright?<|||||>I've opened a PR (and documented the state of the in-progress code I'm still working on) at #17761 ! We can discuss further in that PR how to collaborate / proceed. |
transformers | 17,702 | closed | [LongT5] disable model parallel test | # What does this PR do?
LongT5 doesn't implement the old model parallel logic. This PR disables the model parallel tests for longt5. | 06-14-2022 15:11:26 | 06-14-2022 15:11:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thank you for the quick action, @patil-suraj ❤️
>
> Just to know: we no longer add `parallelize` to new models, right, like what @patrickvonplaten said it's outdated?
Yes, because now any model can be parallelized using the sharded checkpoint and accelerate utils that Sylvain added. cf https://github.com/huggingface/transformers/pull/17341 |
transformers | 17,701 | closed | Add a TF in-graph tokenizer for BERT | Hey all! I've done some testing and the in-graph BERT tokenizer is now yielding the same outputs as our tokenizers, even for multi-part texts where we need to concatenate and get `token_type_ids` right. There's still several things left to do before this is ready, but I figured now is the time to lay it out and get some feedback!
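Roughly the kind of usage this targets — a sketch only, since the exact class name and options are still open questions in the list below:
```python
import tensorflow as tf

from transformers import TFBertTokenizer  # requires tensorflow_text

tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
batch = tf.constant(["hello world", "in-graph tokenization runs inside the TF graph"])
outputs = tokenizer(batch)  # dict of tf.Tensors: input_ids, token_type_ids, attention_mask
print(outputs["input_ids"].shape)
```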
Left to do:
- [x] Add input normalization
- [x] Is texts_a / texts_b the right way to handle inputs? Should it just be a multidimensional tensor?
- [x] Should this be a complete class rather than reading attributes from an existing tokenizer?
- [x] Add imports and maybe some kind of AutoModel to make this findable by users
- [x] Add dependency for tensorflow_text and import checks
- [x] Do we need to change the name? Most TF users will still want the normal tokenizers
- [x] Add tests, particularly one with a full model and one with saving to savedmodel, as these are main use cases
- [x] Currently always pads to max length - this should be an option
- [x] Add documentation
- [ ] Consider adding docs on how to add other TF tokenizers, so we can see if users want to add them once we have a framework in place? | 06-14-2022 14:16:31 | 06-14-2022 14:16:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @gante I'm happy with this PR now, so I'm ready for a final review! Also, the tests are slightly slow (~1min for the whole test suite on a single core). Should I mark some of them as `@slow`, or will they only be run nightly and when a PR affects the BERT directory anyway? |
transformers | 17,700 | closed | [LongT5] Rename checkpoints | # What does this PR do?
LongT5 checkpoint names didn't follow the "standard" Transformers naming. They were already changed on the Hub and need to be changed in Transformers as well. cc @patil-suraj
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-14-2022 11:20:27 | 06-14-2022 11:20:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten Thanks for fixing this! I realized later it's not ideal to use any capital letters. |
transformers | 17,699 | closed | Update-longt5 | # What does this PR do?
Fix checkpoint names in LongT5.
cc @stancld | 06-14-2022 11:14:18 | 06-14-2022 11:14:18 | |
transformers | 17,698 | closed | Italian/accelerate | # What does this PR do?
Italian translation of accelerate.mdx
See issue: https://github.com/huggingface/transformers/issues/17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@omarespejel
@sgugger | 06-14-2022 11:11:50 | 06-14-2022 11:11:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,697 | closed | Need the ability to modify PSM values for Tesseract call in LayoutLM V2/ XLM / V3 Processor | ### Feature request
There exists a need to modify PSM values while calling Tesseract during feature extraction stage with OCR enabled.
https://github.com/huggingface/transformers/blob/31ee80d55673f32c0f5d50936f371e661b74b21a/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py#L53
### Motivation
Changing the Page Segmentation Mode (PSM) value has a significant impact on the output of all LayoutLM models, depending on the type/formatting of the input document. The default PSM value is 3, which is not optimal in every situation. It would be helpful if users could modify the PSM value based on the document type. [PSM Reference](https://stackoverflow.com/questions/44619077/pytesseract-ocr-multiple-config-options)
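For reference, a sketch of how the PSM value is passed to Tesseract through pytesseract's config string (the file name is a placeholder):
```python
import pytesseract
from PIL import Image

image = Image.open("page.png").convert("RGB")  # placeholder input file
# "--psm 3" (the default) runs fully automatic page segmentation;
# "--psm 6" assumes a single uniform block of text — often better for dense documents
data = pytesseract.image_to_data(image, lang="eng", config="--psm 6", output_type=pytesseract.Output.DICT)
words = [w for w in data["text"] if w.strip()]
print(words[:10])
```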
### Your contribution
I've already created a PR in a branch (https://github.com/huggingface/transformers/pull/17005) for LMV2/XLM and currently using it for my own projects, but it would be better if the official repo have it so there is no need to keep maintaining/updating my own fork. Hoping to see this feature added to LayoutLMV3! | 06-14-2022 07:04:29 | 06-14-2022 07:04:29 | cc @NielsRogge <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,696 | closed | TypeError: can't pickle _thread.lock objects | ### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
[Google Colab]
```
### Who can help?
@amogkam
### Information
- [ ] My own modified scripts
- [ ] The official example scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Note: this is related to [this closed issue](https://github.com/huggingface/transformers/issues/11249).
This is the code I'm using:
```
args = TrainingArguments(
f"{model_name}-hyperp-{task}",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=3,
weight_decay=0.01,
skip_memory_metrics=True, # https://github.com/huggingface/transformers/issues/12177 [picking error]
)
trainer = Trainer(
model_init=model_init, # function to initialize model (using 'from_pretrained')
args=args,
train_dataset = tokenized_datasets["train"],
eval_dataset = tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.hyperparameter_search(
hp_space=lambda _: tune_config,
backend="ray",
n_trials=10,
resources_per_trial={"cpu": 1, "gpu": 0},
scheduler=scheduler,
keep_checkpoints_num=1,
checkpoint_score_attr="training_iteration",
progress_reporter=reporter,
local_dir="/ray_results/",
name="tune_transformer_pbt",
log_to_file=True,
)
```
Error:
```
TypeError Traceback (most recent call last)
[<ipython-input-50-3716493001d6>](https://localhost:8080/#) in <module>()
33 local_dir="/ray_results/",
34 name="tune_transformer_pbt",
---> 35 log_to_file=True,
36 )
37
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
2083 HPSearchBackend.WANDB: run_hp_search_wandb,
2084 }
-> 2085 best_run = backend_dict[backend](self, n_trials, direction, **kwargs)
2086
2087 self.hp_search_backend = None
[/usr/local/lib/python3.7/dist-packages/transformers/integrations.py](https://localhost:8080/#) in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
296 config=trainer.hp_space(None),
297 num_samples=n_trials,
--> 298 **kwargs,
299 )
300 best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3])
[/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, max_concurrent_trials, _experiment_checkpoint_dir, loggers, _remote)
363
364 if not trial_executor or isinstance(trial_executor, RayTrialExecutor):
--> 365 _ray_auto_init()
366
367 if _remote:
[/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in _ray_auto_init()
876 "call `ray.init(...)` before `tune.run`."
877 )
--> 878 ray.init()
[/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
103 if func.__name__ != "init" or is_client_mode_enabled_by_default:
104 return getattr(ray, func.__name__)(*args, **kwargs)
--> 105 return func(*args, **kwargs)
106
107 return wrapper
[/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in init(address, num_cpus, num_gpus, resources, object_store_memory, local_mode, ignore_reinit_error, include_dashboard, dashboard_host, dashboard_port, job_config, configure_logging, logging_level, logging_format, log_to_driver, namespace, runtime_env, storage, _enable_object_reconstruction, _redis_max_memory, _plasma_directory, _node_ip_address, _driver_object_store_memory, _memory, _redis_password, _temp_dir, _metrics_export_port, _system_config, _tracing_startup_hook, _node_name, **kwargs)
1120
1121 for hook in _post_init_hooks:
-> 1122 hook()
1123
1124 node_id = global_worker.core_worker.get_current_node_id()
[/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py](https://localhost:8080/#) in flush(self)
230 self.references[k] = v
231 else:
--> 232 self.references[k] = ray.put(v)
233 self.to_flush.clear()
[/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
103 if func.__name__ != "init" or is_client_mode_enabled_by_default:
104 return getattr(ray, func.__name__)(*args, **kwargs)
--> 105 return func(*args, **kwargs)
106
107 return wrapper
[/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in put(value, _owner)
1892 with profiling.profile("ray.put"):
1893 try:
-> 1894 object_ref = worker.put_object(value, owner_address=serialize_owner_address)
1895 except ObjectStoreFullError:
1896 logger.info(
[/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in put_object(self, value, object_ref, owner_address)
305 ), "Local Mode does not support inserting with an ObjectRef"
306
--> 307 serialized_value = self.get_serialization_context().serialize(value)
308 # This *must* be the first place that we construct this python
309 # ObjectRef because an entry with 0 local references is created when
[/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in serialize(self, value)
419 return RawSerializedObject(value)
420 else:
--> 421 return self._serialize_to_msgpack(value)
[/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_msgpack(self, value)
398 metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON
399 pickle5_serialized_object = self._serialize_to_pickle5(
--> 400 metadata, python_objects
401 )
402 else:
[/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value)
359 except Exception as e:
360 self.get_and_clear_contained_object_refs()
--> 361 raise e
362 finally:
363 self.set_out_of_band_serialization()
[/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value)
355 self.set_in_band_serialization()
356 inband = pickle.dumps(
--> 357 value, protocol=5, buffer_callback=writer.buffer_callback
358 )
359 except Exception as e:
[/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dumps(obj, protocol, buffer_callback)
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
75
[/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dump(self, obj)
618 def dump(self, obj):
619 try:
--> 620 return Pickler.dump(self, obj)
621 except RuntimeError as e:
622 if "recursion" in e.args[0]:
TypeError: can't pickle _thread.lock objects
```
I've already added `skip_memory_metrics=True` in the `TrainingArguments`.
### Expected behavior
```shell
Expecting to not give this error.
```
| 06-14-2022 07:04:17 | 06-14-2022 07:04:17 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello! Facing the same issue with the following system info:
```
datasets==2.5.1
huggingface-hub==0.10.0
multidict==6.0.2
multiprocess==0.70.13
numpy==1.23.3
tokenizers==0.12.1
torch==1.9.0+cu111
torchaudio==0.9.0
tqdm==4.64.1
transformers==4.22.2
```
This issue is only triggered when I keep load_best_model_at_end as True (I am not doing any hyperparameter search): Training code and stack trace are:
### Training Code with Trigger
```
training_args = TrainingArguments(
output_dir=f'../asr/models_src_raw/{args.lang}',
overwrite_output_dir = True,
group_by_length=True,
per_device_train_batch_size=16,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=80,
gradient_checkpointing=True,
fp16=True,
save_steps=10,
eval_steps=10,
logging_steps=10,
learning_rate=3e-4,
warmup_steps=300,
save_total_limit=1,
load_best_model_at_end = True,
metric_for_best_model = wer_metric,
skip_memory_metrics = True
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train,
eval_dataset =test,
tokenizer=processor.feature_extractor,
)
trainer.train()
```
(If I remove the load_best_model_at_end, Works smoothly) <|||||>Hi, with transformers 4.26.1 on Sage maker I am still having this error: TypeError: cannot pickle '_thread.lock' object.
def hp_space(trial):
return {
"learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
"num_train_epochs": trial.suggest_int("num_train_epochs", 1, 10),
"seed": trial.suggest_int("seed", 1, 40),
"per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64]),
"weight_decay": trial.suggest_float("weight_decay", 1e-3, 1e-1, log=True),
}
best_run = trainer.hyperparameter_search(n_trials=20, direction="minimize", hp_space=hp_space) |
transformers | 17,695 | closed | Difference in the number of data during deep learning | ### System Info
```shell
Python 3
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Activation
input = pd.read_csv('D:\\puri\\capture\\3_1\\all\\inlet.csv', sep='\t', skiprows=0)
output = pd.read_csv('D:\\puri\\capture\\3_1\\all\\output.csv', sep='\t', skiprows=0)
model = Sequential([
Dense(32, input_shape=(784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),
])
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation('relu'))
### Expected behavior
```shell
hi i just started studying machine learning
I followed the machine learning example, but I can't apply it, so I'm asking here
I want to know the correlation between input data and output data,
but the number of input data is 500(500 x 7) and the number of output data is 1.8 million(1.8M x 4). In this case, what model should I study?
(inlet variables : 7, output variables : 4)
And i have 5 input-output cases, is that enough to find the correlation?
Thanks for reading let me know if I need to find another way
```
| 06-13-2022 18:27:28 | 06-13-2022 18:27:28 | Hi @rurujisu 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,694 | closed | In run_mlm.py the group_texts function incorrectly splits the text into lists of chars | ### System Info
```shell
Testing the code for the run_mlm.py - https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py to pull it into a separate notebook, I came across an issue with how the group_texts function is being used. Right now it correctly concatenates the input_ids and other lists, but it also operates on the text column (as that is one of the keys in the examples dict()) and incorrectly slices those strings as arrays, based on the token sequence length, which does not apply to the text string length. Further more it produces a list of lists of chars, not a list of strings. I am concerned this may cause issues for the subsequent DataCollatorForLanguageModeling or any downstream classes that rely on the original input text. If this text were tokenized, the function would work but in current code it appears to retain the original text strings.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` python
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= max_seq_length:
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [ t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=N_CPU,
load_from_cache_file=True,
desc=f"Grouping texts in chunks of {max_seq_length}",
)
```
### Expected behavior
```shell
For it to group and concatenate the tokenized_data correctly without messing up the text field.
```
| 06-13-2022 17:23:37 | 06-13-2022 17:23:37 | opened in accident, you are deleting that column |
transformers | 17,693 | closed | Swin main layer | # What does this PR do?
Refactor the Swin model to have a `MainLayer` which is called by all models to get the Swin outputs (pre-head).
c.f. [relevant comment](https://github.com/huggingface/transformers/pull/17427#discussion_r895278064) from @sayakpaul on ResNet port
The following script was run to check weights could still be successfully loaded into the TF models:
```
from transformers import AutoFeatureExtractor, TFSwinForImageClassification, TFSwinForMaskedImageModeling
checkpoint = "microsoft/swin-tiny-patch4-window7-224"
# relative_position_index isn't updated during training. In TF set as instance param
print("\nTFSwinForImageClassification - from PyTorch checkpoint")
tf_model = TFSwinForImageClassification.from_pretrained(checkpoint, from_pt=True)
print("\nTFSwinForImageClassification - from TF checkpoint")
tf_model = TFSwinForImageClassification.from_pretrained(checkpoint)
# relative_position_index isn't updated during training. In TF set as instance param
# We don't have a masked image modeling checkpoint - use image classification checkpoint
# Some weights will not be used (classifier head)
# Some weights newly initialised (decoder, mask token)
print("\nTFSwinForMaskedImageModeling - from PyTorch checkpoint")
tf_model = TFSwinForMaskedImageModeling.from_pretrained(checkpoint, from_pt=True)
print("\nTFSwinForMaskedImageModeling - from TF checkpoint")
tf_model = TFSwinForMaskedImageModeling.from_pretrained(checkpoint)
```
Produced the outputs:
```
TFSwinForImageClassification - from PyTorch checkpoint
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinForImageClassification: ['swin.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.4.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.3.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.5.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.2.attention.self.relative_position_index']
- This IS expected if you are initializing TFSwinForImageClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFSwinForImageClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFSwinForImageClassification were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training.
TFSwinForImageClassification - from TF checkpoint
All model checkpoint layers were used when initializing TFSwinForImageClassification.
All the layers of TFSwinForImageClassification were initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training.
TFSwinForMaskedImageModeling - from PyTorch checkpoint
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinForMaskedImageModeling: ['classifier.weight', 'swin.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.4.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.3.attention.self.relative_position_index', 'classifier.bias', 'swin.encoder.layers.2.blocks.5.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.2.attention.self.relative_position_index']
- This IS expected if you are initializing TFSwinForMaskedImageModeling from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFSwinForMaskedImageModeling from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
Some weights or buffers of the TF 2.0 model TFSwinForMaskedImageModeling were not initialized from the PyTorch model and are newly initialized: ['swin.embeddings.mask_token', 'decoder.0.weight', 'decoder.0.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
TFSwinForMaskedImageModeling - from TF checkpoint
Some layers from the model checkpoint at microsoft/swin-tiny-patch4-window7-224 were not used when initializing TFSwinForMaskedImageModeling: ['classifier']
- This IS expected if you are initializing TFSwinForMaskedImageModeling from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFSwinForMaskedImageModeling from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFSwinForMaskedImageModeling were not initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224 and are newly initialized: ['decoder', 'swin/embeddings/mask_token:0']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-13-2022 15:57:13 | 06-13-2022 15:57:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,692 | closed | Change push CI to run on workflow_run event | # What does this PR do?
The attempt in #17369 (to make commit history status checks less noisy) unfortunately has no effect.
After a discussion in [this comment](https://github.com/huggingface/transformers/pull/17369#issuecomment-1153846717), this PR changes push CI to be triggered by a `on: workflow_run` event.
Note the change only takes effect once this PR is merged into `main`, as mentioned in the doc. of [workflow_run](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run).
The result would be like in [accelerate](https://github.com/huggingface/accelerate), where the jobs in `on-merge.yml` won't be shown, and the workflow run page look like [this](https://github.com/huggingface/accelerate/actions/workflows/on-merge.yml). | 06-13-2022 15:36:10 | 06-13-2022 15:36:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I merged this PR, you can check on the commit history page
[Change push CI to run on workflow_run event](https://github.com/huggingface/transformers/commits/main)
Hope you ❤️ it!<|||||>Amazing, thanks a lot!<|||||>I am sorry to bother you again ... |
transformers | 17,691 | closed | "comet-ml not installed" error in Trainer (despite comet-ml being installed) | ### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-4.19.0-17-amd64-x86_64-with-glibc2.31
- Python version: 3.9.6
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.4
- JaxLib version: 0.3.2
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@sg
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install comet-ml (in my case comet-ml==3.31.3)
2. Create TrainingArguments with `report-to='comet_ml'
3. Try to instantiate Trainer
This can be reproduced by adding `report_to='comet_ml'` to training arguments in this notebook:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb
Following error happens when creating the Trainer:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_5296/3132099784.py in <module>
----> 1 trainer = Trainer(
2 model,
3 args,
4 train_dataset=encoded_dataset["train"],
5 eval_dataset=encoded_dataset["validation"],
/opt/conda/lib/python3.9/site-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
444 default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to)
445 callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks
--> 446 self.callback_handler = CallbackHandler(
447 callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
448 )
/opt/conda/lib/python3.9/site-packages/transformers/trainer_callback.py in __init__(self, callbacks, model, tokenizer, optimizer, lr_scheduler)
288 self.callbacks = []
289 for cb in callbacks:
--> 290 self.add_callback(cb)
291 self.model = model
292 self.tokenizer = tokenizer
/opt/conda/lib/python3.9/site-packages/transformers/trainer_callback.py in add_callback(self, callback)
305
306 def add_callback(self, callback):
--> 307 cb = callback() if isinstance(callback, type) else callback
308 cb_class = callback if isinstance(callback, type) else callback.__class__
309 if cb_class in [c.__class__ for c in self.callbacks]:
/opt/conda/lib/python3.9/site-packages/transformers/integrations.py in __init__(self)
667 def __init__(self):
668 if not _has_comet:
--> 669 raise RuntimeError("CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.")
670 self._initialized = False
671 self._log_assets = False
RuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.
```
### Expected behavior
```shell
A Trainer is successfully created with cometml callback enabled.
```
| 06-13-2022 15:08:21 | 06-13-2022 15:08:21 | cc @sgugger <|||||>As the error message indicates, you need to have cometml installed to use it `report_to="comet_ml"`
```
RuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.
```
It also tells you exactly which command to run to fix this: `pip install comet-ml`.<|||||>Hey,
The issue here is that error appears despite cometml being installed (with pip).
EDIT: Edited the issue title to make it more clear.
On Mon, Jul 4, 2022, 14:33 Sylvain Gugger ***@***.***> wrote:
> As the error message indicates, you need to have cometml installed to use
> it report_to="comet_ml"
>
> RuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.
>
> It also tells you exactly which command to run to fix this: pip install
> comet-ml.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/17691#issuecomment-1173767326>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AF7MPQSGKFHH4UZWW3JTEWLVSLKYRANCNFSM5YURU4KQ>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>Did you properly initialize it with your API key then?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger How to do it? In [this](https://huggingface.co/docs/transformers/main_classes/callback) doc, there's no mentioning about API key in comet callback. I tried set up COMET_API_KEY, COMET_MODE, COMET_PROJECT_NAME inside function that runs on spawn, but no luck so far. Also downgraded comet-ml till 3.1.17.
`os.environ["COMET_API_KEY"] = "<api-key>"`
`os.environ["COMET_MODE"] = "ONLINE"`
`os.environ["COMET_PROJECT_NAME"] = "<project-name>"`<|||||>Maybe open an issue with them? We did not write this integration with comet-ml and we don't maintain it. It was written by the Comet team :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is still an issue<|||||>This is still an issue, please re-open or address it -- none of the suggested methods of integrating with Comet ML are working for me -- neither the report_to="comet_ml" approach or the manual compute_metrics approach from this tutorial (https://www.comet.com/docs/v2/integrations/ml-frameworks/huggingface/). |
transformers | 17,690 | closed | GPT-2 based models generation breaks when adding new special tokens | ### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: (True)
- Using distributed or parallel set-up in script?: (False)
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The problem occurs when using GPT2 based models with the transformers library, and specifically when using the model.generate() after adding new special tokens, or <pad> tokens.
I have put together a colab for this issue here: https://colab.research.google.com/gist/NtaylorOX/56c3578c1bfe6d6f5ec35ed0641c5e98/hf_gpt2_generate_bug.ipynb.
Steps to reproduce:
1.) Load in libraries and instantiate a GPT2 based model
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed
import os
import torch
import csv
import torch
from torch.utils.data import Dataset
cuda_device = torch.device('cuda:0')
# now set the default gpu to this one
torch.cuda.set_device(cuda_device)
# set model name and load in using transformers automodel/autotokenizer classes
# use smallest gpt2 type model but can use others
MODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
```
2.) Sanity check
```
# test its ability with few easy examples
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
Outputs:
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
['Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Spain. Not just the capital of a country; the capital of Europe....]
3.) Add additional special tokens such as \<pad>
```
# Declare special tokens for padding and separating the context from the slogan:
SPECIAL_TOKENS_DICT = {
'pad_token': '<pad>',
}
# # Add these special tokens to the vocabulary and resize model's embeddings:
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))
# Show the full list of special tokens:
print(tokenizer.special_tokens_map)
```
Outputs: {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '<pad>'}
4.) Now run through the generate process again
```
# run same prompt
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
output: 'Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is pad pad pad'
This <pad>token issue can be fixed by instead setting pad_token_id to eos_token_id via:
```
tokenizer.pad_token = tokenizer.eos_token
```
But with other special tokens the problem persists. Please see the colab notebook for more detailed examples.
### Expected behavior
```shell
The adding of new special tokens and subsequence resizing of the model embeddings should leave a model performing in its original pre-trained state when given known tokens.
For example, this problem does not occur with a similar autoregressive model, "facebook/opt".
MODEL_NAME = "facebook/opt-350m"
# reload model and tokenizer from its original pre-trained state
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Declare special tokens for padding and separating the context from the slogan:
SPECIAL_TOKENS_DICT = {
'additional_special_tokens': ['<context>', '<slogan>']
}
# OPT already has a <pad> token so add other special tokens to the vocabulary and resize model's embeddings:
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))
# run same single prompt as before
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
output: "</s>Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Switzerland. Capital of Italy is: Naples. Capital of France is: Rome. Capital of Spain is: Madrid. "
This output is as it should be - but when using GPT2 based models, something goes wrong.
If this is not a bug, and expected behaviour based on something I've missed, please let me know!
```
| 06-13-2022 14:29:17 | 06-13-2022 14:29:17 | Hey @NtaylorOX,
Sorry I'm not following a 100% here what the problem is here. I can run all of the above samples without a problem and I don't see exactly what the bug is here. Could you maybe copy-paste a single code snippet here that shows the error and then explain what the output should be? :-)
From what I understand, there is a problem when adding the `<pad_token>` to GPT2's tokenizer? Why is OPT used in the example here?<|||||>Hi! Thanks for the reply @patrickvonplaten
So there was actually a bug in my issue! The output was meant to be fully of <pad> tokens or whatever additioanl special tokens had been added - but it seems markdown was showing/compiling these. I've updated comment now.
So what happens is that when you update the GPT2 tokenizer via add_special_tokens - the generate function ends up just predicting those new additional tokens repeatedly. You can see the output in full in the colab notebook.
I believe my issue has the appropriate code snippets with output - although I may have made it a bit messy.
The point here is that the using the prompt:
"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: "
The untouched gpt model generates:
"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Spain. Not just the capital of a country; the capital of Europe. "
But when you add any special token, such as <pad> token using add_special_tokens and resize the embeddings of the model. You get
"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: pad pad pad pad pad or whatever special token you added."
I am 99% sure that adding special tokens should not be intefering with the ability of the model to generate in this way.
The reason for using OPT is because it essentially uses same tokenizer class and the problem doesn't occur for it. But it has occured for all gpt2 variants I've tried.
Has this cleared it up at all?
Again, I think its clearer in the colab notebook
<|||||>Hey @NtaylorOX,
So I guess you're referring to this code snippet here:
```python
# Declare special tokens for padding and separating the context from the slogan:
SPECIAL_TOKENS_DICT = {
'pad_token': '<pad>',
}
# # Add these special tokens to the vocabulary and resize model's embeddings:
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))
# Show the full list of special tokens:
print(tokenizer.special_tokens_map)
# run same prompt
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
which then generates the `<pad>` token as an output (but isn't this expected since you set ` skip_special_tokens=False`?
Sorry I'm still not 100% certain I understand what you mean. Could you please post a single code snippet that I can just copy-paste and run and that shows me an output **and** a message what the output should have been instead?
This would be super nice - sorry I'm a bit lost here<|||||>Hi @patrickvonplaten,
Thanks for persisting with my confusing post :D.
Yes The following snippest is the main concern:
```
# Declare special tokens for padding and separating the context from the slogan:
SPECIAL_TOKENS_DICT = {
'pad_token': '<pad>',
}
# # Add these special tokens to the vocabulary and resize model's embeddings:
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))
# Show the full list of special tokens:
print(tokenizer.special_tokens_map)
# run same prompt
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
The expected output for gpt2-medium would be the same as the output **before** adding the special tokens, which would be:
"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Basel. Capital of Liechtenstein is: Liechtenstein. Capital of Mexico is: Mexico City. Capital of South Africa is: Cape Town...."
So nice and sensible output. To my understanding, and the way it works with non-gpt2 models, is that adding special tokens should not lead to a different output, but it does.
Again, after adding special tokens as desribed above, the output becomes:
"'Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is pad pad pad pad ...".
To me this seems wrong? The output should be the same as it was originally, but its unable to produce anything other than pad tokens when generating now. And if you inspect the input ids etc, there is no pad token encoded by the tokenizer, nor is there any padding as its a single sample.
Has this made anything clearer?
<|||||>Haha we'll get there @NtaylorOX :-)
Right now when running [your last code snippet](https://github.com/huggingface/transformers/issues/17690#issuecomment-1162746071), I get:
```
NameError Traceback (most recent call last)
<ipython-input-1-d3e787aeade6> in <module>
5
6 # # Add these special tokens to the vocabulary and resize model's embeddings:
----> 7 tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
8 model.resize_token_embeddings(len(tokenizer))
9
NameError: name 'tokenizer' is not defined
```
Could you fix the code snippet so that I can run it in a Python shell to see the output expected by you?<|||||>Now I'm confused. In your comment did you mean to put the code snippest after "I get:"?
I read this as you would be posting the output from running the code?
To get what I believe to produce the "incorrect output", run this:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed
import os
import torch
import csv
import torch
from torch.utils.data import Dataset
cuda_device = torch.device('cuda:0')
# now set the default gpu to this one
torch.cuda.set_device(cuda_device)
# set model name and load in using transformers automodel/autotokenizer classes
# use smallest gpt2 type model but can use others
MODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
# Declare special tokens for padding and separating the context from the slogan:
SPECIAL_TOKENS_DICT = {
'pad_token': '<pad>',
}
# # Add these special tokens to the vocabulary and resize model's embeddings:
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))
# Show the full list of special tokens:
print(tokenizer.special_tokens_map)
# run same prompt
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
To get what the output should be and normally is without special tokens:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed
import os
import torch
import csv
import torch
from torch.utils.data import Dataset
cuda_device = torch.device('cuda:0')
# now set the default gpu to this one
torch.cuda.set_device(cuda_device)
# set model name and load in using transformers automodel/autotokenizer classes
# use smallest gpt2 type model but can use others
MODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
# run same prompt
prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)
tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
Does this help?<|||||>Hey @NtaylorOX,
Sorry just corrected my comment above. Ok I think I see what the problem is. You've added a token and now this token is predominantly generated. IMO this is not because it's called a `<pad>` token, it's simply due to the pretrained weights of `distilgpt2`.
Also see this issue: https://github.com/huggingface/transformers/issues/8472 <|||||>Hi @patrickvonplaten ,
Yes - I did not mean it was only affecting <pad> tokens. But it seems I did not find that previous issue which seems to address the problem.
Also, as I mentioned, it does not only affect distilgpt - it affects all GPT2 models I tried. But does not happen to OPT model which I was i found it odd?<|||||>Also - on that other issue: https://github.com/huggingface/transformers/issues/8472
When using your nicely supplied possible fix:
```
import torch
import torch.nn.functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens(
{'additional_special_tokens': ['<USER>', '<SYSTEM>']}
)
model = GPT2LMHeadModel.from_pretrained('distilgpt2')
model.resize_token_embeddings(len(tokenizer))
inp_tok_ids = tokenizer.encode('I want a pepperoni pizza with mushroom')
inp_tensor = torch.LongTensor(inp_tok_ids).unsqueeze(0)
model.eval()
model.lm_head.weight[-2, :] = (torch.zeros((768,)) - 10000.0)
model.lm_head.weight[-1, :] = (torch.zeros((768,)) - 10000.0)
with torch.no_grad():
for i in range(10):
outputs = model(inp_tensor)
logits = outputs[0][:, -1, :]
probs = F.softmax(logits, dim=-1)
next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
inp_tensor = torch.cat([inp_tensor, next_token.unsqueeze(-1)], dim=-1)
print(tokenizer.decode(inp_tensor[0]))
```
I am getting an error:
```
RuntimeError Traceback (most recent call last)
model.eval()
----> 3 model.lm_head.weight[-2, :] = (torch.zeros((768,)) - 10000.0)
4 model.lm_head.weight[-1, :] = (torch.zeros((768,)) - 10000.0)
6 with torch.no_grad():
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
```
Sorry if I shouldn't be crossing wires so much! Just wanted to highlight that this example doesn't seem to work, at least with my transformers version etc. |
transformers | 17,689 | closed | Include a comment to reflect Amy's contributions | This PR adds a note to `src/transformers/modeling_tf_pytorch_utils.py` to reflect @amyeroberts's contributions suggested in https://github.com/huggingface/transformers/pull/17571. It is an oversight on my end that I forgot to mention in the first place.
I hope it's viewed as a mistake and not as a plagiarism attempt. | 06-13-2022 12:49:13 | 06-13-2022 12:49:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger forgot tagging you.<|||||>Hey @sayakpaul, that's really admirable, thank you for that. I personally don't think it needs to be in a comment, as the code isn't where attribution lies, and would clutter the code with non-technical details. The attribution lives in git, and that's where we should do something if you want to add a mention of Amy for those lines of code.
How about doing something as simple as switching a if/else statement (or any other kind of no-op change) and having Amy as author/co-author?<|||||>@LysandreJik see if it's okay now.<|||||>Let's see if this works, thanks a lot! |
transformers | 17,688 | closed | clm example training script uses larger train/eval data than it should | ### System Info
```shell
- `transformers` version: 4.12.3
- Platform: Linux-4.15.0-180-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: (True)
- Using distributed or parallel set-up in script?: (True)
```
### Who can help?
@sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Just run `examples/pytorch/language-modeling/run_clm.py` script with `max_train_samples` parameter with value 1, it'll still group first 1024 tokens regardless.
This is also a bug across libraries (i.e., tensorflow, FLAX) and also for `max_eval_samples` parameter as well.
### Expected behavior
```shell
The script should only use the max number of samples specified from the dataset. This happens because the grouping takes place before selecting the number of samples.
```
Grouping: (https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/examples/pytorch/language-modeling/run_clm.py#L447)
Dataset selection:(https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/examples/pytorch/language-modeling/run_clm.py#L460)
| 06-13-2022 11:17:55 | 06-13-2022 11:17:55 | If you approve that this is a legitimate bug, then please let me know I will open the PR.<|||||>This is completely intended and not a bug. Sample/example is meant as one processed training/evaluation example, which is what is done here.<|||||>Yeah maybe for my purposes I needed it to filter the number of samples before grouping and I thought it would be same for others.
Thanks for the quick response. Closing the issue :) |
transformers | 17,687 | closed | how can I use emformer checkpoint? | ### Feature request
ImportError Traceback (most recent call last)
<ipython-input-1-7a40e4bd817f> in <module>
----> 1 from transformers import EmformerForRNNT
2
3 model = EmformerForRNNT.from_pretrained("anton-l/emformer-base-librispeech")
ImportError: cannot import name 'EmformerForRNNT' from 'transformers' ( .local/lib/python3.8/site-packages/transformers/__init__.py)
### Motivation
.
### Your contribution
. | 06-13-2022 09:48:01 | 06-13-2022 09:48:01 | Hi,
Emformer hasn't been added to the library yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,686 | closed | Save huggingface checkpoint as artifact in mlflow callback | # What does this PR do?
1. Store model checkpoints including tokenizers that are needed to reload the model from mlflow as artifacts
2. Allow model to be register-able. (they are not if log_artifacts is used to log the model)
Fixes # (issue)
https://github.com/huggingface/transformers/issues/15495
https://github.com/huggingface/transformers/issues/10881
https://github.com/huggingface/transformers/issues/7698
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 06-13-2022 04:44:23 | 06-13-2022 04:44:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again!<|||||>Hi there! @swethmandava Thanks for adding this functionality. Quick question: because the artifact logging was removed, wouldn't the intermediate checkpoints not be tracked? Only the latest checkpoint would be logged as a model, right?<|||||>> Hi there! @swethmandava Thanks for adding this functionality. Quick question: because the artifact logging was removed, wouldn't the intermediate checkpoints not be tracked? Only the latest checkpoint would be logged as a model, right?
It should now save all the checkpoints. every time on_save is called |
transformers | 17,685 | closed | Disregard | null | 06-12-2022 22:09:37 | 06-12-2022 22:09:37 | |
transformers | 17,684 | closed | [Pipeline] avoid importing tensorflow if not used | # What does this PR do?
Avoids loading unnecessary modules by `pipelines.base.infer_framework_load_model()` which could create some unexpected behaviour like tensorflow allocating all GPU memory.
@sgugger @LysandreJik
Before:
```python
from transformers import pipeline
pipeline("text-classification")
# This would try importing `TFDistilBertForSequenceClassification`if both tensorflow and pytorch
# are available, and tensorflow would allocate all GPU memory, even if we expect to use
# the pytorch model
```
After:
```python
from transformers import pipeline
pipeline("text-classification")
# Only `DistilBertForSequenceClassification` is imported, and tensorflow is not called
```
| 06-12-2022 12:59:27 | 06-12-2022 12:59:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Was this issue resolved in another PR already? @sgugger <|||||>The fact that TensorFlow takes all GPU memory has been fixed in #18044 |
transformers | 17,683 | closed | Update eli5_app.py | # What does this PR do?
Fixes # (issue)
Updated the string format style for a cleaner understanding of the code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-12-2022 06:47:27 | 06-12-2022 06:47:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17683). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,682 | closed | Truncation + max_length not working for GPT2TokenizerFast | ### System Info
```shell
- `transformers` version: 4.13.0
- Platform: Linux-4.15.0-29-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.10.2+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following Python code:
```Python
from typing import List
from transformers import GPT2TokenizerFast
DESIRED_TOKEN_LENGTH: int = 1949
TEXT = 'Title: Hilltop Hoods\n\nBackground: Hilltop Hoods are an Australian hip hop group that formed in 1994 in Blackwood, Adelaide, South Australia. The group was founded by Suffa (Matthew David Lambert) and MC Pressure (Daniel Howe Smith), who were joined by DJ Debris (Barry John M. Francis) after fellow founder, DJ Next (Ben John Hare), left in 1999. The group released its first extended play, Back\n\nSection: 2007-2009: The Hard Road Restrung and State of the Art\nPassage: Two of Hilltop Hoods\' founders first met in 1987 when MC Suffa (aka Matthew David Lambert) and MC Pressure (Daniel Howe Smith) attended Blackwood High School in Eden Hills - a suburb of Adelaide. In 1991 they joined up with DJ Next (Ben John Hare) through a mutual friend and formed an Australian hip hop group. Their name was supplied by fellow local MC Flak (from Cross Bred Mongrels) - the suburb of Blackwood is known by locals as the Hilltop. The band\'s influences include American hip hop artists: Notorious B.I.G., KRS-One, Gang Starr, Wu-Tang Clan and Public Enemy. At live shows Next was the group\'s DJ, for recording he contributed audio engineering and all the scratching/turntablism on their early works. He regularly competed in the local DMC World DJ Championships (DMC) tournaments, winning the South Australian DMC championships multiple times. Hilltop Hoods recorded a demo, Highlanders, which was released on cassette tape only. As well as Pressure and Suffa on vocals, the group included MC Summit aka DJ Sum-1, but he did not appear on later Hilltop Hoods work. The group\'s first official release, in 1997, was a vinyl-only, seven-track extended play, Back Once Again. Production was handled by DJ Debris (Barry John M Francis), turntablism and audio engineering by Next, vocals by Pressure and Suffa. The third track, "Shades of Grey", features Debris with a verse, and was co-written by Francis, Hare, Lambert and Smith. Fifth track, "Mankind Must Suffa" also features a guest verse from Quromystix (aka Quro, Andrew Michael Bradley) - a member of Finger Lickin\' Good and later the Fuglemen. "Mankind Must Suffa" is credited to Lambert, Smith, Francis and Bradley. Back Once Again is out of print and unavailable for retail purchase. The group\'s debut studio album, A Matter of Time, was released in 1999 on CD only. As with Back Once Again, it is now unavailable for retail purchase. All scratching/turntablism is performed by Next, a track, "Let Me Show You", has no vocals - solely showcasing his turntable skills. American MC Bukue One (Tion Torrence) appears for a guest verse on "Deaf Can Hear". The track is credited to Lambert, Smith, Francis, Hare and Torrence. The album was released independently but with financial assistance from Arts SA - the band were inspired, in 2005, to set up their own Hilltop Hoods Initiative, to help local artists. After the album appeared, Next left the group and moved to Melbourne. In 2004 he moved to London. In 1999 Debris, who was also a member of the Cross Bred Mongrels, replaced Next and became the Hilltop Hoods\' full-time DJ. Hilltop Hoods founded the Certified Wise Crew - a hip hop collaborative - with local groups Terra Firma, Cross Bred Mongrels and After Hours. Certified Wise Crew has since expanded to include MCs Trauma, Blockade, Kolaps, Flea, with Vents and Funkoars joining in later years. Hilltop Hoods received two nominations for the Hip Hop Act of the Year Award at the Australian Dance Music Awards and again at the 3D World Music Awards in 2001 and 2002. 
In 2001 the group\'s second album, Left Foot, Right Foot, was released with Lambert, Francis and M. Veraquth producing. On 22 September 2003, Hilltop Hoods released their third album, The Calling, which became a commercial breakthrough. In an interview after the release of their fourth album, Suffa revealed that The Calling was recorded on his mother\'s computer and the simplicity of their \'studio\' is the reason why some of the music on the album is in monaural (\'mono\') sound. The Calling entered the ARIA Albums Chart in March 2004 and reached No. 53 before exiting the top 100 in September of the same year. By December 2006 it was certified platinum for shipment of 70,000 units, becoming the first Australian hip hop album to achieve platinum status. In March 2012, it re-entered the chart and peaked at No. 50 - eight-and-a-half years after its first release. It featured two singles, "The Nosebleed Section" and "Dumb Enough", which were listed in the Triple J Hottest 100, 2003. "The Nosebleed Section" was ranked No. 17 in the Triple J Hottest 100 of All Time in 2009. Hilltop Hoods\' chart and commercial success was a turning point in the Australian Hip Hop scene because it demonstrated widespread support for the genre that reached beyond an underground fan base. On 1 April 2006, the group followed with their fourth album, The Hard Road, which peaked at number one. It was the first Australian hip hop album to do so. It was certified gold within a week of being released. Its lead single, "Clown Prince", reached the top 30 on the related ARIA Singles Chart. It featured guest verses from New York rapper, Omni, and British MCs, Mystro and Braintax. The Hilltop Hoods received the inaugural Australian Independent Record (AIR) Award for Independent Artist of the Year and Best Performing Independent Album for The Hard Road in 2006. The track, "The Blue Blooded", is a collaboration with Australian MCs: Funkoars, Hau from Koolism, Mortar, Vents, Drapht, Muph & Plutonic, Pegz and Robby Balboa. On 27 April of the same year, Hilltop Hoods performed at the Bass in the Grass music festival in Darwin alongside fellow hip hop group, The Herd. That same day they issued a second single, the title track from the album. Its video includes fellow members from the Certified Wise Crew - Cross Bred Mongrels, Terrafirma and Funkoars. Following the success of The Hard Road Tour in early 2006, the Hilltop Hoods began their second national tour for the year, The Stopping All Stations Tour, which visited more regional areas of Australia as well as the capital cities. They were supported by Koolism and Mystro. Late that year, Hilltop Hoods released their third single from the album, "What a Great Night". The video shows the group at a club with camera shots panning up and down to reveal a new location. It used special effects and is one of the most expensive video clips for an Australian hip hop group, mirroring the group\'s rise in success and popularity. Also late in the year the band won the J Award for best album of the year from Triple J. They performed the Homebake Festival and Falls Festival before the end of the year. The Hard Road received the AIR Award for Best Independent Hip Hop/Urban Release in 2007. On 12 May 2007, Hilltop Hoods released their next album The Hard Road: Restrung which is a remix of their previous studio album, The Hard Road, featuring the Adelaide Symphony Orchestra and Okwerdz. It peaked at No. 8 on the ARIA Albums Chart. 
Like its predecessor The Hard Road, it took out "Best Urban Release" at the ARIA Awards of 2007, with the group going back-to-back in the category. The lead single from the album "Recapturing the Vibe Restrung", its video clip was on high rotation on rage & jtv. That year the group performed at the Southbound Festival (WA), The Great Escape at Newington Armory over Easter, and embarked on a UK tour with a Sydney-based string quartet. They finished the year by headlining the Pyramid Rock Festival on Victoria\'s Phillip Island over New Year\'s Eve 2007. In 2008 they performed at the Big Day Out festivals, at Glastonbury Festival and Islington Academy in London. In December their DVD, The City of Light, was released and was nominated as \'Best Music DVD\' at the 2008 ARIA Awards. Hilltop Hoods left their longtime home of Obese Records to start their own label, Golden Era Records, to release their future material. In November 2008 Pressure announced on Triple J\'s breakfast program that the next studio album, State of the Art, would be recorded with session musicians: "We realised with this one after doing Restrung and having an orchestra that we were a bit less limited. So we\'re going to have some session musos come in on this one and stuff like that". The album was released on 12 June, with the lead single, "Chase That Feeling", issued as a digital download on 8 May, and featured a return guest appearance by a quartet from the Adelaide Symphony Orchestra. The album debuted at number one on the albums chart while "Chase That Feeling" peaked at No. 8 on the related singles chart. By 2010 State of the Art was certified 2x platinum for shipment of 140,000 units. In early 2009 the Hilltop Hoods performed at the Groovin the Moo festival in Townsville, Maitland and Bendigo. They also performed at Triple J\'s One Night Stand in Sale, Victoria on 30 May, and at Fat as Butter festival in Newcastle on 25 October where they played several of the tracks from the album. To promote its release the band started a national tour starting on 18 July and performed at most major cities including state capitals. The second national tour that year followed on 11 November with support provided by Vents.\n\nQuestion: What is significant about this time?\nAnswer: On 1 April 2006, the group followed with their fourth album, The Hard Road,\n\nQuestion: How did this album do?\nAnswer: which peaked at number one. It was the first Australian hip hop album to do so.\n\nQuestion: Are there any other interesting aspects about this article?\nAnswer:'
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokens: List[int] = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH)
assert len(tokens) == DESIRED_TOKEN_LENGTH # 1949, no problem here
result: str = tokenizer.decode(tokens)
print(len(tokenizer.tokenize(result))) # 1950 when it should be 1949
print(len(tokenizer.encode(result))) # 1950 when it should be 1949
assert len(tokenizer.tokenize(result)) == DESIRED_TOKEN_LENGTH # Fails here!
```
It should fail at the last assertion:
```
Traceback (most recent call last):
File "/Users/tonyhlee/research/mercury/benchmarking/gpt2_tokenizer_bug.py", line 16, in <module>
assert len(tokenizer.tokenize(result)) == DESIRED_TOKEN_LENGTH # Fails here!
AssertionError
```
### Expected behavior
Since I passed in `truncation=True` and `max_length=1949` to `encode`, I would expect the resulting text to be 1949 tokens long after decoding. It's 1950 tokens long instead.
| 06-12-2022 01:32:17 | 06-12-2022 01:32:17 | Also gently pinging @SaulLu @mishig25 @thomasw21 here<|||||>Hi @teetone,
Thank you for your detailed report. While investigating, I noticed that the portion of text that produces this behaviour is `"the simplicity of their 'studio' is the reason why"`
```python
from transformers import AutoTokenizer
DESIRED_TOKEN_LENGTH = 1949
TEXT="their 'studio'"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
encoding_1 = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH)
print(f"Encoding 1st time is of length {len(encoding_1)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_1)}")
decoded_encoding_1 = tokenizer.decode(encoding_1)
#
encoding_2 = tokenizer.encode(decoded_encoding_1)
print(f"Encoding 2nd time is of length {len(encoding_2)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_2)}")
print(f"Decoded sequence of ids is \"{decoded_encoding_1}\"")
# Encoding 1st time is of length 5 and corresponds to ['their', "Ġ'", 'stud', 'io', "'"]
# Encoding 2nd time is of length 6 and corresponds to ['their', "'s", 't', 'ud', 'io', "'"]
# Decoded sequence of ids is "their'studio'"
```
The reason is that by default the `clean_up_tokenization_spaces` argument is set to True, which has the effect of removing the space between `their` and `'studio'`. By setting this argument to False you get:
```python
from transformers import AutoTokenizer
DESIRED_TOKEN_LENGTH = 1949
TEXT="their 'studio'"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
encoding_1 = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH)
print(f"Encoding 1st time is of length {len(encoding_1)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_1)}")
decoded_encoding_1 = tokenizer.decode(encoding_1, clean_up_tokenization_spaces=False)
encoding_2 = tokenizer.encode(decoded_encoding_1)
print(f"Encoding 2nd time is of length {len(encoding_2)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_2)}")
print(f"Decoded sequence of ids \"{decoded_encoding_1}\"")
# Encoding 1st time is of length 5 and corresponds to ['their', "Ġ'", 'stud', 'io', "'"]
# Encoding 2nd time is of length 5 and corresponds to ['their', "Ġ'", 'stud', 'io', "'"]
# Decoded sequence of ids "their 'studio'"
```
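As a side note, if the goal is to truncate raw text to a token budget while keeping the original characters intact, a minimal sketch could look like the following (assuming GPT-2 with no added special tokens; this is an illustration, not a library utility):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def truncate_to_token_budget(text: str, budget: int) -> str:
    # Illustrative helper: encode with truncation, then decode without clean-up
    # so spaces are not merged and re-tokenization stays within the budget.
    ids = tokenizer.encode(text, truncation=True, max_length=budget)
    return tokenizer.decode(ids, clean_up_tokenization_spaces=False)

truncated = truncate_to_token_budget("the simplicity of their 'studio' is the reason why", 5)
print(truncated, len(tokenizer.encode(truncated)))  # stays at 5 tokens for this example
```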
I hope this answers your problem!
<|||||>>
@SaulLu, thanks for your help! I'm using this `encode` + `decode` logic to truncate text to fit in a given context window. Is it safe to say that if I want to truncate while preserving the original text, I should pass in `clean_up_tokenization_spaces=False` when calling `decode`?<|||||>Generally it is not promised that 1-1 matching is possible. But in the particular case of GPT-2 (without added tokens or special tokens present in the sentence to be tokenized) I think it should work with `clean_up_tokenization_spaces=False` in the `decode` method!<|||||>> Generally it is not promised that 1-1 matching is possible. But in the particular case of GPT-2 (without added tokens or special tokens present in the sentence to be tokenized) I think it should work with `clean_up_tokenization_spaces=False` in the `decode` method!
That makes sense. Thank you again! I will close this issue as resolved. |
transformers | 17,681 | closed | trainer fails when fsdp = full_shard auto_wrap | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.13.0.dev20220610 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
### Who can help?
@sgugger @patric
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
torchrun --nproc_per_node=4 \
run_summarization.py \
--model_name_or_path google/pegasus-large \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=1 \
--per_device_eval_batch_size=1 \
--overwrite_output_dir \
--predict_with_generate \
--fsdp "full_shard auto_wrap" \
--fsdp_min_num_params 20000
```
Running the above script will generate the following error:
`File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1242, in _wrap_model
from torch.distributed.fsdp.wrap import default_auto_wrap_policy
ImportError: cannot import name 'default_auto_wrap_policy' from 'torch.distributed.fsdp.wrap'`
A little digging into torch/distributed/fsdp/wrap.py shows that default_auto_wrap_policy is no longer in the file. I tried changing it to size_based_auto_wrap_policy, as it seems to have the same function signature. Unfortunately, another error pops up:
`File "run_summarization.py", line 734, in <module>
main()
File "run_summarization.py", line 653, in main
ignore_keys_for_eval=ignore_keys_for_eval,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1610, in _inner_training_loop
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1372, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2301, in training_step
ignore_keys_for_eval=ignore_keys_for_eval,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1610, in _inner_training_loop
loss = self.compute_loss(model, inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2333, in compute_loss
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2301, in training_step
outputs = model(**inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
loss = self.compute_loss(model, inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2333, in compute_loss
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward
outputs = model(**inputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
outputs = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
return self.module(*inputs, **kwinputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
outputs = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
return self.module(*inputs, **kwinputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return_dict=return_dict,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
return self.module(*inputs, **kwinputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return_dict=return_dict,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1245, in forward
return_dict=return_dict,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return_dict=return_dict,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1245, in forward
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward
return_dict=return_dict,
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
outputs = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
return self.module(*inputs, **kwinputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
outputs = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 761, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return self.module(*inputs, **kwinputs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/functional.py", line 2156, in embedding
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 761, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.`
This time I have no idea how the problem should be solved.
Any help is greatly appreciated! Thanks.
### Expected behavior
```shell
The script should run without errors when fsdp is enabled.
```
| 06-11-2022 20:10:03 | 06-11-2022 20:10:03 | cc @pacman100 <|||||>@pacman100 Thanks for looking into this! Would love to provide any additional information!<|||||>Hello @chijames, thanks for letting us know that `default_auto_wrap_policy` is no more, will be fixing it shortly. Regarding the subsequent error, it is unrelated to the integration and I have opened issue in PyTorch repo for the same and mentioned this issue in that issue as seen above.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hello @chijames, thanks for letting us know that `default_auto_wrap_policy` is no more, will be fixing it shortly. Regarding the subsequent error, it is unrelated to the integration; I have opened an issue in the PyTorch repo for it and mentioned this issue there, as seen above.
One way to get the **default_auto_wrap_policy** is to get Nvidia's docker nvcr.io/nvidia/pytorch:**22.05-py3**
The definition of **default_auto_wrap_policy** is in /opt/pytorch/pytorch/torch/distributed/fsdp/wrap.py:31
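For anyone hitting the import error on recent nightlies, here is a rough sketch of how the replacement policy can be used directly; treat the exact names and signature as assumptions to verify against your torch version:
```python
# Sketch only: in recent PyTorch builds the size-based policy replaces
# default_auto_wrap_policy. Assumes torch.distributed is already initialized.
import functools

import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

def wrap_with_fsdp(model: nn.Module, min_num_params: int = 20_000) -> FSDP:
    # Mirrors --fsdp_min_num_params: auto-wrap every submodule above the threshold.
    policy = functools.partial(size_based_auto_wrap_policy, min_num_params=min_num_params)
    return FSDP(model, auto_wrap_policy=policy)
```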
|
transformers | 17,680 | closed | Save huggingface checkpoint as artifact in mlflow callback | # What does this PR do?
1. Store model checkpoints (including the tokenizer files needed to reload the model) as MLflow artifacts.
2. Allow the model to be registerable (it is not if `log_artifacts` is used to log the model). A rough sketch of the idea is below.
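For illustration, the first point amounts to something like this (a sketch of the idea, not the actual `MLflowCallback` code):
```python
# Sketch only: log the checkpoint folder written by save_pretrained (model weights,
# config and tokenizer files) so the MLflow run has everything needed to reload it.
import mlflow

def log_checkpoint(checkpoint_dir: str) -> None:
    mlflow.log_artifacts(checkpoint_dir, artifact_path="model-checkpoint")
```
Making the logged model registerable additionally requires going through an MLflow model flavor (for example `mlflow.pyfunc.log_model`) rather than plain `log_artifacts`.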
Fixes # (issue)
https://github.com/huggingface/transformers/issues/15495
https://github.com/huggingface/transformers/issues/10881
https://github.com/huggingface/transformers/issues/7698
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 06-11-2022 19:58:06 | 06-11-2022 19:58:06 | Opened #17686 from branch. closing this<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,679 | closed | Fix typo in adding_a_new_model README | # What does this PR do?
Fixes #17678
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger | 06-11-2022 18:13:14 | 06-11-2022 18:13:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,678 | closed | Typo in adding_a_new_model README | There's a typo in the adding_a_new_model README [file](https://github.com/huggingface/transformers/blob/main/templates/adding_a_new_model/README.md),
It would be `make fix-copies`, not `maxke fix-copies` here

| 06-11-2022 18:10:47 | 06-11-2022 18:10:47 | Sent a PR to fix this [here](https://github.com/huggingface/transformers/pull/17679) |
transformers | 17,677 | closed | Add missing tokenizer tests - Longformer | # What does this PR do?
This PR adds tests for the Longformer tokenizer by copying tests from the Roberta tokenizer's test suite, because those tokenizers are absolutely identical.
Fixes #16627
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@SaulLu @LysandreJik
| 06-11-2022 17:40:56 | 06-11-2022 17:40:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I read the discussion in the merged tokenizer test PRs and the post [~~Don't~~ Repeat Yourself*](https://huggingface.co/blog/transformers-design-philosophy) on the HF blog, and I manually added "the copying mechanism". But I don't understand how it works, so I tried not to change the test code copied from the Roberta tokenizer tests.
Could you describe how the "copying mechanism" works in more detail?<|||||>Thanks a lot for working on this @tgadeliya!!
As far as I know, there are no identified "practices" for this case (cc @LysandreJik in case you have another opinion). Nevertheless, if changes are relevant, they are obviously welcome. For example, it is possible to indicate the changes made as here:
https://github.com/huggingface/transformers/blob/d95a32cc60e5d92b4bf08cd805c6b0db7b4100cc/src/transformers/models/deberta/modeling_deberta.py#L308-L309
If the differences are too long to list perhaps the message can just explain why it diverged from the originally copied and pasted code.
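For illustration, such a marker typically looks roughly like this (the module path and class names here are illustrative):
```python
import unittest

from ...test_tokenization_common import TokenizerTesterMixin

# Copied from tests.models.roberta.test_tokenization_roberta.RobertaTokenizationTest with Roberta->Longformer
class LongformerTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    # The comment records where the code was copied from, so the repo's
    # copy-checking utilities can keep the copies in sync or list intentional changes.
    ...
```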
Does this help you?<|||||>@SaulLu, Sorry for the late reply. Summer is ending :)
Thanks for your comment. Now it is clear to me. Actually, I came to the conclusion that code cleaning is not so necessary considering all the pros and cons, so this PR can be reviewed and merged.<|||||>@SaulLu I refreshed this PR, so now it is ready to merge<|||||>Thanks @tgadeliya :hugs:
transformers | 17,676 | closed | Problems when producing distilBERT | ### System Info
```shell
Hello! I am training a DistilBERT from scratch following the scripts under examples/distillation.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I got an error when running the script below:
```
python scripts/token_counts.py \
--data_file data/binarized_text.bert-base-uncased.pickle \
--token_counts_dump data/token_counts.bert-base-uncased.pickle \
--vocab_size 30522
```
### Expected behavior
```shell
The error I got is:
Traceback (most recent call last):
File "scripts/token_counts.py", line 44, in <module>
data = pickle.load(fp)
EOFError: Ran out of input
```
Could you help me resolve this?
| 06-11-2022 15:10:41 | 06-11-2022 15:10:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,675 | closed | AutoTokenizer fails to do_lower_case | If we use the AutoTokenizer library, this still does not work.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base", do_lower_case=True)
tokenizer.do_lower_case = True
print(tokenizer.tokenize("Huggingface"))
```
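For reference, the `roberta-base` tokenizer's byte-level BPE does not implement lowercasing, so the `do_lower_case` flag has no effect here. A possible workaround (illustrative only) is to lowercase the text before tokenizing:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text = "Huggingface"
# Possible workaround while do_lower_case is not supported by this tokenizer:
print(tokenizer.tokenize(text.lower()))
```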
_Originally posted by @pratyushmaini in https://github.com/huggingface/transformers/issues/9122#issuecomment-1152939838_ | 06-11-2022 14:38:52 | 06-11-2022 14:38:52 | >>> print(tokenizer.tokenize("Huggingface"))
['Hug', 'ging', 'face']<|||||>Hey @pratyushmaini 👋 Following our bug submission template yields better outcomes -- we have many issues and requests coming in, and we need the help of the community to maximize our usefulness :) One of the fields of the template is `Who can help?`, where you can find the right person to tag on your issue.
https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,674 | closed | fine-tunes RoBERTa on WikiText-2 | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The example in `transformers/examples/pytorch/language-modeling/`: https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling
```
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
```
### Expected behavior
When I run
```shell
wikitext_dataset = load_dataset('wikitext', 'wikitext-2-v1')
print(wikitext_dataset)
```
The log is as follow:
```
DatasetDict({
test: Dataset({
features: ['text'],
num_rows: 4358
})
train: Dataset({
features: ['text'],
num_rows: 36718
})
validation: Dataset({
features: ['text'],
num_rows: 3760
})
})
```
But when I run the previous fine-tuning code, the log is as follows:
```
[INFO|trainer.py:1469] 2022-06-11 20:59:08,340 >> ***** Running training *****
[INFO|trainer.py:1470] 2022-06-11 20:59:08,340 >> Num examples = 4798
[INFO|trainer.py:1471] 2022-06-11 20:59:08,340 >> Num Epochs = 1
[INFO|trainer.py:1472] 2022-06-11 20:59:08,340 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1473] 2022-06-11 20:59:08,340 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:1474] 2022-06-11 20:59:08,340 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1475] 2022-06-11 20:59:08,340 >> Total optimization steps = 1800
```
1. Why is the number of examples in training (**4798**) different from the number of examples (**36718**) in the dataset?
2. 4798 doesn't seem to match the WikiText data

3. The number of epochs seems to be 2 every time I fine-tune it. How can I modify the number of epochs when fine-tuning?
| 06-11-2022 13:59:44 | 06-11-2022 13:59:44 | Please use the [forums](https://discuss.huggingface.co/) to ask questions like this and help debug your code as we keep issues for feature requests and bugs only.
The texts are all concatenated and then split in blocks of the block size you pass to the script, so your dataset is not composed of documents of wikitext-2, but parts of the concatenation of all documents of wikitext-2.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,673 | closed | Add AlbertTokenizer description information | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi, when I used ALBERT for pretraining, I found that the loss curve seemed abnormal. I did a number of checks and found that the data was not segmented as expected. I searched GitHub and found that this bug has been reported several times (#15003, #8999) and is waiting to be fixed.
I think we should definitely tell users about this problem before it is fixed. This will save them debugging time, right?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-11-2022 12:31:10 | 06-11-2022 12:31:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17673). All of your documentation changes will be reflected on that endpoint.<|||||>cc @SaulLu
|
transformers | 17,672 | closed | [Flax] Token classifier training example improve | # What does this PR do?
add dtype for faster fp16/bf16 training
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patil-suraj @sgugger | 06-11-2022 10:05:58 | 06-11-2022 10:05:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17672). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,671 | closed | [WIP] Adding LID (Language Identification) Head for M-CTC-T Model | # What does this PR do?
Adding LID head to the M-CTC-T Model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 06-11-2022 06:32:41 | 06-11-2022 06:32:41 | Developing this as a `SequenceClassification` head. This is correct right? @patrickvonplaten
TODO:
- [ ] Export LID head weights from flashlight model
- [ ] make new `cwkeam/m-ctc-t-large-lid` checkpoint with above exported weights
- [ ] write tests<|||||>This looks good to me @lorenlugosch and @anton-l what do you think? <|||||>Looks good---I'll give you the mapping from logit index to language shortly.<|||||>```
0: ab
1: ar
2: as
3: br
4: ca
5: cnh
6: cs
7: cv
8: cy
9: de
10: dv
11: el
12: en
13: eo
14: es
15: et
16: eu
17: fa
18: fi
19: fr
20: fy-NL
21: ga-IE
22: hi
23: hsb
24: hu
25: ia
26: id
27: it
28: ja
29: ka
30: kab
31: ky
32: lg
33: lt
34: lv
35: mn
36: mt
37: nl
38: or
39: pa-IN
40: pl
41: pt
42: rm-sursilv
43: rm-vallader
44: ro
45: ru
46: rw
47: sah
48: sl
49: sv-SE
50: ta
51: th
52: tr
53: tt
54: uk
55: vi
56: vot
57: zh-CN
58: zh-HK
59: zh-TW
```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17671). All of your documentation changes will be reflected on that endpoint.<|||||>@lorenlugosch if that was the intention for the original implementation then yes, for sure! We can also turn the model into `MCTCForAudioFrameClassification` (similar to `Wav2Vec2ForAudioFrameClassification`) to make it clearer that the outputs are frame-level.<|||||>@anton-l That sounds like the perfect solution! Will get going with it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@anton-l do you want to help finalize the PR here or should I do it?<|||||>@patrickvonplaten @anton-l
Firstly, I apologize for being irresponsible with going through with something I started. Life really got in the way here but this is so close to being done.
I've put up the weights as well from my last run at this: https://huggingface.co/cwkeam/m-ctc-t-large-lid
<|||||>@cwkeam no worries, and thank you for the weights!
Do you need a hand with any part that's left (tests, modelcard, etc.)? I'd be happy to help or take over the PR if you don't have time! :hugs: <|||||>Actually, I realized I actually had finished the full working implementation. I think because of little nits I didn't fully notify you guys but I think it's worth a review:
```python
from transformers import MCTCTProcessor, MCTCTForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = MCTCTProcessor.from_pretrained("cwkeam/m-ctc-t-large-lid")
model = MCTCTForAudioFrameClassification.from_pretrained("cwkeam/m-ctc-t-large-lid")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_ids
>>> tensor([[12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12]])
```
Where, as you can see from @lorenlugosch 's comment above, 12 is English, which we expect from librispeech demo.
<|||||>@cwkeam perfect! Then I'll go over the PR this week and prepare it for merging :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@anton-l and I won't have any time soon to look into this. Should we let it stale for now?
Or @cwkeam, @lorenlugosch would you be interested in taking this PR over? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,670 | closed | run_ner.py is slower than run_ner_no_trainer.py | ### System Info
```shell
- `transformers` version: 4.19.3
- Platform: Linux-5.15.0-35-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
Models:
BERT @LysandreJik
Library:
Trainer @sgugger
Examples:
[token-classification/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) @sgugger @pat
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I ran the example in [pytorch/token-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) with the two scripts provided in the repo: [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py) and
[run_ner_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py), respectively. The parameters for the two scripts are set identically, as follows:
1. in `run_ner.py`:
```
export TASK_NAME=ner
python3 run_ner.py \
--model_name_or_path bert-base-uncased \
--dataset_name conll2003 \
--task_name $TASK_NAME \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir \
--per_device_eval_batch_size 16 \
--do_train \
--do_eval
```
2. in `run_ner_no_trainer.py`:
```
export TASK_NAME=ner
accelerate launch run_ner_no_trainer.py \
--model_name_or_path bert-base-cased \
--dataset_name conll2003 \
--task_name $TASK_NAME \
--max_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--per_device_eval_batch_size 16
```
However, I found that `run_ner.py` is much slower than `run_ner_no_trainer.py`.
The results are:
1. in `run_ner.py`:
> ***** train metrics *****
> epoch = 3.0
> train_loss = 0.1075
> train_runtime = **_0:02:02.31_**
> train_samples = 14042
> train_samples_per_second = 344.419
> train_steps_per_second = 5.396
2. in `run_ner_no_trainer.py`:
> 33%|████████████████████████████ | 220/660 [00:22<00:44, 9.86it/s]epoch 0: {'precision': 0.8766612641815235, 'recall': 0.904666332162569, 'f1': 0.8904436579142316, 'accuracy': 0.9829800402289959}
> 67%|███████████████████████████████████████████████████████▊ | 439/660 [00:47<00:22, 9.67it/s]epoch 1: {'precision': 0.9085555373793555, 'recall': 0.9289178792440207, 'f1': 0.9186238835593781, 'accuracy': 0.986712826860591}
> 100%|████████████████████████████████████████████████████████████████████████████████████| 660/660 [**_01:11_**<00:00, 9.75it/s]epoch 2: {'precision': 0.9220243982855258, 'recall': 0.935440709148687, 'f1': 0.928684101286841, 'accuracy': 0.988356800247563}
That is, `run_ner.py` is about half the speed of `run_ner_no_trainer.py`
### Expected behavior
```shell
The two scripts should be at the same throughput.
```
| 06-11-2022 01:53:51 | 06-11-2022 01:53:51 | How many GPUs do you have? You launch the second script with `accelerate launch`, so will use distributed training if you have several GPUs, while the first script is launched with Python so is not distributed.<|||||>> How many GPUs do you have? You launch the second script with `accelerate launch`, so will use distributed training if you have several GPUs, while the first script is launched with Python so is not distributed.
Thanks for your reply.
I used two GPUs on a single machine. But the first script launched with Python also utilizes both of the GPUs when I check through the command `nvidia-smi`. Perhaps the trainer class has already used distributed launch and it is still slower.<|||||>Yes, but you're using `DataParallel` vs `DistributedDataParallel`. This is enough to explain the speed difference very likely.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,669 | closed | Fixed documentation typo, parameter name is evaluation_strategy, not eval_strategy | Noticed that there was a typo in the documentation here:
https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/trainer#transformers.TrainingArguments
I believe the parameter name should be `evaluation_strategy` and not `eval_strategy` | 06-10-2022 21:21:22 | 06-10-2022 21:21:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,668 | closed | Fix dtype getter | # What does this PR do?
This PR is a better fix for #17656 as I've had time to dive deeper into this issue. The problem does appear when the model is wrapped in a `DataParallel` object, and not just for PyTorch 1.5. | 06-10-2022 18:54:19 | 06-10-2022 18:54:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Errors seem all related to the current Hub failure, still, will wait for the site to be back online to relaunch them since this is not urgent :-) |
transformers | 17,667 | closed | [Pipelines] Add revision tag to all default pipelines | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17666
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-10-2022 15:43:23 | 06-10-2022 15:43:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil would this be ok for you? see: https://github.com/huggingface/transformers/issues/17666
I can add a simple slow test that iterates over the default pipelines to make sure that the pipeline is correctly loaded<|||||>PR is good to go for me.
@Narsil could you please take a look at this comment: https://github.com/huggingface/transformers/pull/17667#discussion_r910987282 before merging - I think there is some dead code. The default tokenizer never seems to be called.
Also cc @sgugger PR should be ready otherwise<|||||>@Narsil approved offline (tokenizer_default code is dead indeed) => merging! |
transformers | 17,666 | closed | [Default pipelines] Add a revision tag to all pipelines | ### Feature request
Continuing the discussion here: https://github.com/huggingface/transformers/pull/17286#discussion_r893716733
To make sure that the default pipelines models - some of which - are community contributed models and thus somewhat "out-of-control" for transformers, it'd be important to add a `revision` tag to those default models (maybe actually to all pipeline models).
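For context, pinning already works for explicitly requested models; the idea is to apply the same to the built-in defaults. A minimal illustration (the model id and revision below are placeholders):
```python
from transformers import pipeline

# Pin a specific revision so later pushes to the model repo cannot silently change
# the pipeline's behaviour; ideally this would be a full commit sha rather than a branch.
generator = pipeline("text-generation", model="gpt2", revision="main")
```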
### Motivation
To ensure that changing checkpoints don't break transformers
### Your contribution
Can open a quick PR | 06-10-2022 15:37:52 | 06-10-2022 15:37:52 | cc @NielsRogge @Narsil @LysandreJik @sgugger |
transformers | 17,665 | closed | Transformer Vit-MAE hard coded image channels | ### Feature request
Although the config for the model accepts the number of input channels, the class methods patchify and unpatchify use a hard-coded 3-channel image input. They should use the num_channels argument of the config. https://github.com/huggingface/transformers/blob/a727db62f4f4e5cb15020fa5efa86ae559324616/src/transformers/models/vit_mae/modeling_vit_mae.py#L899
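A rough sketch of what a channel-agnostic patchify could look like (an illustration of the idea, not the actual patch):
```python
import torch

def patchify(pixel_values: torch.Tensor, patch_size: int, num_channels: int) -> torch.Tensor:
    # (batch, num_channels, height, width) -> (batch, num_patches, patch_size**2 * num_channels)
    batch, channels, height, width = pixel_values.shape
    assert channels == num_channels and height == width and height % patch_size == 0
    n = height // patch_size
    x = pixel_values.reshape(batch, num_channels, n, patch_size, n, patch_size)
    x = torch.einsum("nchpwq->nhwpqc", x)
    return x.reshape(batch, n * n, patch_size**2 * num_channels)
```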
### Motivation
With this feature, the ViT-MAE model will be able to run on grayscale images as well.
### Your contribution
I already fixed it in my code, but I haven't tested it thoroughly | 06-10-2022 15:35:19 | 06-10-2022 15:35:19 | Hi,
This has already been asked in #17473. I'll review the related PR #17491 |
transformers | 17,664 | closed | Explicitly set utf-8 for Windows in utils required for make fixup | # What does this PR do?
I had a small issue where I could not successfully run `make fixup` due to encoding issues; Windows still does not use UTF-8 by default, unfortunately.
When looking through the whole codebase, I found other occurrences where the encoding has not been specified. These are _not_ fixed in this PR. This PR focuses on just the changes needed to at least allow the make commands to work.
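The change itself is essentially of this shape (illustrative sketch, not the exact diff):
```python
# Explicit encoding so the repo utilities read files the same way on Windows
# (where the default is often cp1252) as on Linux/macOS.
def read_text(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```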
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
Tagging @LysandreJik, who suggested I use `make fixup` (here: https://github.com/huggingface/transformers/pull/17629#issuecomment-1152359001)
| 06-10-2022 14:49:35 | 06-10-2022 14:49:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,663 | closed | [TRACKER] Add BLOOM on `pipeline` | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- `accelerate` version: 0.9.0
```
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just a tracker of the following issue
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
"bigscience/bloom",
device_map="auto",
torch_dtype=torch.bfloat16
)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=torch.device(0))
```
That throws the following error:
```
Traceback (most recent call last):
File "generate.py", line 58, in <module>
main()
File "generate.py", line 53, in main
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=torch.device(0), max_new_tokens=args.generate_max_length, greedy=args.greedy, top_k=args.top_k)
File "/gpfsssd/worksf/projects/rech/six/uan68tv/transformers/src/transformers/pipelines/__init__.py", line 666, in pipeline
return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File "/gpfsssd/worksf/projects/rech/six/uan68tv/transformers/src/transformers/pipelines/text_generation.py", line 48, in __init__
super().__init__(*args, **kwargs)
File "/gpfsssd/worksf/projects/rech/six/uan68tv/transformers/src/transformers/pipelines/base.py", line 770, in __init__
self.model = self.model.to(self.device)
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 907, in to
return self._apply(convert)
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
module._apply(fn)
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
module._apply(fn)
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 578, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 601, in _apply
param_applied = fn(param)
File "/gpfswork/rech/six/commun/conda/younes-test-bloom/lib/python3.8/site-packages/torch/nn/modules/module.py", line 905, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 1.53 GiB (GPU 0; 79.35 GiB total capacity; 77.14 GiB already allocated; 509.19 MiB free; 77.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Expected behavior
The pipeline should work correctly, but this behaviour is expected (as far as I understood); we just have to add `bloom` support in the pipeline (it is a WIP).
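For reference, a sketch of the call pattern that, as far as I understand, side-steps the `self.model.to(self.device)` call shown in the traceback; this is not a confirmed workaround, just roughly what the tracked support should enable:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom", device_map="auto", torch_dtype=torch.bfloat16
)

# Note: no device=... here. Passing a device makes the pipeline move the whole
# already-dispatched model onto a single GPU, which is what runs out of memory above.
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
```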
| 06-10-2022 14:47:59 | 06-10-2022 14:47:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,662 | closed | Translation/training: italian translation training.mdx | # What does this PR do?
* added translation.mdx
* updated _toctree.yml
See issue: [#17459](https://github.com/huggingface/transformers/issues/17459)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel @sgugger | 06-10-2022 14:17:41 | 06-10-2022 14:17:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @nickprock, I found these typos; could you edit the file? Thanks!
- "benifici" --> "benefici"
- "stratergia" --> "strategia"
- "Addestranento" --> "Addestramento"
- "Le [Trainer] API supporta" --> "L'API [Trainer] supporta"
- Here: "tutti gli iperparametri che puoi impostare" It doesn't seem so clear, I would maybe put it this way: "tutti gli iperparametri che si possono calibrare". What do you think?
- "nonchè" --> "nonché"
- "funzione [accuratezza](https://huggingface.co/metrics/accuracy)" maybe I would keep the function names in English (i.e., with the original name)
- "maggiorni informazioni" --> "maggiori informazioni"
- "le Keras API" --> "l'API di Keras"
- "Sii sicuro" --> "Assicurati" (for gender neutrality)
- "sepecificare" --> "specificare"
- "Compila e fit" --> "Compilazione e adattamento"
- "Per gli utenti che preferiscono scrivere il loro personale ciclo di addestramento" --> "Per chi preferisse scrivere un proprio ciclo di addestramento personale"
- "pò di memoria" --> "po' di memoria"
- "perchè" --> "perché"
- "adesso sei pronto per addestrare!" --> "adesso possiamo addestrare!" (for gender neutrality)<|||||>Thanks for the review @mfumanelli :pray:, I fixed them. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I fixed them and I'm waiting for the review.<|||||>cc @omarespejel |
transformers | 17,661 | closed | [Generation Test] Make fast test actually fast | # What does this PR do?
Speeds up `tests/generation/test_generation_utils.py::GenerationIntegrationTests::test_constrained_beam_search_mixin_type_checks`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-10-2022 14:10:44 | 06-10-2022 14:10:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing! |
transformers | 17,660 | closed | [Data2Vec] Speed up test | # What does this PR do?
Speeds up `tests/models/data2vec/test_modeling_data2vec_audio.py::Data2VecAudioModelTest::test_mask_feature_prob_ctc`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-10-2022 14:06:15 | 06-10-2022 14:06:15 | Speeds up data2vec test from 30s to 2s<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! |
transformers | 17,659 | closed | Bug with .from_pretrained in distributed mode on high-ram Colab instances + Accelerate | ### System Info
```shell
- `transformers` version: 4.19.3
- `accelerate` version: 0.10.0.dev0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script? No (notebook_launcher)
- Using distributed or parallel set-up in script? Accelerate
```
### Who can help?
@sgugger + others related to `.from_pretrained`
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On either a high-ram TPU Colab instance or Titan RTX/bad** machine, load the [Simple NLP Example](https://github.com/huggingface/notebooks/blob/main/examples/accelerate/simple_nlp_example.ipynb) and attempt to run it top-down.
I modified the launch part to be:
```python
notebook_launcher(training_function, (model,))
```
And changed the training function to accept a model arg for it to work
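For reference, a minimal sketch of the modified launch, following the notebook's naming (the training body is elided):

```python
from accelerate import notebook_launcher
from transformers import AutoModelForSequenceClassification

# Instantiate the model once in the main process, before any worker is spawned...
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

def training_function(model):
    # ...and only prepare/train the already-instantiated model inside the launched function.
    ...

notebook_launcher(training_function, (model,))
```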
### Expected behavior
It should just run and train fine, but instead it gets a `SIGSEGV` error. We've pinpointed it to `model = AutoModelFromPretrained...` being inside the function that gets launched in the distributed process. If we instead use the already downloaded and instantiated model, the code runs just fine.
`from_pretrained` should guarantee that only a single process loads at a time, but instead we get a SIGSEGV.
Here are two open issues in Accelerate with detailed stack traces:
- [On TPU](https://github.com/huggingface/accelerate/issues/440)
- [On bad***](https://github.com/huggingface/accelerate/issues/434)
| 06-10-2022 14:03:48 | 06-10-2022 14:03:48 | cc @pacman100 <|||||>The crux of this issue is Google Colab and its inability to clear the old model architecture out of RAM. So instead of loading the 8x model weights, then our new state dict, and then being able to free memory with a gc.collect(), the old model weights hang indefinitely in Colab's memory, unable to be freed. Gist here: https://gist.github.com/muellerzr/763feb654fc0446ed4ebf1813e0cb05e |
transformers | 17,658 | closed | [BigBirdFlaxTests] Make tests slow | # What does this PR do?
Reduce testing time for BigBird Flax (saves ~4 minutes)
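For context, "making a test slow" here means decorating it so that it only runs in the scheduled slow CI, roughly like this (class and test names are illustrative):

```python
import unittest

from transformers.testing_utils import slow

class FlaxBigBirdModelIntegrationTest(unittest.TestCase):  # illustrative test class
    @slow  # skipped unless RUN_SLOW=1 is set, so it no longer counts against the fast suite
    def test_block_sparse_attention_probs(self):
        ...
```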
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-10-2022 13:47:39 | 06-10-2022 13:47:39 | This reduces overall fast testing time of BigBird to 30 seconds from 4 minutes. It's still quite a bit, but if I remove more tests (or make them slow), we don't really test the Flax version at all anymore.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |