repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 18,860 | closed | Learning rate is given as tensor, cannot serialize TrainerState in order to save checkpoint. | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
- Python version: 3.10.5
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1.post200 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. use Adafactor optimizer and schedule
```python
from transformers.optimization import Adafactor, AdafactorSchedule

optimizer = Adafactor(
    model.parameters(),
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)
```
2. save a checkpoint
### Expected behavior
When using an `AdafactorSchedule` I can't use the `Trainer` class to save a checkpoint.
It breaks [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L97) since the learning rate attached to the `TrainerState` is given by a tensor and tensors are not JSON serializable.
I dropped a breakpoint at this line and took a look at my `TrainerState`:
```
In [4]: self.log_history[0]['learning_rate']
Out[4]: tensor(0.0001, device='cuda:0')
```
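A minimal sketch of the kind of cast that would make this value serializable (illustrative only, not an actual patch):
```python
import torch

last_lr = lr_scheduler.get_last_lr()[0]
if torch.is_tensor(last_lr):
    last_lr = last_lr.item()  # plain Python float, so TrainerState.save_to_json() can serialize log_history
```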
Expected behavior is that the learning rate attached to the log history would be given by a float and would therefore be JSON serializable. | 09-02-2022 00:10:07 | 09-02-2022 00:10:07 | |
transformers | 18,859 | closed | [modeling_utils] postpone bnb loading until and if it's needed | BNB shouldn't be loaded unless it's actually used - and definitely not by `modeling_utils.py`, which is imported everywhere:
The following shouldn't (1) generate all this noise and (2) use up memory and resources w/o an actual need:
```
$ python -c "from transformers import BloomModel"
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: CUDA runtime path found: /home/stas/anaconda3/envs/py38-pt112/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda116_nocublaslt.so...
```
Specifically, currently only using `from_pretrained(..., load_in_8bit=True)` should load it.
My proposal is probably not the best, but it solves this problem.
Probably a cleaner solution is to rewrite `src/transformers/utils/bitsandbytes.py` to delay loading its libraries until and if it is used - not sure. Totally open to other suggestions.
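A rough sketch of that lazy-import idea (illustrative only; the function body is elided and this is not the actual diff):
```python
# src/transformers/utils/bitsandbytes.py (sketch)
def replace_8bit_linear(model, *args, **kwargs):
    # Deferred import: bitsandbytes (and its CUDA setup banner) only runs when a
    # user actually calls from_pretrained(..., load_in_8bit=True).
    import bitsandbytes as bnb

    # ... existing conversion logic using bnb.nn.Linear8bitLt ...
    return model
```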
@sgugger, @younesbelkada | 09-02-2022 00:01:20 | 09-02-2022 00:01:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>actually the problem was much more severe - before this PR on a machine with no gpu, it led to this huge crash:
```
python -c "from transformers import AutoModel, AutoTokenizer, AutoConfig; AutoModel.from_pretrained('gpt2'), AutoTokenizer.from_pretrained('gpt2'), AutoConfig.from_pretrained('gpt2');"
Downloading: 100%|██████████| 665/665 [00:00<00:00, 550kB/s]
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: CUDA runtime path found: /gpfswork/rech/six/commun/conda/inference/lib/libcudart.so
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
Traceback (most recent call last):
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py", line 1031, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/gpt2/modeling_gpt2.py", line 49, in <module>
from ...modeling_utils import PreTrainedModel, SequenceSummary
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/modeling_utils.py", line 88, in <module>
from .utils.bitsandbytes import get_key_to_not_convert, replace_8bit_linear, set_module_8bit_tensor_to_device
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/bitsandbytes.py", line 10, in <module>
import bitsandbytes as bnb
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 6, in <module>
from .autograd._functions import (
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 4, in <module>
import bitsandbytes.functional as F
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/functional.py", line 14, in <module>
from .cextension import COMPILED_WITH_CUDA, lib
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 41, in <module>
lib = CUDALibrary_Singleton.get_instance().lib
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 37, in get_instance
cls._instance.initialize()
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 15, in initialize
binary_name = evaluate_cuda_setup()
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 136, in evaluate_cuda_setup
cc = get_compute_capability(cuda)
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 109, in get_compute_capability
ccs = get_compute_capabilities(cuda)
File "/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 87, in get_compute_capabilities
check_cuda_result(cuda, cuda.cuDeviceGetCount(ctypes.byref(nGpus)))
AttributeError: 'NoneType' object has no attribute 'cuDeviceGetCount'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py", line 462, in from_pretrained
model_class = _get_model_class(config, cls._model_mapping)
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py", line 359, in _get_model_class
supported_models = model_mapping[type(config)]
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py", line 583, in __getitem__
return self._load_attr_from_module(model_type, model_name)
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py", line 597, in _load_attr_from_module
return getattribute_from_module(self._modules[module_name], attr)
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py", line 553, in getattribute_from_module
if hasattr(module, attr):
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py", line 1021, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py", line 1033, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.gpt2.modeling_gpt2 because of the following error (look up to see its traceback):
'NoneType' object has no attribute 'cuDeviceGetCount'
```
basically rendering transformers completely broken if bnb was installed and the machine had no visible gpu.
after updating the clone post this PR merge all is back to normal.<|||||>@younesbelkada, I think this functionality of `load_in_8bit=True` requires checking that there is at least one gpu and cleanly assert if there isn't any. i.e this feature can be used only with gpu_count > 0.<|||||>Hi @stas00 ,
Thanks a lot for adding this! I agree with all the points stated on the PR.
Agreed also on your final suggestion, I will add a small PR to cleanly check if a GPU has been correctly detected by PyTorch. |
transformers | 18,858 | closed | V4.3.0 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-01-2022 17:34:50 | 09-01-2022 17:34:50 | |
transformers | 18,857 | closed | Clean up utils.hub using the latest from hf_hub | # What does this PR do?
This PR uses the newly released version of `huggingface_hub` to clean up a few things introduced in #18438 (points 1 and 2 in the description of this PR). For point 3 (load from cache) there is currently a difference between Transformers' `try_to_load_from_cache` and the huggingface_hub's one, so this will need a follow-up PR in `huggingface_hub` (and then we will wait for the next release). | 09-01-2022 16:55:18 | 09-01-2022 16:55:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,856 | closed | Fix number of examples for iterable datasets in multiprocessing | # What does this PR do?
As pointed out in #18608, `IterableDatasetShard.num_examples` is not always updated in multiprocessing environments. This PR addresses that by ignoring the value in those cases. It also adds a stronger check: trusting the observed number of examples whenever, for whatever reason, the reported length is 0.
Fixes #18608 | 09-01-2022 15:14:54 | 09-01-2022 15:14:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,855 | closed | Tie weights after preparing the model in run_clm | # What does this PR do?
#18676 fixed the weights tying in `run_mlm_no_trainer` but not in `run_clm_no_trainer`. This PR fixes that. | 09-01-2022 15:06:24 | 09-01-2022 15:06:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,854 | closed | Pin revision for LayoutLMForQuestionAnswering and TFLayoutLMForQuestionAnswering tests | # What does this PR do?
The newly introduced tests for `LayoutLMForQuestionAnswering` and `TFLayoutLMForQuestionAnswering` broke due to a change to the weights in https://huggingface.co/impira/layoutlm-document-qa. To make sure a weights change does not break tests, I've pinned the revisions in these tests. I'll separately investigate the weights and debug if something is broken with them.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
It was raised here: https://github.com/huggingface/transformers/pull/18407.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @ydshieh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-01-2022 14:31:36 | 09-01-2022 14:31:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,853 | closed | loading CLIPVisionModel from openai/clip-vit-base-patch32 | ### System Info
- `transformers` version: 4.12.5
- Platform: Linux-4.18.0-305.57.1.el8_4.x86_64-x86_64-with-redhat-8.4-Ootpa
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
CLIPVisionModel keys do not match the state dict available in `"openai/clip-vit-base-patch32"`. Perhaps it can be fixed but you should at least change the doc which provides the sample code below: https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModel
```py
>>> from transformers import CLIPVisionModel
>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
You are using a model of type clip to instantiate a model of type clip_vision_model. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at openai/clip-vit-base-patch32 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.1.layer_norm1.weight', …, 'text_model.encoder.layers.9.self_attn.q_proj.bias']
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
# Circumventing it
I was able to circumvent this problem easily:
```py
>>> from transformers import CLIPVisionModel, CLIPModel
>>> cv = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
# ignore warning and load the full CLIP
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
# load one state dict into the other and save it
>>> cv.vision_model.load_state_dict(model.vision_model.state_dict())
>>> cv.save_pretrained('/path/of/your/choice')
```
### Expected behavior
```py
>>> from transformers import CLIPVisionModel
>>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
# all clear
```
| 09-01-2022 13:54:40 | 09-01-2022 13:54:40 | I don't see the issue here, it is just telling you that the weights of the text encoder aren't used. Which makes total sense as you are loading only the vision encoder, whose weights are all loaded properly.
It's a warning, not an error, so I'm not sure whether this can be improved.<|||||>Oh sorry, my bad, the warning message was so huge I thought it also discarded the weights of the vision model.
transformers | 18,852 | closed | Add X-CLIP | # What does this PR do?
This PR adds [X-CLIP](https://github.com/microsoft/VideoX/tree/master/X-CLIP), which is a minimal extension of CLIP for video-language pre-training.
To do:
- [x] upload all checkpoints to the hub, as part of the `microsoft` organization
| 09-01-2022 12:15:24 | 09-01-2022 12:15:24 | Many tests fail due to the following error:
> ModuleNotFoundError: No module named 'transformers.models.xclip'
This is probably because I first called the model folder "xclip", which is now called "x_clip". Still, wondering why it keeps looking for the module `models.xclip`. If anyone has any pointers, that would be greatly appreciated.
```
==================================== ERRORS ====================================
_______________ ERROR collecting tests/utils/test_file_utils.py ________________
tests/utils/test_file_utils.py:26: in <module>
from transformers import * # noqa F406
src/transformers/utils/import_utils.py:1021: in __getattr__
value = getattr(module, name)
src/transformers/utils/import_utils.py:1023: in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
E AttributeError: module transformers.models.clip has no attribute CLIPProcessor
```
Running `RUN_SLOW=yes pytest tests/utils/test_file_utils.py` passes locally for me.<|||||>That would be because you moved `CLIPProcessor` in the non-vision dependent objects in the main init (and rightly so) but did not do the same for the `models/clip/__init__.py`.<|||||>@sgugger and @alaradirik - the PR is ready for merge. Kindly asking for your approval :) |
transformers | 18,851 | closed | Generate: get the correct beam index on eos token | # What does this PR do?
Fixes #18839
We were not storing the correct beam index when an `eos_token` was generated (except for the first batch member), resulting in the issue linked above.
________________________________
Confirming the change -- let's consider the following script, which gets the scores from `output.sequences_scores` and from `model.compute_transition_beam_scores`. Since there is no length penalty, the sum of the transition scores divided by the sequence length should match `output.sequences_scores` -- with the current codebase, it was not true except for the first batch.
```python
from transformers import BartTokenizer, BartForConditionalGeneration
model_id = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)
input_tokens = ["what do you think it ? huggingface is a great library. And I enjoy it very much",
"transformers is so good"]
batch_size = 2
num_beams = 10
max_length = 10
num_return_sequences = 5
input_ids = tokenizer(input_tokens, return_tensors='pt', padding=True).input_ids
output = model.generate(
input_ids,
max_length=max_length,
num_beams=num_beams,
num_return_sequences=num_return_sequences,
return_dict_in_generate=True,
output_scores=True
)
print("\nbeam indices:\n", output.beam_indices)
beam_lengths = (output.beam_indices != -1).sum(dim=1)
beam_scores = model.compute_transition_beam_scores(
output.sequences, output.scores, output.beam_indices, tokenizer.eos_token_id
)
print("\nsequence scores (from outputs):\n", output.sequences_scores)
print("\nsequence scores (from compute_transition_beam_scores):\n", beam_scores.sum(dim=1) / beam_lengths)
```
Output before this PR:
```
beam indices:
tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 2, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 3, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 4, -1],
[10, 10, 10, 10, 10, 10, 0, -1, -1, -1],
[10, 10, 10, 10, 10, 10, 10, 0, -1, -1],
[10, 10, 11, 11, 11, 11, 1, -1, -1, -1],
[10, 10, 10, 10, 10, 10, 10, 1, -1, -1],
[10, 10, 12, 12, 12, 12, 2, -1, -1, -1]])
sequence scores (from outputs):
tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01,
-4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00])
sequence scores (from compute_transition_beam_scores):
tensor([-0.0241, -0.5160, -0.5285, -0.6219, -0.6219, -2.4050, -2.5656, -3.4137,
-2.3775, -3.5453])
```
Output after this PR:
```
beam indices:
tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 2, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 3, -1],
[ 0, 0, 0, 0, 0, 0, 0, 0, 4, -1],
[10, 10, 10, 10, 10, 10, 10, -1, -1, -1],
[10, 10, 10, 10, 10, 10, 10, 10, -1, -1],
[10, 10, 11, 11, 11, 11, 11, -1, -1, -1],
[10, 10, 10, 10, 10, 10, 10, 11, -1, -1],
[10, 10, 12, 12, 12, 12, 12, -1, -1, -1]])
sequence scores (from outputs):
tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01,
-4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00])
sequence scores (from compute_transition_beam_scores):
tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01,
-4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00])
``` | 09-01-2022 11:11:16 | 09-01-2022 11:11:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,850 | closed | [ViTMAE] Renamed variable name | The `sequence_masked` variable is actually the part of the sequence that is kept **unmasked** for the encoder to consume. This commit renames the variable accordingly.
CC: @sayakpaul | 09-01-2022 10:01:52 | 09-01-2022 10:01:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Tagging @NielsRogge, as he's the vision master π¨βπ« <|||||>Just a reminder here!<|||||>(Niels it currently off, he'll be back in a few days :) )<|||||>FYI, I took this from the original repository: https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/models_mae.py#L140<|||||>Do you think an issue in the original repository would be a good approach to move forward? @NielsRogge <|||||>Yes indeed, to make the authors confirm!<|||||>Thanks for leading the effort. Good work!
@sgugger and @NielsRogge thanks for the reviews and actions. |
transformers | 18,849 | closed | BartForSequenceClassification: Use eos_token or cls_token? | ### System Info
NA
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The [BartTokenizer doc](https://huggingface.co/docs/transformers/v4.21.1/en/model_doc/bart#transformers.BartTokenizer) mentions that `cls_token` is attached to the beginning of the input sentence and is used as the token for sequence classification purposes. However, in the HF code it is picking the last eos_token:
https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bart/modeling_bart.py#L1514-L1521
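For reference, the linked lines do roughly the following (simplified sketch, not an exact quote):
```python
eos_mask = input_ids.eq(self.config.eos_token_id)
sentence_representation = hidden_states[eos_mask, :].view(
    hidden_states.size(0), -1, hidden_states.size(-1)
)[:, -1, :]  # hidden state of the *last* <eos> token
logits = self.classification_head(sentence_representation)
```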
### Expected behavior
The [Bart paper (Sec 3.1)](https://arxiv.org/pdf/1910.13461.pdf) matches with what the code does, i.e. the last token is to be used as the classification token.
```
3.1 Sequence Classification Tasks
For sequence classification tasks, the same input is fed
into the encoder and decoder, and the final hidden state
of the final decoder token is fed into new multi-class
linear classifier. This approach is related to the CLS
token in BERT; however we add the additional token
to the end so that representation for the token in the
decoder can attend to decoder states from the complete
input (Figure 3a).
```
Should I update the Bart doc with this method of classification? | 09-01-2022 09:25:24 | 09-01-2022 09:25:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,848 | closed | Fix minor typo in prose of model outputs documentation | # What does this PR do?
Fixes a very minor typo in the documentation of the model outputs section.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
| 09-01-2022 08:38:31 | 09-01-2022 08:38:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,847 | closed | Cannot import OPTForSequenceClassification on Kaggle notebooks (transformers 4.20.1, huggingface_hub 0.8.1) | ### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.6.4 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Below is the code snippet and the error it caused. It can import OPTForCausalLM but not OPTForSequenceClassification
```
from transformers import OPTForCausalLM
from transformers import OPTForSequenceClassification, Trainer, TrainingArguments
```
**ImportError: cannot import name 'OPTForSequenceClassification' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)**
Any pointers to what might be wrong? Thank you so much.
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import OPTForCausalLM
from transformers import OPTForSequenceClassification, Trainer, TrainingArguments
### Expected behavior
I would expect OPTForSequenceClassification to be imported normally. | 09-01-2022 08:07:33 | 09-01-2022 08:07:33 | Hey @navinelahi, this class was released in v4.21; could you upgrade your `transformers` library`?<|||||>Yes I upgraded with `pip install transformers==4.21` and now it's working. Thank you so much. The issue is solved |
transformers | 18,846 | closed | Unpin fsspec | # What does this PR do?
The `fsspec` team has made a patch release (2022.8.2) to fix their issue:
- fsspec/filesystem_spec#1032
They yanked both 2022.8.0 and 2022.8.1, so no need to pin to exclude them.
Follows:
- #18837
| 09-01-2022 05:04:15 | 09-01-2022 05:04:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,845 | closed | Remove dropout in embedding layer of OPT | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18844
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-01-2022 02:32:57 | 09-01-2022 02:32:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR @shijie-wu! Pinging @ArthurZucker and @younesbelkada to take a look at this as soon as they're back from leave (in about a week's time). |
transformers | 18,844 | closed | Dropout in OPT embedding layer | ### System Info
main
### Who can help?
@ArthurZucker, @patrickvonplaten, @LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The [OPT paper](https://arxiv.org/pdf/2205.01068.pdf) (sec 2.2) mentioned
> We use a dropout of 0.1 throughout, but we do not apply any dropout to embeddings.
This is also supported by loading [the official checkpoints](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT) and running the following check
```python
import torch
data = torch.load("reshard-model_part-0.pt")
assert data['cfg']['model'].no_emb_dropout
```
However, in `modeling_opt.py`, dropout is applied to the embedding layer.
https://github.com/huggingface/transformers/blob/80367cd1fb6d36ca6bdd99b70586aab4ffae1ae1/src/transformers/models/opt/modeling_opt.py#L641-L642
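i.e. the linked decoder code does roughly this (simplified sketch, not an exact quote):
```python
hidden_states = inputs_embeds + pos_embeds
# The next line is the embedding dropout in question; per the paper and the metaseq
# `no_emb_dropout` flag, it should not be applied to the embedding output.
hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
```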
### Expected behavior
No dropout in the embedding layer | 09-01-2022 02:26:30 | 09-01-2022 02:26:30 | |
transformers | 18,843 | closed | fix arg name in BLOOM testing and remove unused arg document | # What does this PR do?
* Fix argument name in BLOOM testing (`hidden_dropout` and `attention_dropout`)
* Remove document of unused arguments (`skip_bias_add`, `skip_bias_add_qkv` and `attn_pdrop`)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@thomasw21 @stas00 @TevenLeScao | 09-01-2022 02:05:16 | 09-01-2022 02:05:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the fix, @shijie-wu
Wrt `use_cache` won't it be better to actually set the default to `True` - most models have it set to `True` and most users will get bad performance out of the box with the default during `generate` with it being `False`.
I'm aware that the cat is out of the box, but there have been a lot of tweaks to the model post its release so perhaps such change would still be in the grace period of backward compatibility. <|||||>actually, looking at `generate` it says it defaults to `True`:
https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/generation_utils.py#L1037-L1039
but it doesn't appear to be so, as the default it uses is `None`:
https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/generation_utils.py#L919
but since most models have `use_cache == True` it's sometimes true.
tagging @patrickvonplaten - should the `generate` doc be corrected to say that the default is `None` to match the actual code? and say that unless set explicitly the model's config's `use_cache` setting is used?<|||||>I missed that `use_cache = True` for most models by default. If that's the case, I could update the default instead and revert the change to the doc.<|||||>So it's pretty clear it's most likely a unintended default as 1/3rd but 2 models have it set to `True` and 2/3 of models don't have it set at all:
```
$ grep -Ir 'use_cache=True' src/transformers/models/*/config* | wc -l
45
$ grep -Ir 'use_cache=False' src/transformers/models/*/config* | wc -l
2
$ grep -Ir 'use_cache=False' src/transformers/models/*/config*
src/transformers/models/bloom/configuration_bloom.py: use_cache=False,
src/transformers/models/trocr/configuration_trocr.py: use_cache=False,
```
so `generate's default was relying on all the models having it set to `True` which as we can see isn't always the case.
though I think my regex missed many models, let me check where the missing 100+ are.
edit: the remaining ones don't have `use_cache` set in the `configuration_foo.py` files.<|||||>~If all models set `use_cache=True` shouldn't `.generate` also have `use_cache=True` by default?~
edit: based on new info, this is no longer a question. I am happy to fix default for `bloom` and `trocr`.<|||||>Please let's wait for others to chime in. Perhaps setting it to `False` was not an omission but an intentional move.
also tagging @younesbelkada <|||||>The `use_cache` interior mechanisms are indeed a bit hard to understand. Think we haven't done a good job here. Thanks for pointing this out.
IMO, `use_cache` should be set to `True` for all models and IMO it was probably a mistake to not set it to `True` for BLOOM.
Or is there a maybe a reason behind it (cc @thomasw21 @younesbelkada ?). If `use_cache=False` it means that no tensors containing past generated keys and values are moved around - was this maybe done on purpose given the size of the model? More specifically, by setting it to False we save `num_layers` x `hidden_size` x `2` x `num_previous_generated_tokens` memory.
Regarding the generate docs we decided with @gante to write the defaults to how the default to depending on what's set in the config, so I think the docstring is ok/correct there. Also note that `generate` will soon go through a major refactor regarding the configuration.
Regarding whether we should change the default to True - I actually don't know really. If it would have been for a small model I would have said definitely yes, but for BLOOM which requires multi-gpu for inference I'm not 100% sure how much the memory consumption goes up when enabling it and running generate with `accelerate` - could we try it out ? cc @younesbelkada
<|||||>Thank you for this great feedback, Patrick.
In addition as I have just discovered there are many models that don't set `use_cache` in their config at all. And I don't think there is a super-class default that is inherited. Perhaps there should be one?
Also it appears that `use_cache` is often a tri-state:
https://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/models/t5/modeling_t5.py#L918
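i.e. roughly the usual pattern at the linked line:
```python
use_cache = use_cache if use_cache is not None else self.config.use_cache
```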
so perhaps `generate` could reflect that - or if things change with refactor then I trust the new way will take care of this.
wrt memory usage in Bloom - recalculating say 100 tokens for each new token would be quite slow. So it's a question of whether `use_cache=False` + large BS gives better throughput than `use_cache=True`+small BS using the same amount of memory - probably can measure empirically? though the outcome would be highly hardware dependent.<|||||>@LysandreJik, could we please make a note in the next release that
* BLOOM's `use_cache`'s original default value was incorrectly set to `False` and this release it has been corrected to `use_cache=True` - as this is somewhat backward compatibility breakage. and we hope our users will forgive us as this is still a new model that is going through minor fixes. `use_cache=True` leads to a much faster generation at the cost of larger memory requirements to store the cached values.
Thank you!<|||||>@stas00 I was thinking that we should probably also change it for tcocr model for consistency with other models on the library as suggested by @shijie-wu ?<|||||>I thought so too, but perhaps let's check in with the porter of that model to see if perhaps it was by design?
Would you like to do that? and perhaps in a separate PR so that it's loud and clear in the history of the project? with reference to this PR for context.
Thank you!<|||||>Sure, happy to take care of that!
Will tag also the porter of the model and double check
Thanks again! |
transformers | 18,842 | closed | Remove unused `activation_dropout` in OPT | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18309
PR for corresponding models
* https://huggingface.co/facebook/opt-125m/discussions/19
* https://huggingface.co/facebook/opt-350m/discussions/5
* https://huggingface.co/facebook/opt-1.3b/discussions/7
* https://huggingface.co/facebook/opt-2.7b/discussions/5
* https://huggingface.co/facebook/opt-6.7b/discussions/8
* https://huggingface.co/facebook/opt-13b/discussions/7
* https://huggingface.co/facebook/opt-30b/discussions/8
* https://huggingface.co/facebook/opt-66b/discussions/7
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker | 09-01-2022 01:44:42 | 09-01-2022 01:44:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker I am not sure who is the owner of the models repo but the following PRs also need to be merged.
>
> https://huggingface.co/facebook/opt-125m/discussions/19
> https://huggingface.co/facebook/opt-350m/discussions/5
> https://huggingface.co/facebook/opt-1.3b/discussions/7
> https://huggingface.co/facebook/opt-2.7b/discussions/5
> https://huggingface.co/facebook/opt-6.7b/discussions/8
> https://huggingface.co/facebook/opt-13b/discussions/7
> https://huggingface.co/facebook/opt-30b/discussions/8
> https://huggingface.co/facebook/opt-66b/discussions/7
> <|||||>Hey, thanks for notifying, I will take care of it π |
transformers | 18,840 | closed | Generate: smaller TF serving test | # What does this PR do?
`tests/test_modeling_tf_common.py::UtilsFunctionsTest::test_generate_tf_function_export` is often failing because it times out (>60s).
The previous version took ~35s on my machine. This PR's version takes ~19s, which may avoid the time out issue.
If it still fails, a custom config must be added :) | 08-31-2022 20:17:39 | 08-31-2022 20:17:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,839 | closed | BUG for beam_indices from model.generate() | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_path = "/data/pretrained_model/bart_base"
toker = BartTokenizer.from_pretrained(model_path)
model = BartForConditionalGeneration.from_pretrained(model_path)

input_tokens = [
    "what do you think it ? huggingface is a great library. And I enjoy it very much",
    "transformers is so good",
]
batch_size = 2
num_beams = 10
max_length = 10
num_return_sequences = 5

input_ids = toker(input_tokens, return_tensors="pt", padding=True).input_ids
output = model.generate(
    input_ids,
    max_length=max_length,
    num_beams=num_beams,
    num_return_sequences=num_return_sequences,
    return_dict_in_generate=True,
    output_scores=True,
)
print(output.beam_indices)
```
[screenshot 1: the printed `output.beam_indices`, with the problematic second-batch entries boxed in red]
[screenshot 2: `output.sequences` for the second batch]
### Expected behavior
It is super weird that the `beam_indices` of the second batch contain indices pointing into the first 10 beams. If we calculate the average logits across the sentence according to these `beam_indices`, we don't recover `output.sequences_scores`. So I think the numbers in the red box of the first picture should have 10 (`num_beams`) added to them; if we add 10, we get the correct token generated in `output.sequences[5]`, as shown in the second picture
Check #18851 for the fix and for snippets that confirm the correctness after the fix π€
Regarding the forum issue -- it seems like a relevant problem. Could you please open an issue here on GitHub? β€οΈ |
transformers | 18,838 | closed | Skip XNLI test | Skips an XNLI test that currently fails due to https://github.com/fsspec/filesystem_spec/issues/1034 | 08-31-2022 16:34:16 | 08-31-2022 16:34:16 | Preceded by https://github.com/huggingface/transformers/pull/18837<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18838). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,837 | closed | Pin fsspec | # What does this PR do?
The recent test failures on XNLI are due to a release of fsspec. This PR excludes the problematic version to avoid the test failures. | 08-31-2022 16:32:17 | 08-31-2022 16:32:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,836 | closed | Input_embeds for Albert | Hello! I am checking the source code for Albert's `inputs_embeds`, but I think the open-sourced code does not match the documentation:
In the documentation, the description for `inputs_embeds` says that it should be of shape (batch_size, sequence_length, hidden_size).
But in the source code (https://huggingface.co/transformers/v3.4.0/_modules/transformers/modeling_albert.html) we have the following:
```python
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)

position_embeddings = self.position_embeddings(position_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = inputs_embeds + position_embeddings + token_type_embeddings
```
So in order for the final addition to be valid, the shapes of `inputs_embeds` and the other two embeddings have to match. But the default embedding size is 128 while the default hidden size is 768 ("albert-base-v2"). So if we pass `inputs_embeds` with the shape described in the documentation, we get a shape error.
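A minimal check that shows the mismatch (a sketch, assuming the behavior of the snippet quoted above):
```python
import torch
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v2")
documented = torch.randn(1, 4, model.config.hidden_size)   # (1, 4, 768) as documented
expected = torch.randn(1, 4, model.config.embedding_size)  # (1, 4, 128) as the code expects

model(inputs_embeds=expected)      # works
# model(inputs_embeds=documented)  # raises a shape error in the embedding addition
```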
| 08-31-2022 14:05:31 | 08-31-2022 14:05:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,835 | closed | Adding multiprocessing option to transformers.pipelines.automatic_speech_recognition | ### Feature request
Hi,
in `transformers.pipelines.automatic_speech_recognition`, in the case of `self.type = ctc_with_lm`, the `postprocess` method uses `self.decoder.decode_beams(items)`. This is slow because it decodes the items iteratively.
`decoder.decode_beams_batch(pool, items)` is able to make things faster and parallel.
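A sketch of the requested usage (illustrative; the variable names and pool handling here are assumptions):
```python
from multiprocessing import get_context

logits_batch = [item["logits"] for item in items]  # hypothetical batch of CTC logits
with get_context("fork").Pool() as pool:
    beams = decoder.decode_beams_batch(pool, logits_batch)
```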
### Motivation
`transformers.pipelines.automatic_speech_recognition` is really slow in complex `ctc_with_lm` scenarios.
### Your contribution
`None` | 08-31-2022 13:25:27 | 08-31-2022 13:25:27 | WDYT @Narsil?<|||||>Hi, I don't mind having a parameter for that for sure.
The biggest reason I don't think it should be the defaults is that some users might already be using different processes for different pipelines so doing parallelism twice is usually hurtful.
Also, do you mind providing small benchmarks to see the performance improvement ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,834 | closed | Fix add model like | It seems that `pip show` does not return the same location as before for editable packages.
However, listing editable packages (`pip list -e`) returns the correct location in which it was installed. Doing it locally returns the following:
```
Package Version Editable project location
------------ ----------- ---------------------------------------------
transformers 4.22.0.dev0 /home/lysandre/Workspaces/python/transformers
```
I'm therefore updating the way to identify if we're looking at the correct install to use `pip list -e` instead of `pip show`. | 08-31-2022 13:04:45 | 08-31-2022 13:04:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18834). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,833 | closed | TF: TFMarianMTModel final logits bias as a layer | # What does this PR do?
Fixes #18802
As stated in the issue above, `final_logits_bias` in `TFMarianMTModel` are not being loaded at `from_pretrained(...)` time. The PT model has this variable defined, and thus the outputs of the model in the two frameworks are very different (>1e-1).
Actually, these weights are also not being stored when the TF version is saved, for the same reason -- only layers are stored/loaded with the functions we are using (`.save_weights` and `.load_weights`), and this bias weight is not inside a layer.
As a solution, this PR moves the bias to a layer and creates an alias for it, resulting in no interface changes. After this change, the models from `Helsinki-NLP` can be converted with the `pt-to-tf` CLI, passing all the quality checks.
⚠️ Other models have this pattern, so I will apply the change to them in a separate PR if this one gets approved. | 08-31-2022 12:44:33 | 08-31-2022 12:44:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante Thanks a lot. It looks like it works well!
However, there is one thing I don't understand quite well.
```bash
(Pdb) [x.name for x in model.non_trainable_weights]
['final_logits_bias:0']
```
and this is good, as it makes loading work correctly. But I was expecting to see `['final_logits_bias.final_logits_bias:0']`, since you pass the name to the layer as well as to `add_weight`.
Is it true that when we use `add_weight` inside a layer, that layer name won't appear in the variable name for that weight?
(I set a breakpoint at in `src/transformers/modeling_tf_utils.py` at line 847)<|||||>@ydshieh hah, I had the same question but I tried, it worked, and I forgot to dig deeper to understand why :D
After some digging, I found that it is poorly documented -- variables created with `.add_weight` are set without any name scope, i.e. their name consists of the name set in `name`. This is opposed to the weights from layers, such as `tf.keras.layers.Dense`, that automatically get a scoped name according to the `name` of the layers (e.g. `foo/bar/weights:0`).
This implies that initializing `BiasLayer` with a `name` has no effect whatsoever regarding weight storing/loading. If we wanted the weights to have a scoped name (we don't here), we could either hardcode it in `name` ([example](https://github.com/huggingface/transformers/blob/811c4c9f79758235762b4f70ffae00deae494fb1/src/transformers/models/albert/modeling_tf_albert.py#L493)) or use `tf.name_scope` ([example](https://github.com/huggingface/transformers/blob/811c4c9f79758235762b4f70ffae00deae494fb1/src/transformers/models/albert/modeling_tf_albert.py#L150)).
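For reference, the pattern being discussed looks roughly like this (a sketch, not the exact implementation):
```python
import tensorflow as tf


class BiasLayer(tf.keras.layers.Layer):
    """Wraps the bias in a layer so that save_weights/load_weights pick it up."""

    def __init__(self, shape, initializer, trainable, name, **kwargs):
        super().__init__(name=name, **kwargs)
        # Note: this weight name is NOT scoped by the layer name (see the comment
        # above about `add_weight`), so it stays e.g. "final_logits_bias:0".
        self.bias = self.add_weight(name=name, shape=shape, initializer=initializer, trainable=trainable)

    def call(self, x):
        return x + self.bias
```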
I'm adding a link to this comment in the code, for future reference.<|||||>Thanks a lot @gante , you are the best! |
transformers | 18,832 | closed | Delete `state_dict` to release memory as early as possible | # What does this PR do?
Fix #18782.
Note that this is not a real memory issue. A call to `gc.collect()` at the end of `from_pretrained()` works well too.
However, this PR simply adds `del state_dict` at the end of `_load_state_dict_into_model()`, so the GC is able to perform housekeeping on its own at an earlier time. | 08-31-2022 12:24:04 | 08-31-2022 12:24:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The change regarding having a new argument `state_dict` in the nested function `load` is to pass the flake8 check, otherwise we get
```bash
src/transformers/modeling_utils.py:422:17: F821 undefined name 'state_dict'
```
with the new line `del state_dict`. (It's quite strange though)<|||||>Ready for review.<|||||>The failing test is `test_encodings_from_xnli_dataset` which is irrelevant to this PR. |
transformers | 18,831 | closed | Add support for open_clip | ### Feature request
Add `open_clip` (https://github.com/mlfoundations/open_clip) support to Transformers
### Motivation
open_clip has released ViT-B-32, ViT-B/16, ViT-B/16+, ViT-L/14 trained on LAION-400M and LAION-2B which are very relevant models - matching and sometimes surpassing OAI models benchmarks - but are not yet compatible with Transformers.
Also, [soon a ViT-H is going to drop](https://twitter.com/EMostaque/status/1558851591469400066) which will be the SOTA open source CLIP (since OAI never open sourced their ViT-H used to train DALL-E 2) - so it will also make it even more relevant to support OpenCLIP models and code
cc @rwightman | 08-31-2022 10:34:14 | 08-31-2022 10:34:14 | (Is the "new model" tag adequate here or it would be considered adapting to the existing CLIP model?) <|||||>cc'ing @patil-suraj. From the README, it seems that they replaced OpenAI's `quickgelu` with `torch.nn.GELU`, which is apparently better.
Normally the strategy is to add a new model, no matter how small the changes to an existing model.<|||||>@apolinario @NielsRogge
@rom1504 and were discussing moving OpenCLIP LAION trained weights to the HF hub under the LAION org https://huggingface.co/laion ... for enhanced visibility. It'd be nice if the PyTorch model.bin could be shared with the OpenCLIP use (so remap if the HF transformers keys are a bit different).
I can't comment on the 'add a new model bit', seems unecessary for adding a changeable activation but not a big deal either way. However, there will may be some (small) architecture additions in future LAION + OpenCLIP model releases so requiring yet another model for those would be, well a bit exessive. They will all be done in a manner that can be dynamically enabled/disabled without breaking backwards weight compat.
Romain and I are currently training larger models on LAION-2B (english). I'm using remainder of JUWELS research grant for an L/14, Romain is working via Stability on H/14 and possibly g/14. We've both run into stability problems at this data + model scale, hence the 'might need arch' additions. The checkpoints for H/14 and g/14 will be 3.7G and 5G respectively, so one model hub instance per model would also avoid some waste here :)
Also, since OpenCLIP is a relatively small and unknown project, it would be nice to keep some pointers and links back there for people looking to train from scratch and/or fine-tune the models.
EDIT re the 'visibility', and LAION org, we've been working on this paper and will likely do a splash once the next revision is out and we get some more results https://openreview.net/forum?id=M3Y74vmsMcY
<|||||>Hey @rwightman, excited to hear you'd like to contribute Open CLIP to `transformers`!
The implementation of `CLIP` is done using the `ACT2FN` activation function dictionary: https://github.com/huggingface/transformers/blob/a26c752353a127ba8e4728413806f545718a8d78/src/transformers/models/clip/modeling_clip.py#L281
If this is the only change necessary, then it should be loadable directly in the existing architecture by specifying the appropriate `hidden_act` configuration option.
Do you have in mind what other changes might be needed down the road for the support of additional checkpoints? I would be personally be open to having an `OpenCLIP` model archigtecture which could be the host for current checkpoints and upcoming checkpoints, even while unaware of the changes that might need to be done in the future (therefore with modeling code that would be a bit more dynamic than others), but I'm pinging @patrickvonplaten and @sgugger for their opinion as well.<|||||>Agreed with @LysandreJik . The change of activation function does not require a new model by itself (since you can set the right one in the config) but if you anticipate other modeling tweaks, a new architecture definitely makes sense.<|||||>I was hoping to have transformers CLIP, OpenCLIP, and timm (for vision backbone finetune) all use the same hub weights ... ie something like,
* `https://huggingface.co/laion/vit_base_patch32_laion400m`
* `https://huggingface.co/laion/vit_base_patch32_laion2b`
etc
For `timm`, I'll just remux the weight into timm vit style on the fly. I was thinking the weights would natively match their source (ie OpenCLIP, which is just OpenAI model names w/o any jit / typing mess). Is there any precedent for doing remap of a pytorch bin file on the fly in transformers or are hub weights always native without any on the fly conversion?
<|||||>No, there is no on-the-fly conversion in Transformers. The state dict is loaded as is.<|||||>Closing as support has been added, see e.g. https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K<|||||>> cc'ing @patil-suraj. From the README, it seems that they replaced OpenAI's quickgelu with torch.nn.GELU, which is apparently better.
>
> Normally the strategy is to add a new model, no matter how small the changes to an existing model.
Should the default `hidden_act` be `gelu` then in `CLIPConfig`?<|||||>@fxmarty changing the default activation in the config would be way too breaking though :-)<|||||>Hi! We have a similar issue; we want to bring a CLIP model we fine-tuned using Open AI's CLIP [implementation](https://github.com/openai/CLIP/blob/main/clip/model.py) to the Hub. As far as I understand, the two implementations do not match 1-to-1, so... is there any public script to readapt the weights?
I am asking here since it seems related. Let me know if it's better to open a new issue.
P.S. In the process of analysis of the two models, we also noticed that our model, as well as Open AI's weights (e.g., on [azure](https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt)), weigh 300MB while HF's checkpoints weigh 600MB. Do you know why?<|||||>@g8a9 checkpoint size differences are likely either due to train checkpoint (ie incl optimizer state) vs state dict only, or one is float32 and the other is float16 (since it's exactly 2x I'm guessing the latter).
Original OpenAI -> Transformers conversion code is in transformers https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py ... it's error prone though, so be careful, the params are just being copied so if you have any mismatch in sizing it will fail silently.
I have a modified conversion script as a gist that I used to convert OpenCLIP models to Transformers (the ViT OpenCLIP models w/ standard text tower match OpenAI checkpoint naming). It uses copy_ so you get an error if param sizes don't match, but it was hacked together so I manually plugged each model config in.
https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990
<|||||>> Original OpenAI -> Transformers conversion code is in transformers https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py ... it's error prone though, so be careful, the params are just being copied so if you have any mismatch in sizing it will fail silently.
Interesting, I usually use `model.load_state_dict` with the default `strict=True` to make sure any missing or unexpected keys as well as size mismatches are caught when porting models. I can add a similar script to convert OpenCLIP models to Transformers if you want.<|||||>> I have a modified conversion script as a gist that I used to convert OpenCLIP models to Transformers (the ViT OpenCLIP models w/ standard text tower match OpenAI checkpoint naming). It uses copy_ so you get an error if param sizes don't match, but it was hacked together so I manually plugged each model config in.
>
> https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990
Thanks @rwightman ! We'll start tweaking from here. |
transformers | 18,830 | closed | ValueError: Unknown layer: Custom>TFViTMainLayer when using a Google transformer model in Streamlit | ### System Info
transformers == 4.21.1
tensorflow == 2.9.1
streamlit ==1.11.1
Windows 10
### Who can help?
@Rocketknight1 @NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import streamlit as st
import numpy as np
from PIL import Image
import tensorflow as tf
st.title("Binary Human Detection Web App")
st.markdown("Is there a human in office space?")
## Initialize tensorflow model (This can be loaded before anything else)
path_to_model = "C:/Users/myname/Jupiter_Notebooks/Dataset_Thermal_Project/Camera_videos/Saved_models/model_vit.h5"
model_loader = tf.keras.models.load_model(path_to_model)
model_vit = tf.keras.models.Model(model_loader.inputs, model_loader.outputs)
## Preprocess images
def preprocessImage(photo):
resize_photo = photo.resize((224,224))
normalized_photo = np.array(resize_photo)/255 # a normalised 2D array
reshaped_photo = normalized_photo.reshape(-1, 224, 224, 3) # to shape as (1, 224, 224, 3)
return reshaped_photo
uploaded_file = st.sidebar.file_uploader(" ",type=['jpg', 'jpeg'])
if uploaded_file is not None:
## Use a context manager to make sure to close the file!!
with Image.open(uploaded_file) as photo:
tensorflow_image = preprocessImage(photo)
## Show preprocessed image
streamlit_widget_image = st.image(tensorflow_image, 'Uploaded Image', use_column_width=True)
## Do prediction
if st.sidebar.button("Click Here to Predict"):
if uploaded_file is None:
st.sidebar.write("Please upload an Image to Classify")
else:
## Pass the preprocessed image to the vit model (not the streamlit widget)
pred_label = model_vit.predict(tensorflow_image)[0]
## Print prediction
st.sidebar.header("ViT model results:")
if pred_label > 0.5: st.sidebar.info('Human is detected')
else: st.sidebar.info('No human is detected')
```
### when I run this I get the ValueError
ValueError: Unknown layer: Custom>TFViTMainLayer. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
```
#### my model in Tensorflow
# Base model pre-trained on ImageNet-21k with the 224x224 image resolution
base_model = TFViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
# Freeze base model
base_model.trainable = False
# Create new model
inputs = keras.Input(shape = (3, 224, 224))
x = data_augmentation_vit(inputs)
vit = base_model.vit(inputs)[0]
vit = keras.layers.GlobalAveragePooling1D()(vit)
vit = tf.keras.layers.Dense(256, activation='relu')(vit)
vit = tf. keras.layers.Dropout(0.15)(vit)
outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(vit)
model_vit = tf.keras.Model(inputs, outputs)
print(model_vit.summary())
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 3, 224, 224)] 0
vit (TFViTMainLayer) TFBaseModelOutputWithPoo 86389248
ling(last_hidden_state=(
None, 197, 768),
pooler_output=(None, 76
8),
hidden_states=None, att
entions=None)
global_average_pooling1d (G (None, 768) 0
lobalAveragePooling1D)
dense_2 (Dense) (None, 256) 196864
dropout_37 (Dropout) (None, 256) 0
outputs (Dense) (None, 1) 257
=================================================================
model_vit.save("Saved_models/model_vit.h5")
```
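A possible workaround, as the error message itself suggests, is to register the custom layer via `custom_objects` when loading. A sketch of what that could look like, assuming `TFViTMainLayer` can be imported from `transformers.models.vit.modeling_tf_vit` (an assumption, not verified here):
```python
import tensorflow as tf
from transformers.models.vit.modeling_tf_vit import TFViTMainLayer  # assumed import path

model_loader = tf.keras.models.load_model(
    "Saved_models/model_vit.h5",
    custom_objects={"TFViTMainLayer": TFViTMainLayer},
)
```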
### Expected behavior
I have a model, model_vit.h5, trained and saved in Tensorflow based on google's
vit-base-patch16-224-in21k model.
I expect it to make a prediction in my app like other models. Yet I am not sure how to register custom object for the TFViTMainLayer in this model. | 08-31-2022 08:28:05 | 08-31-2022 08:28:05 | here is the link to model_vit.h5 https://drive.google.com/file/d/1ASXJ6-QVxV7W-rVUV57pUy5sYK1BokZ4/view?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,829 | closed | follow layoutlmv3 to avoid device error | # What does this PR do?
Slightly modify the way we calculate the height and width embeddings for LayoutLMv2. The calculation is simply the same as LayoutLMv3's, to avoid the device-assert error when we run experiments on GPU.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L270-L271
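For reference, a minimal sketch of the clamping pattern referenced above (assuming 1024 two-dimensional position embeddings, so valid lookup indices are 0 to 1023):
```python
import torch

# hypothetical shapes: bbox is (batch, seq_len, 4) holding (x0, y0, x1, y1)
bbox = torch.randint(0, 1000, (2, 16, 4))

# clamping keeps the embedding lookup indices in range, which avoids the
# device-side assert that an out-of-range index triggers on GPU
height_ids = torch.clip(bbox[:, :, 3] - bbox[:, :, 1], 0, 1023)
width_ids = torch.clip(bbox[:, :, 2] - bbox[:, :, 0], 0, 1023)

h_position_embeddings = torch.nn.Embedding(1024, 32)(height_ids)
w_position_embeddings = torch.nn.Embedding(1024, 32)(width_ids)
```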
Fixes # (issue)
## Who can review?
Models:
- LayoutLMv2: @patrickvonplaten @NielsRogge
| 08-31-2022 08:23:06 | 08-31-2022 08:23:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18829). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,828 | closed | Add a vit-based ocr model to hugging face | ### Model description
We want to add the MGPSTR model (ECCV 2022) to Hugging Face.
MGPSTR is a ViT (Vision Transformer)-based pure vision model for STR, which shows its superiority in recognition accuracy. It has a Multi-Granularity Prediction (MGP) strategy to inject information from the language modality. The MGPSTR algorithm achieves state-of-the-art performance.
We followed the guidance of https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model, but encountered some problems, such as being unable to find a suitable huggingface-hub version when installing the environment.
```
ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.8.1 (from transformers[dev]) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0)
ERROR: No matching distribution found for huggingface-hub<1.0,>=0.8.1
```
Can I get some help or guidance?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The paper will be published soon. | 08-31-2022 03:23:35 | 08-31-2022 03:23:35 | Sure, do you have an email address? We can set up a slack channel for easier communication <|||||>Yes, my email address is [email protected].
Do I need to register for slack in advance?<|||||>You should have received an invite by email :) |
transformers | 18,827 | closed | [HF Trainer] [new optimizer] add `AnyPrecisionAdamW` (bf16) | ### Feature request
pytorch just merged https://github.com/pytorch/torchdistx/pull/52, which adds `AnyPrecisionAdamW` (bf16-support, and future new dtypes)
we should add it to our HF Trainer arsenal
This is open to the community - it shouldn't be too difficult to add by just checking the existing optimizers. Here are some pointers to start unraveling:
https://github.com/huggingface/transformers/blob/e88e9ff045347c9d92d85806a6987dc7ebcbdd5b/src/transformers/training_args.py#L393-L394
and
https://github.com/huggingface/transformers/blob/e88e9ff045347c9d92d85806a6987dc7ebcbdd5b/src/transformers/training_args.py#L94-L106
the key of course is the documentation and tests. checking the existing tests and working from there is what's needed.
One would start looking at mimicking the integration of other optimizers,
So in this case it'd follow the path of `adamw_torch`, as it's the nearest similar optimizer.
it might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from. And you can see the scope of work that needs to be done. Except this one should be simpler than `adamw_bnb_8bit` as it just plugs in a core pytorch optimizer, that's why I said `adamw_torch` is another good model.
Please remember that this requires pytorch-nightly as this new feature hasn't made it yet into pytorch-1.13. So you will need to install it from https://pytorch.org/get-started/locally/ (Choose Preview (Nightly))
Thank you!
| 08-31-2022 01:32:42 | 08-31-2022 01:32:42 | Hello, I'll like to be assigned to this issue. <|||||>Yes, please, @Zeesky-code - once you have a working PR please tag me there.
Thank you!<|||||>No longer working on this issue and it's now open to the community.
Thank you :)
> Hello, I'll like to be assigned to this issue.
<|||||>Hi, may I take it if it's available?<|||||>Yes, please. But please read the OP first and see if you understand what needs to be done. Thank you!<|||||>@stas00, where do I import `AnyPrecisionAdamW` from? I tried installing `pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu`<|||||>Oh! my bad! it's not pt-nightly but `torchdistx`
```
$ git clone https://github.com/pytorch/torchdistx
$ cd torchdistx/
$ grep -Ir AnyPrecisionAdamW
src/optimizers/anyprecision_optimizer.py:# AnyPrecisionAdamW: a flexible precision AdamW optimizer
src/optimizers/anyprecision_optimizer.py:class AnyPrecisionAdamW(Optimizer):
src/optimizers/anyprecision_optimizer.py: "AnyPrecisionAdamW does not support sparse gradients"
```
We can probably try something like this:
```
try:
from optimizers.anyprecision_optimizer import AnyPrecisionAdamW
except ImportError:
raise ValueError("please install https://github.com/pytorch/torchdistx")
```<|||||>also please note that the import is about to move and once this PR is merged https://github.com/pytorch/torchdistx/pull/60 please update your clone. Thank you!
so it'll become:
```
from torchdistx.optimizers.anyprecision_optimizer import AnyPrecisionAdamW
```<|||||>Hi @stas00 and @atturaioe -
Just wanted to drop in here to say thanks for integrating AnyPrecision!
Also wanted to let you know I'm adding a bfloat16 check internal to the anyPrecisionAdamW optimizer as a safety mechanism, and working on the documentation as well.
Please let me know if you hit specific integration issues or questions, but for now the very short documentation preview is that there are two primary use cases for AnyPrecision currently:
a - successfully training *entirely* in pure BF16 - you can do this b/c kahan summation ensures high precision updates to the weights. Without that you will hit 'weight stagnation' where BF16 can't keep up like FP32 over the training cycle. This gives you faster training with lower memory requirements and generally meets or even exceeds full FP32 results (some regularization effect).
Has been tested up to 11B param size models.
Basic process:
~~~
# init model
my_model = build_model(config)
# move model to all BF16
my_model.to(torch.bfloat16)
# run AnyPrecision in pure BF16 with Kahan - pass in usual adamW options in ... below:
optimizer = AnyPrecisionAdamW(my_model.parameters(),...,
momentum_dtype=torch.bfloat16, variance_dtype=torch.bfloat16,
use_kahan_summation=True)
~~~
b - Training with AdamW variance state in BF16 - results in both memory savings and training speed up, and in limited testing up to 1B, matches mixed precision results after second epoch. (variance state (the variance of the variance) typically rapidly declines after second epoch or so, which was the intuition that fp32 precision probably is not needed after that).
~~~
optimizer = AnyPrecisionAdamW(my_model.parameters(),..., momentum_dtype=torch.float32, variance_dtype=torch.bfloat16, use_kahan_summation=False)
~~~
Hope that the above is helpful for now. Will have more detailed docs etc. this week but please let me know if any questions in the interim and thanks again for the integration work here!
<|||||>That's very helpful, Less - thank you for sharing these use cases and the details!
I will leave to @atturaioe the stage to ask questions as he has been performing the heavy lifting on this task.<|||||>I should add - setting momentum_dtype and variance_dtype to torch.float32 and use_kahan_summation=False, brings AnyPrecision to the traditional AdamW optimizer so you can quickly compare using BF16, pure or variance only, for your training. <|||||>> setting momentum_dtype and variance_dtype to torch.float32 and use_kahan_summation=False, brings AnyPrecision to the traditional AdamW optimizer so you can quickly compare using BF16, pure or variance only, for your training.
awesome, that would make a good quality test then.
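A rough sketch of such an equivalence check (assuming `AnyPrecisionAdamW` accepts the usual AdamW keyword arguments and the `torchdistx` import path mentioned earlier):
```python
import torch
from torchdistx.optimizers.anyprecision_optimizer import AnyPrecisionAdamW  # path per the torchdistx PR above

torch.manual_seed(0)
ref_model = torch.nn.Linear(8, 8)
test_model = torch.nn.Linear(8, 8)
test_model.load_state_dict(ref_model.state_dict())

ref_opt = torch.optim.AdamW(ref_model.parameters(), lr=1e-3)
test_opt = AnyPrecisionAdamW(
    test_model.parameters(),
    lr=1e-3,
    momentum_dtype=torch.float32,
    variance_dtype=torch.float32,
    use_kahan_summation=False,
)

x = torch.randn(4, 8)
for model, opt in ((ref_model, ref_opt), (test_model, test_opt)):
    opt.zero_grad()
    model(x).pow(2).sum().backward()
    opt.step()

# with fp32 state and no Kahan summation, the update should closely track torch.optim.AdamW
print(torch.allclose(ref_model.weight, test_model.weight, atol=1e-6))
```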
Let's continue the discussion in the PR https://github.com/huggingface/transformers/pull/18961 so it's more "actionable" :)<|||||>@stas00 Hi! I'd like to pick up this issue if no one else is working on it at the moment.<|||||>Oh, thank you for bringing this up, @mollerup23 - and wanting to contribute!
This has already been resolved in https://github.com/huggingface/transformers/pull/18961
We just forgot to close this issue. |
transformers | 18,826 | closed | Examples do not seem to work on any spaces right now (possible downtime?) | ### System Info
This is observed online on spaces:
E.g. https://huggingface.co/spaces/nielsr/donut-docvqa, if you click any of the examples, you see

Similarly, https://huggingface.co/spaces/impira/docquery produces console errors like:
```
POST https://hf.space/embed/impira/docquery/api/predict/ 500
post_data @ index.09173af6.js:7790
(anonymous) @ index.09173af6.js:7872
(anonymous) @ index.09173af6.js:6566
(anonymous) @ index.09173af6.js:506
(anonymous) @ index.09173af6.js:505
click_handler_1 @ index.d284cf1a.js:1881
click_handler_1 @ index.d284cf1a.js:1346
```
I also tried https://huggingface.co/spaces/Epoching/DocumentQA.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Visit any space with examples, and try clicking on them.
### Expected behavior
The examples should populate. | 08-30-2022 23:07:59 | 08-30-2022 23:07:59 | Hey @ankrgyl! I tried using them a few seconds ago and it seems to work; is the issue still happening from your side? It may have been a transient error.<|||||>It wasn't working until around 11:15 PM PST, at which point the website seem to have reset (see screenshot below) and ~10 minutes later, it started working.

During this period, I also noticed that deploying a gradio space with `enable_queue=False` would not work -- it seemed like something was broken with the `/predict` handler (which gets called in that case). I did some extensive testing while working with the Gradio team on https://github.com/gradio-app/gradio/issues/2132.<|||||>Understood, thank you! Should this be closed as it seems the error has been resolved?<|||||>Yes it can definitely be closed! I mostly opened it in case it was helpful as an alert while things were behaving weirdly.<|||||>Sounds good! Let me close it and feel free to reopen if you ever run into something similar. |
transformers | 18,825 | closed | Wav2Vec2ProcessorWithLM in pipeline issue | Opening a new issue for better tracking purposes. This issue follows: https://github.com/huggingface/transformers/issues/16759
> Hey @gxbag,
>
> Please make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `"language_model/vocabulary.txt"`.
>
> Regarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following:
>
> ```python
> from transformers import AutoProcessor
> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
> vocab_dict = processor.tokenizer.get_vocab()
>
> from pyctcdecode import build_ctcdecoder
> unigrams_file = open("language_model/vocabulary.txt", "r")
> unigrams_list = unigrams_file.readlines()
> decoder = build_ctcdecoder(
> labels=list(vocab_dict.keys()),
> kenlm_model_path="language_model/5gram.bin",
> unigrams=unigrams_list
> )
>
> from transformers import Wav2Vec2ProcessorWithLM
> processor_with_lm = Wav2Vec2ProcessorWithLM(
> feature_extractor=processor.feature_extractor,
> tokenizer=processor.tokenizer,
> decoder=decoder
> )
>
> from transformers import pipeline
> pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0)
> ```
>
> This should correctly initialize the pipeline.
Hi @patrickvonplaten ,
I've just tried your solution. However, it does not use the LM for decoding. `self.type` is always `"ctc"` as `feature_extractor._processor_class` is always `None`. See here:
https://github.com/huggingface/transformers/blob/b487096b02307cd6e0f132b676cdcc7255fe8e74/src/transformers/pipelines/automatic_speech_recognition.py#L127
And this is my code:
``` python
model = Wav2Vec2ForCTC.from_pretrained("./results/checkpoint-11600").to("cuda")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
from pyctcdecode import build_ctcdecoder
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="lm.small_3gram_correct.arpa",
)
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
pipe = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor_with_lm.tokenizer,
feature_extractor=processor_with_lm.feature_extractor,
decoder=processor_with_lm.decoder,
device=0)
```
Any clues? | 08-30-2022 22:22:26 | 08-30-2022 22:22:26 | Hi @anderleich,
I am encountering the same issue when I try to use the AutomaticSpeechRecognitionPipeline in combination with a Language Model. Have you been able to find a solution? I've tracked the issue down to the same lines of code as you; I cannot get self.type to evaluate to "ctc_with_lm" with the models I am using, even though they work fine when I use them outside of the pipeline.
Best wishes,
Judith<|||||>Hi @judithvdw ,
Not yet. I decided to use the LM outside the pipeline for the moment<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patrickvonplaten ,
Any suggestions on this?<|||||>cc @sanchit-gandhi <|||||>Hey @anderleich,
As a temporary fix, could you set the feature extractor's `_processor_class` attribute manually?
```python
model = Wav2Vec2ForCTC.from_pretrained("./results/checkpoint-11600").to("cuda")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
from pyctcdecode import build_ctcdecoder
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="lm.small_3gram_correct.arpa",
)
# set class manually
feature_extractor._set_processor_class("Wav2Vec2ProcessorWithLM")
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor= feature_extractor,
tokenizer=tokenizer,
decoder=decoder,
)
pipe = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor_with_lm.tokenizer,
feature_extractor=processor_with_lm.feature_extractor,
decoder=processor_with_lm.decoder,
device=0,
)
```
I'll take a deeper look into why the class is defaulting to None<|||||>Did you happen get a chance to look further into this? Not to push you, but just to make sure the bot doesn't close the issue again for a lack of activity.<|||||>I haven't had the chance sadly - keeping the bot from closing the issue! Maybe if you have the chance to look into this @hollance?<|||||>Sure, I'll have a look.
<|||||>I can't reproduce this. I used the following code:
```python
from transformers import (
AutomaticSpeechRecognitionPipeline,
Wav2Vec2ForCTC,
Wav2Vec2CTCTokenizer,
Wav2Vec2FeatureExtractor,
Wav2Vec2Processor,
Wav2Vec2ProcessorWithLM
)
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-100h")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-100h")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
# without LM
pipe = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor
)
print(pipe.type) # "ctc"
print(processor.feature_extractor._processor_class) # None
pipe("https://huggingface.co/spaces/Matthijs/speecht5-asr-demo/resolve/main/examples/hmm_i_dont_know.wav")
# {'text': "I DON'T KNOW I THINK MAY BE ITS EASY TO GET A NEW ONE WE CAN GO TO THE STORL LATER TO SEE IF THEY HAVE ANY IN STOCK"}
# note that STORL is spelled wrong
# with LM
processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
pipe_with_lm = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor_with_lm.tokenizer,
feature_extractor=processor_with_lm.feature_extractor,
decoder=processor_with_lm.decoder
)
print(pipe_with_lm.type) # "ctc_with_lm"
print(processor_with_lm.feature_extractor._processor_class) # Wav2Vec2ProcessorWithLM
pipe_with_lm("https://huggingface.co/spaces/Matthijs/speecht5-asr-demo/resolve/main/examples/hmm_i_dont_know.wav")
# {'text': "I DON'T KNOW I THINK MAY BE ITS EASY TO GET A NEW ONE WE CAN GO TO THE STORE LATER TO SEE IF THEY HAVE ANY IN STOCK"}
# and now STORE is spelled correctly
```
I verified that the decoder was indeed called on the `pipe_with_lm` pipeline.
There might still be an issue with your own models but I can't tell that without having access to those models.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Going to leave this one as is since the issue is not reproducible using a public checkpoint and we don't have access to a local model that demonstrates this behaviour, so we're unable to pinpoint where the bug potentially lies in transformers
The thread did result in two workarounds for this issue that you can try:
* Start from a pre-trained checkpoint like [facebook/wav2vec2-base-100h](hf.co/faceobook/wav2vec2-base-100h) that works as expected
* Use the 'hack' described in https://github.com/huggingface/transformers/issues/18825#issuecomment-1410416281<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,824 | closed | model with longformer encoder cannot be saved due to OperatorNotAllowedInGraphError | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.10.102-99.473.amzn2.x86_64-x86_64-with-glibc2.10 (AWS SageMaker)
- Python version: 3.8.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I created a NER model which makes use of the longformer encoder. I can train it successfully. However, if I try to save the model like this: tf.keras.models.save_model(self.model, model_path) I get an `OperatorNotAllowedInGraphError` error. The full message is below. As far as I can tell, the function `_pad_to_window_size` calculates `padding_len` which in my execution happens to be a tf.Tensor object. The statement `if padding_len > 0` fails to evaluate, since a tensor cannot be compared to a bool directly. This smells like a bug. Besides, the only purpose of this check seems to be to print out a message in the log, so perhaps it's not strictly necessary.
Cheers,
Riccardo
```
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
/tmp/ipykernel_29348/3622010497.py in <cell line: 2>()
1 output_path = "trained_models/jd_parser_baseline_all-longformer"
----> 2 t.save_model(output_path)
~/SageMaker/jd-parser/jd_parser/trainer.py in save_model(self, model_path)
381 if model_path is None:
382 model_path = f"trained_models/{self.model.name}"
--> 383 tf.keras.models.save_model(self.model, model_path)
384
385 def upload_to_s3(self, model_local_dir=None):
~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
~/anaconda3/envs/tensorflow2_p38/lib/python3.8/contextlib.py in __exit__(self, type, value, traceback)
118 if type is None:
119 try:
--> 120 next(self.gen)
121 except StopIteration:
122 return False
~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
411
412 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 413 return func(self, **unpacked_inputs)
414
415 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in call(self, input_ids, attention_mask, head_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, training)
1728 position_ids,
1729 inputs_embeds,
-> 1730 ) = self._pad_to_window_size(
1731 input_ids=input_ids,
1732 attention_mask=attention_mask,
~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in _pad_to_window_size(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, pad_token_id)
1814 padding_len = (attention_window - seq_len % attention_window) % attention_window
1815
-> 1816 if padding_len > 0:
1817 logger.info(
1818 f"Input ids are automatically padded from {seq_len} to {seq_len + padding_len} to be a multiple of "
OperatorNotAllowedInGraphError: Exception encountered when calling layer "longformer" (type TFLongformerMainLayer).
Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Call arguments received by layer "longformer" (type TFLongformerMainLayer):
β’ args=({'input_ids': 'tf.Tensor(shape=(None, None), dtype=int32)', 'attention_mask': 'tf.Tensor(shape=(None, None), dtype=int32)'},)
β’ kwargs={'training': 'False'}
```
### Expected behavior
The model should be saved to disk correctly as it happens with other encoder models. | 08-30-2022 18:42:32 | 08-30-2022 18:42:32 | @rdisipio I tried to reproduce this error, But I was able to save model. I followed [this](https://huggingface.co/docs/transformers/model_doc/longformer#transformers.TFLongformerForTokenClassification.call.example) example to build a simple model. Can you add some steps to reproduce this bug ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,823 | closed | Memory is not released when moving model to CUDA | ### System Info
- `transformers` version: 4.21.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: GPU
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce behavior:
1. Run Colab: https://colab.research.google.com/drive/1NWJPqwe7MOJIWd4w5LGYGaflkWXTkHGB?usp=sharing
2. Check results:
```
model device = cuda:0
Filename: memory_leak.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
7 239.4 MiB 239.4 MiB 1 @profile
8 def main():
9 271.6 MiB 32.2 MiB 1 processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
10 1405.8 MiB 1134.2 MiB 1 model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
11
12 1405.8 MiB 0.0 MiB 1 device = torch.device("cuda")
13 2644.0 MiB 1238.2 MiB 1 model = model.to(device)
14
15 2644.5 MiB 0.5 MiB 1 print(f"model device = {model.device}")
16 2644.5 MiB 0.0 MiB 1 gc.collect()
```
### Expected behavior
RAM should be released when a model is moved to GPU.
This bug can be reproduced for lots of different models within the lib. | 08-30-2022 17:46:27 | 08-30-2022 17:46:27 | Pinging the king of memory, @ydshieh :raised_hands: <|||||>Hi @piEsposito
As I have seen quite a few times, `torch` has some of its own memory management and related memory issues, so it would be great if you could provide an example that creates a (big enough) PyTorch model (not from `transformers`) on CPU, sends it to CUDA, and see whether the memory gets released.
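A minimal sketch of such a `transformers`-free check (layer sizes are arbitrary) could be:
```python
import gc
import torch
from memory_profiler import profile

@profile
def main():
    # a reasonably large plain PyTorch model, no transformers involved
    model = torch.nn.Sequential(*[torch.nn.Linear(4096, 4096) for _ in range(20)])
    model = model.to(torch.device("cuda"))
    gc.collect()

if __name__ == "__main__":
    main()
```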
The goal is to make sure this issue is not coming from `PyTorch`. Would you like to work on an example, please? Thanks in advance.<|||||>@ydshieh just added it to the notebook. It seems like this issue comes from PyTorch. Thanks! |
transformers | 18,822 | closed | add a script to get time info. from GA workflow jobs | # What does this PR do?
As we might need to get the running time for workflow jobs again in the future, here is a simple script. It's probably better to move to the directory `utils`. I put it under `.github/scripts/` to emphasize this is really for GitHub Actions only.
The output looks like
```bash
(py39) Ξ» python get_github_job_time.py --workflow_run_id 2945609517
Model tests (onnx, multi-gpu): 337
Model tests (onnx, single-gpu): 334
Torch CUDA extension tests (multi-gpu): 44
Torch CUDA extension tests (single-gpu): 43
TensorFlow pipelines (multi-gpu): 20
TensorFlow pipelines (single-gpu): 19
...
```
P.S. I will add another simple script to get test failures and their counts in another PR. | 08-30-2022 16:37:53 | 08-30-2022 16:37:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, I will move it to `utils`. |
transformers | 18,821 | closed | Add Image To Text Generation pipeline | # What does this PR do?
Add Image To Text Generation pipeline. The pipeline currently defaults to [nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning).
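For context, a minimal usage sketch (the local image path is a placeholder):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
print(captioner("path/to/local_image.jpg"))  # e.g. [{"generated_text": "..."}]
```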
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
This feature was asked for by @Narsil.
| 08-30-2022 16:32:28 | 08-30-2022 16:32:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @mishig25, the inference widgets for models like TrOCR, Donut, [image captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) can now be created! 🥳
transformers | 18,820 | closed | Disable nightly CI temporarily | # What does this PR do?
Disable nightly CI temporarily until the test suite can be run under 12 hours. | 08-30-2022 15:45:32 | 08-30-2022 15:45:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,819 | open | ONNX test suite is slow - run in 5.5 hours | ### Who can help?
@lewtun @LysandreJik
As shown in [this job](https://github.com/huggingface/transformers/runs/8074936385?check_suite_focus=true) run, the ONNX tests now run in 5.5 hours.
From https://github.com/huggingface/transformers/blob/73c6273d481f9052098a2a5a5f001fa75daaace9/tests/onnx/test_onnx_v2.py#L182
we see that the tests use real model checkpoints. As ONNX graph compilation is known to be slow, and the models are quite big, this makes the tests very slow.
The whole scheduled CI test suite now runs in 14.5 hours, and we have 2 test suites to run each day, so it requires 29 hours and can't finish in one day. This causes the test suites and their reports to be delayed a lot.
We are wondering if it is possible to use tiny models from [hf-internal-testing](https://huggingface.co/hf-internal-testing) for ONNX tests.
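As an illustration, the standard export CLI could point at one of those tiny checkpoints instead of a full-size one (the checkpoint name below is just an example from that org):
```bash
# full-size checkpoint: slow to download and to trace
python -m transformers.onnx --model=bert-base-cased onnx_out/

# tiny random checkpoint from hf-internal-testing: much faster for exercising the export code path
python -m transformers.onnx --model=hf-internal-testing/tiny-random-bert onnx_out/
```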
| 08-30-2022 15:28:02 | 08-30-2022 15:28:02 | Thanks for raising this - it's an issue we've also faced with `optimum`'s test suite. Let me take a look and see if it's possible to use the tiny models as you suggest
cc @echarlaix @philschmid <|||||>Keep the issue alive :-)<|||||>Thanks for the ping - on my TODO list this week! |
transformers | 18,818 | closed | Pin maximum TF version | # What does this PR do?
We now also depend on `tensorflow-text`, whose minor versions are typically released a few days after new `tensorflow` releases. From tests against the `tensorflow` release candidate, `tensorflow-text`-based functions fail when these two libraries do not have the same version.
This PR pins the maximum TF version so that our CI doesn't break with the upcoming `tensorflow` release. When the corresponding `tensorflow-text` library gets released we should be able to unpin it again. | 08-30-2022 15:16:02 | 08-30-2022 15:16:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh the CI fail seems unrelated π€ do you know potential causes?<|||||>Unfortunately no. Let me find some time to take a look, but I think you are good to merge :-)<|||||>Yes I think that's an unrelated error linked to a new cache being made as the `setup.py` is updated. It's unrelated to the PR, but we need to have a look at what's going on.
Let's merge this PR!
Thanks for your contribution :) |
transformers | 18,817 | open | Identifying backend compatibility versions | We are currently working on identifying the backend versions with which we are compatible and with which we want to be compatible. These backends are PyTorch and TensorFlow. We will be considering Flax at a later point in time.
The first step was to identify the number of failures in each PyTorch/TensorFlow version and was done in https://github.com/huggingface/transformers/issues/18181.
Total number of tests: 38,991.
| Framework | No. Failures | Release date | Older than 2 years |
| :--------------- | ---------- | ---------- | ---------- |
| PyTorch 1.10 | 50 | Mar 10 2021 | No |
| PyTorch 1.9 | 710 | Jun 15 2021 | No |
| PyTorch 1.8 | 1301 | Mar 4 2021 | No |
| PyTorch 1.7 | 1567 | Oct 27 2020 | No |
| PyTorch 1.6 | 2342 | Jul 28 2020 | Yes |
| PyTorch 1.5 | 3315 | Apr 21 2020 | Yes |
| PyTorch 1.4 | 3949 | Jan 16 2020 | Yes |
| TensorFlow 2.8 | 118 | Feb 2 2022 | No |
| TensorFlow 2.7 | 122 | Nov 4 2021 | No |
| TensorFlow 2.6 | 122 | Aug 11 2021 | No |
| TensorFlow 2.5 | 128 | May 13 2021 | No |
| TensorFlow 2.4 | 167 | Dec 14 2020 | No |
We're proposing to drop versions older than 2 years old and to work towards providing support (support = 0 tests failing) for versions we aim to support. We will drop support for older versions once we reach their two-year-old date.
Here is the proposed plan moving forward:
- [ ] Have a detailed breakdown of failures for the following versions:
- [ ] Torch 1.7
- [ ] Torch 1.8
- [ ] Torch 1.9
- [ ] Torch 1.10
- [ ] Torch 1.11
- [ ] Torch 1.12
- [ ] TensorFlow 2.4
- [ ] TensorFlow 2.5
- [ ] TensorFlow 2.6
- [ ] TensorFlow 2.7
- [ ] TensorFlow 2.8
- [ ] TensorFlow 2.9
- [ ] Start with an initial compatibility document to mention which models are supported in which versions
- [ ] Open good first issues to improve compatibility for models not compatible with all versions, starting from the latest one and moving back in time.
- [ ] As versions become supported, run tests on older versions to ensure no regression.
Work by @ydshieh and @LysandreJik
----------
### Some context and tips when working on Past CI
1. The Past CI runs against a specific commit/tag:
- **Motivation**: To be able to run the test against the **same** commit to see if a set of fixes improves the overall backward compatibility without new issues introduced.
- The chosen commit could be changed (to more recent ones) along the time, but it should never be `main`.
- When working on the fix for Past CI, keep in mind that we should **check the source code in the commit that is chosen for that particular Past CI run**. The commit is given at the beginning of each report provided in the following comments.
2. For each report, there is an attached `errors.txt` where you can find more information to ease the fix process:
- The file contains a list whose elements have the following content:
- The line where an error occurs
- The error message
- The complete name of the failed test
- The link to the job that ran that failed test
- The errors in the reports sometimes don't contain enough information to make the decision/action. You can use the corresponding links provided in `errors.txt` to see the full trackback on the job run pages.
3. One (possible) fix process would be like:
- For a framework and a particular version, go to the corresponding reporting table provided in the following comments.
- Make sure you have a preferred way to navigate the source code in a specific commit.
- Download/Open the corresponding `errors.txt`.
- From the `General` table, take a row whose `status` is empty. Ideally, take the ones with higher value in `no.` column.
- Search in `errors.txt` for the `error` in the picked row. You get information about the failed line, failed test, and the job link.
- Navigate to the failed line or failed test in your workspace (or in a browser) that checks out to the specific commit for the run.
- Use the job link to go to the job run page if you need more information about the error.
- Then you might come up with a solution :-), or decide a fix is not necessary with good reasons.
- Update the `status` column with a comment once a fix or a decision is made.
4. Some guides/hints for the fix:
- To install a specific framework version, `utils/past_ci_versions.py` can help!
- ⚠️ As the tests are run against a chosen commit, it may not contain some fixes that are on the `main` branch. (This is particularly confusing if you try to run a failed test without checking out to that commit.)
- If the test passes when you run a failed test (in the report) against the `main` branch, with the target framework version, it's very likely a fix exists on `main` that applies to the target framework version too.
- In this case,
- either update `status` with `fixed in #XXXXX` (if you know clearly that PR fixes that error)
- or `works for commits since **b487096**` - a commit sha (It's not always trivial to find out which PR fixed a particular error - especially when working with Past CI)
- We decided to focus on the PyTorch and TensorFlow versions, and not to consider other 3rd-party libraries. Therefore, some packages are not installed, like `kenlm` or `detectron2`. We could just simply update the `status` column with `XXX not installed`.
- When an error is coming from a C/C++ exception, and the same code and inputs work for new framework versions, we could skip that failed test with a `@unittest.skipIf`, and update the status like `torch._C issue -> works wth PT >= 11 Fixed in #19122`.
- PR [#19122](https://github.com/huggingface/transformers/pull/19122) is one such example.
- If an error occurs in several framework versions, say, PT 11 and PT 10, and a status is updated for the newer version (here PT 11), we can simply put `see PT 11` in the report `status` column for older versions.
- Some old framework versions lack attributes or arguments introduced in newer versions. See [#19201](https://github.com/huggingface/transformers/pull/19201) and [#19203](https://github.com/huggingface/transformers/pull/19203) for how a fix would look like in such cases. If a similar warning (to the one in [#19203](https://github.com/huggingface/transformers/pull/19203)) already exists, we could update `status` with, for example, `Vilt needs PT >= 1.10`.
- Adding such warning is not a fix in a strict sense, but at least it provides some information. Together with the updated `status`, we keep information tracked.
| 08-30-2022 14:38:32 | 08-30-2022 14:38:32 | ### **Past CI - PyTorch 1.11 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error | status |
|-:|:-|:-|
| 32 | NameError: name 'kenlm' is not defined | not installed |
| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | fixed in #18303 |
| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | fixed in #18531 |
| 6 | NameError: name 'GPT2Tokenizer' is not defined | fixed in #19010 |
| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | fixed in #18303 |
| 3 | ImportError: | `detectron2` and `accelerate` not installed |
| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | fixed in #18303 |
| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | fixed in #18303 |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |
| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| bloom | 6 | OSError: gs555750 is not a valid git identifier (branch name | 6 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| layoutlmv2 | 2 | ImportError: | 2 |<|||||>### **Past CI - PyTorch 1.10 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error | status |
|-:|:-|:-|
| 32 | NameError: name 'kenlm' is not defined | see PT 11 |
| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 11 |
| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 11 |
| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 11 |
| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 11 |
| 4 | RuntimeError: Index is supposed to be an empty tensor or a vector | `torch._C` issue -> works wth PT >= 11 Fixed in #19122 |
| 3 | ImportError: | see PT 11 |
| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | `self.assertEqual` is too strict. Fixed in #19200
| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |
| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |
| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |
| bloom | 8 | OSError: gs555750 is not a valid git identifier (branch name | 6 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| longt5 | 4 | RuntimeError: Index is supposed to be an empty tensor or a v | 4 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| layoutlmv2 | 2 | ImportError: | 2 |<|||||>### **Past CI - PyTorch 1.9 (Patch release: v4.21.2 | b487096b0)**
[errors-pt-1-9.txt](https://github.com/huggingface/transformers/files/9678016/errors-pt-1-9.txt)
#### General
| no. | error | status |
|-:|:-|:-|
| 50 | AttributeError: module 'torch' has no attribute 'pi' | Need PT >= 1.10. But we can use np.pi. See #19201 |
| 44 | TypeError: meshgrid() got an unexpected keyword argument 'indexing' | `Vilt` needs PT >= 1.10 |
| 32 | NameError: name 'kenlm' is not defined | see PT 11 |
| 18 | AttributeError: module 'torchaudio.functional' has no attribute 'melscale_fbanks' | Need torchaudio >= 0.10. See #19203 |
| 15 | RuntimeError: CUDA error: an illegal memory access was encountered | LeViT re-run OK |
| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 11 |
| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 11 |
| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 11 |
| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 11 |
| 3 | ImportError: | see PT 11 |
| 2 | RuntimeError: "LayerNormKernelImpl" not implemented for 'BFloat16' | fixed in #19261 |
| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | See PT 10 |
| 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 places (5.7006835930906163e-05 difference | diff acceptable |
| 2 | RuntimeError: Index is supposed to be an empty tensor or a vector | torch._C issue -> works wth PT >= 11 Fixed in https://github.com/huggingface/transformers/pull/19122 |
| 2 | RuntimeError: Expected node type 'onnx::Constant' for argument 'num_classes' of node 'one_hot', got | test already skipped in #19122 (due to another error)|
| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |
| 2 | TypeError: Caught TypeError in replica 0 on device 0. | Vilt needs PT >= 1.10 (`meshgrid` error) |
| 1 | RuntimeError: transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access wa | See #20859 (opened) |
| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| maskformer | 50 | AttributeError: module 'torch' has no attribute 'pi' | 50 |
| vilt | 46 | TypeError: meshgrid() got an unexpected keyword argument 'in | 44 |
| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |
| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |
| mctct | 18 | AttributeError: module 'torchaudio.functional' has no attrib | 18 |
| levit | 16 | RuntimeError: CUDA error: an illegal memory access was encou | 15 |
| bloom | 10 | OSError: gs555750 is not a valid git identifier (branch name | 6 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| longt5 | 4 | RuntimeError: Index is supposed to be an empty tensor or a v | 2 |
| flava | 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 p | 2 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| layoutlmv2 | 2 | ImportError: | 2 |<|||||>### **Past CI - PyTorch 1.8 (Patch release: v4.21.2 | b487096b0)**
[errors-pt-1-8.txt](https://github.com/huggingface/transformers/files/9678007/errors-pt-1-8.txt)
#### General
| no. | error | status |
|-:|:-|:-|
| 570 | AttributeError: module 'torch.jit._state' has no attribute '_clear_class_state' | WIP |
| 50 | AttributeError: module 'torch' has no attribute 'pi' | See PT 1.9 |
| 44 | TypeError: conv1d(): argument 'padding' (position 5) must be tuple of ints, not str | WIP |
| 44 | TypeError: meshgrid() got an unexpected keyword argument 'indexing' | See PT 1.9 |
| 30 | NameError: name 'kenlm' is not defined | see PT 11 |
| 26 | AttributeError: module 'torch' has no attribute 'permute' | WIP |
| 18 | AttributeError: module 'torchaudio.functional' has no attribute 'melscale_fbanks' | See PT 1.9 |
| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 1.11 |
| 8 | RuntimeError: einsum() operand subscript must be in range [a, z] but found B for operand 0 | WIP |
| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 1.11 |
| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 1.11 |
| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 1.11 |
| 4 | TypeError: Caught TypeError in replica 0 on device 0. | see PT 1.10 |
| 3 | ImportError: | See PT 1.11 |
| 2 | RuntimeError: "LayerNormKernelImpl" not implemented for 'BFloat16' | See PT 1.9 |
| 2 | RuntimeError: "min_cuda" not implemented for 'BFloat16' | WIP |
| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | See PT 10 |
| 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 places (5.7006835930906163e-05 difference | See PT 9 |
| 2 | AssertionError: False is not true | |
| 2 | RuntimeError: Expected node type 'onnx::Constant' for argument 'num_classes' of node 'one_hot', got | See PT 1.9 |
| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |
| 2 | TypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for re | WIP |
| 2 | TypeError: save_for_backward can only save variables, but argument 2 is of type bool | WIP |
| 2 | AttributeError: module 'torchaudio.functional' has no attribute 'resample' | WIP |
| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |
| 1 | AssertionError: 2.9253265857696533 != 2.925307273864746 within 1e-05 delta (1.9311904907226562e-05 d | diff acceptable |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| mctct | 64 | TypeError: conv1d(): argument 'padding' (position 5) must be | 44 |
| maskformer | 50 | AttributeError: module 'torch' has no attribute 'pi' | 50 |
| vilt | 46 | TypeError: meshgrid() got an unexpected keyword argument 'in | 44 |
| owlvit | 33 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |
| longt5 | 26 | AttributeError: module 'torch.jit._state' has no attribute ' | 24 |
| perceiver | 26 | AttributeError: module 'torch' has no attribute 'permute' | 26 |
| bloom | 18 | OSError: gs555750 is not a valid git identifier (branch name | 6 |
| prophetnet | 18 | AttributeError: module 'torch.jit._state' has no attribute ' | 18 |
| data2vec | 18 | AttributeError: module 'torch.jit._state' has no attribute ' | 18 |
| hubert | 14 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| realm | 14 | RuntimeError: einsum() operand subscript must be in range [a | 8 |
| wav2vec2 | 14 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| clip | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| marian | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| opt | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| blenderbot | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| pegasus | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| t5 | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| funnel | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| mvp | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| mbart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| plbart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| blenderbot_small | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| bart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |
| swin | 8 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| sew | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| resnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| xlm_roberta_xl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| splinter | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| dpt | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| xlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| speech_to_text_2 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| dpr | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| squeezebert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| vit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| mobilebert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| convnext | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| xlnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| glpn | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| segformer | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| cpm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| nezha | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| bigbird_pegasus | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| megatron_bert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| trocr | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| rembert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| van | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| mobilevit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| gpt_neox | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| openai | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| albert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| nystromformer | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| distilbert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| gpt2 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| mpnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| roberta | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| deit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| unispeech | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| flaubert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| codegen | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| wavlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| xglm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| roformer | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| regnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| bert_generation | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| convbert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| beit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| transfo_xl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| electra | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| ctrl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| canine | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| groupvit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| gptj | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| gpt_neo | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| fnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| fsmt | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| m2m_100 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| layoutlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| layoutlmv2 | 2 | ImportError: | 2 |
| flava | 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 p | 2 |
| trajectory_transformer | 2 | TypeError: save_for_backward can only save variables, but ar | 2 |<|||||>### **Past CI - TensorFlow 2.8 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error |
|-:|:-|
| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |
| 30 | NameError: name 'kenlm' is not defined |
| 6 | NameError: name 'GPT2Tokenizer' is not defined |
| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |
| 4 | ImportError: |
| 2 | NameError: name 'MaskFormerModel' is not defined |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |
| 1 | ValueError: You called `set_weights(weights)` on layer "tf_segformer_for_image_classification_8" wit |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |
| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| speech_to_text | 4 | ImportError: | 4 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |
| segformer | 1 | ValueError: You called `set_weights(weights)` on layer "tf_s | 1 |<|||||>### **Past CI - TensorFlow 2.7 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error |
|-:|:-|
| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |
| 30 | NameError: name 'kenlm' is not defined |
| 6 | TypeError: Invalid keyword argument(s) in `compile()`: ({'jit_compile'},). Valid keyword arguments i |
| 6 | NameError: name 'GPT2Tokenizer' is not defined |
| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |
| 4 | ImportError: |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 2 | NameError: name 'MaskFormerModel' is not defined |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |
| 1 | ValueError: You called `set_weights(weights)` on layer "tf_segformer_for_image_classification_8" wit |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |
| t5 | 10 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 1 |
| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| speech_to_text | 4 | ImportError: | 4 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| gptj | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |
| bart | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |
| gpt2 | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |
| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |
| segformer | 1 | ValueError: You called `set_weights(weights)` on layer "tf_s | 1 |<|||||>### **Past CI - TensorFlow 2.6 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error |
|-:|:-|
| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |
| 30 | NameError: name 'kenlm' is not defined |
| 10 | ValueError: in user code: |
| 6 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_compile'} |
| 6 | NameError: name 'GPT2Tokenizer' is not defined |
| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |
| 4 | ImportError: |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |
| 2 | NameError: name 'MaskFormerModel' is not defined |
| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |
| 1 | ValueError: You called `set_weights(weights)` on layer "tf_segformer_for_image_classification_8" wit |
| 1 | ValueError: Unable to save function b'__inference_tf_speech2text_model_25_layer_call_and_return_cond |
| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), d |
| 1 | ValueError: Unable to save function b'__inference_tf_speech2text_model_25_layer_call_and_return_cond |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |
| t5 | 10 | ValueError: in user code: | 10 |
| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |
| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |
| speech_to_text | 6 | ImportError: | 4 |
| bart | 3 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |
| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |
| gptj | 2 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |
| gpt2 | 2 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |
| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |
| segformer | 1 | ValueError: You called `set_weights(weights)` on layer "tf_s | 1 |<|||||>### **Past CI - TensorFlow 2.5 (Patch release: v4.21.2 | b487096b0)**
#### General
| no. | error |
|-:|:-|
| 70 | RuntimeError: Failed to import transformers.models.albert.modeling_tf_albert because of the followin |
| 28 | NameError: name 'kenlm' is not defined |
| 18 | RuntimeError: Failed to import transformers.models.gpt2.modeling_tf_gpt2 because of the following er |
| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |
| 2 | RuntimeError: Failed to import transformers.models.t5.modeling_tf_t5 because of the following error |
| 2 | RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the |
| 2 | RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following er |
| 2 | NameError: name 'MaskFormerModel' is not defined |
#### Per model
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |
| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |
| squeezebert | 4 | RuntimeError: Failed to import transformers.models.albert.mo | 4 |
| xglm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| bert_generation | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| byt5 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| bloom | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| perceiver | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| layoutlmv2 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| bort | 2 | RuntimeError: Failed to import transformers.models.bert.mode | 2 |
| tapex | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| plbart | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| barthez | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| layoutxlm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| nllb | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| canine | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| layoutlmv3 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| xlm_prophetnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| luke | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| mbart50 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| realm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| mluke | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| bertweet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| mvp | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| big_bird | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| phobert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| fnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| speech_to_text_2 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| prophetnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| herbert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| fsmt | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| codegen | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| retribert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| m2m_100 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| bartpho | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |
| reformer | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |<|||||>I was trying to fix the kenlm issue, but I see it's correctly installed [here](https://github.com/huggingface/transformers/blame/main/docker/transformers-all-latest-gpu/Dockerfile#L49) and has been for a while.
I guess it is an image issue?<|||||>> I was trying to fix the kenlm issue, but I see it's correctly installed [here](https://github.com/huggingface/transformers/blame/main/docker/transformers-all-latest-gpu/Dockerfile#L49) and has been for a while.
>
> I guess it is an image issue?
Hi @LysandreJik. In fact, Past CI uses `transformers-past-gpu/Dockerfile`:
https://github.com/huggingface/transformers/blame/main/docker/transformers-past-gpu/Dockerfile
It's arguable whether we should (or should not) include `kenlm`. I don't remember exactly if I ran into issues when installing it. Possibly yes for older versions, so I decided not to install it for any version (to avoid confusion).
We can try with it in the next launch.<|||||>I think we can add it, we've had it in the main file for 8 months so it's unlikely to cause an issue. Looking forward to the next launch! |
transformers | 18,816 | closed | New update breaks T5, gpt2, opt models (probably all models actually) if bitsandbytes is installed | ### System Info
It seems like a new update in the repo is causing an error for all models tested, including all the T5, gpt2 and opt models.
The error only occurs if bitsandbytes is installed; I tried an earlier version of bitsandbytes and the same problem occurred.
I made a colab to showcase it: https://colab.research.google.com/drive/1TSMLP3oPkAb-sBL_9l9KmXtRpP314Axc?usp=sharing
It must have been an update made in the past few hours, since code I used earlier today suddenly raised this error:
```
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1030, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 36, in <module>
from ...modeling_utils import PreTrainedModel
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/modeling_utils.py", line 88, in <module>
from .utils.bitsandbytes import get_key_to_not_convert, replace_8bit_linear, set_module_8bit_tensor_to_device
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/bitsandbytes.py", line 10, in <module>
import bitsandbytes as bnb
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 6, in <module>
from .autograd._functions import (
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 4, in <module>
import bitsandbytes.functional as F
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/functional.py", line 14, in <module>
from .cextension import COMPILED_WITH_CUDA, lib
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 41, in <module>
lib = CUDALibrary_Singleton.get_instance().lib
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 37, in get_instance
cls._instance.initialize()
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 15, in initialize
binary_name = evaluate_cuda_setup()
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 136, in evaluate_cuda_setup
cc = get_compute_capability(cuda)
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 112, in get_compute_capability
return ccs[-1]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 462, in from_pretrained
model_class = _get_model_class(config, cls._model_mapping)
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 359, in _get_model_class
supported_models = model_mapping[type(config)]
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 583, in __getitem__
return self._load_attr_from_module(model_type, model_name)
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 597, in _load_attr_from_module
return getattribute_from_module(self._modules[module_name], attr)
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 553, in getattribute_from_module
if hasattr(module, attr):
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1020, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1032, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
list index out of range
```
### Who can help?
@younesbelkada
@TimDettmers
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!pip install git+https://github.com/huggingface/transformers.git
!pip install bitsandbytes==0.32.1
from transformers import AutoModel
model = AutoModel.from_pretrained("gpt2")
### Expected behavior
It should load the model, but it never does. | 08-30-2022 13:15:43 | 08-30-2022 13:15:43 | Hi @ViktorThink. Could you try whether one of the following versions works?
`bitsandbytes==0.31.8` or `bitsandbytes==0.31.5`<|||||>Oh, I see what the issue was now...
I had bitsandbytes installed on an instance with no CUDA device, and that raised the error.
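For anyone hitting this before upgrading, a quick way to confirm the same root cause is a small check like the one below (just a sketch using standard `importlib`/`torch` calls; the message text is mine, not from the library):

```python
import importlib.util

import torch

# Hypothetical pre-flight check: bitsandbytes is CUDA-only, so importing transformers
# with bitsandbytes installed on a CUDA-less machine triggers the crash above.
bnb_installed = importlib.util.find_spec("bitsandbytes") is not None
if bnb_installed and not torch.cuda.is_available():
    print(
        "bitsandbytes is installed but no CUDA device is visible; "
        "uninstall it (pip uninstall bitsandbytes) or upgrade transformers "
        "to a version that includes the fix from PR #18859."
    )
```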
Thank you very much for the quick reply! Highly appreciated!<|||||>Thanks for pointing out this issue @ViktorThink !
This should be fixed in https://github.com/huggingface/transformers/pull/18859 that has been merged recently πͺ |
transformers | 18,815 | closed | MSN (Masked Siamese Networks) for ViT | # What does this PR do?
Adds the [MSN](https://arxiv.org/abs/2204.07141) checkpoints for ViT. MSN shines in few-shot regimes, which benefits real-world use cases. Later we could add a pre-training script so that people can perform MSN pre-training on their own datasets.
Closes #18758
## Who can review?
@sgugger @NielsRogge @amyeroberts
## TODO
- [x] Add documentation
- [x] Add rest of the files for repo consistency
- [ ] Host MSN weights on the Facebook org on HF Hub (@NielsRogge ?)
- [ ] Change the checkpoint paths wherever needed
| 08-30-2022 11:40:49 | 08-30-2022 11:40:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge, after studying the [pretraining script of MSN](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py) thoroughly I am still unsure of how to put together a `ViTMSNForPretraining` similar to `ViTMAEForPreTraining`. There are multiple moving pieces that I think are best off residing inside a standalone pretraining script:
* A target encoder [updated with EMA](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L373).
* [Learnable prototypes](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L217) that are needed to compute the final MSN loss.
* [Target sharpening](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L347) amongst other things.
Both the EMA and sharpening components operate with their own schedules.
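For context, the EMA part on its own boils down to something like this (a minimal sketch with a fixed `momentum` value for illustration; the actual MSN training loop anneals the momentum over training):

```python
import torch

@torch.no_grad()
def ema_update(target_encoder: torch.nn.Module, anchor_encoder: torch.nn.Module, momentum: float = 0.996):
    # Move the target encoder towards the online (anchor) encoder:
    # target <- momentum * target + (1 - momentum) * anchor
    for t_param, a_param in zip(target_encoder.parameters(), anchor_encoder.parameters()):
        t_param.mul_(momentum).add_(a_param, alpha=1.0 - momentum)
```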
Given this, I think it's best to resort to a separate pre-training script and use this model for feature extraction and fine-tuning.
There's an [ongoing discussion](https://github.com/facebookresearch/msn/issues/7) around releasing the weights of the linear classification layers and fine-tuned models. So when that's available, we could directly support those via `ViTMSNForImageClassification`. Regardless, I am happy to add a `ViTMSNForImageClassification` for easy access.
What do you think? <|||||>Thanks for your PR! It would be great to have the `ViTMSNForImageClassification` even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.
For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?<|||||>> For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?
Sounds good to me.
> Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.
Sure, I will continue the work from here on then. Thank you! <|||||>@sgugger @NielsRogge @amyeroberts ready for review.<|||||>@sgugger @NielsRogge @amyeroberts a friendly nudge on the PR. <|||||>@sgugger addressed your comments. After the weights are transferred to the right org, I will open a PR there adding README. <|||||>Hi @sayakpaul . First, thank you for this PR π€ .
The doctest for this model is currently failing, as
https://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/src/transformers/models/vit_msn/modeling_vit_msn.py#L646
this outputs the predicted label, but there is no expected value provided.
The [config](https://huggingface.co/facebook/vit-msn-small/blob/main/config.json) has `LABEL_0` ... `LABEL_999` in `id2label`, but I feel it should be the actual labels for the COCO dataset.
Could you take a look for this config, as well as the missing expected outputs for the doctest? Thank you!
Here is the failing doctest job:
https://github.com/huggingface/transformers/actions/runs/3109562462/jobs/5039877349<|||||>> The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset.
The model was trained on ImageNet-1k.
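One way I could fill real ImageNet-1k labels into the config is roughly the following (a sketch; it assumes the community-maintained `huggingface/label-files` dataset that hosts such mappings):

```python
import json

from huggingface_hub import hf_hub_download

# Assumed mapping file; adjust if the label files live elsewhere.
path = hf_hub_download("huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset")
id2label = {int(k): v for k, v in json.load(open(path)).items()}
label2id = {v: k for k, v in id2label.items()}
# Then set config.id2label = id2label and config.label2id = label2id, and push the updated config to the Hub.
```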
I will add the expected outputs. Thanks for flagging it. |
transformers | 18,814 | closed | Add support for Japanese GPT-NeoX-based model by ABEJA, Inc. | # What does this PR do?
This PR adds a new GPT-NeoX Japanese model and a new tokenizer. The specific features are:
- Trained on a Japanese dataset with specific preprocessing.
- Used [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) for training with Pipe. In addition, we removed bias parameters from the transformer blocks, following [PaLM from Google](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html).
- Applied a special Japanese sub-word tokenizer to accommodate the distinctive structure of the language. Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a combination of hiragana, katakana, and kanji, and character variants such as "1" and "①" are often used.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Thank you in advance to review this PR!
- GPT model and tokenizer: @patrickvonplaten, @LysandreJik
- Documentation: @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 08-30-2022 09:16:55 | 08-30-2022 09:16:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Impressive PR, @SO0529!
I'd like either @younesbelkada or @ArthurZucker (or both :smile:) to have a look at your PR; they are both well versed in Japanese and have reviewed models in the past.
They're both on leave until the 9th of September, is it okay with you if we review this PR in about 1.5 weeks' time?
Thanks for your understanding!<|||||>@LysandreJik
Thank you for the quick comment. Of course we can wait! We look forward to having you review this PR! <|||||>Hey @SO0529 ! We just came back, gonna take a look at this with @younesbelkada asap π <|||||>@ArthurZucker @younesbelkada Thank you in advance for taking the time to review this PR!:smiley:<|||||>Thanks for your valuable comments! I'll correct them one by one:muscle:<|||||>Awesome, I will review again once you are done with that π <|||||>Sorry, I found some mistakes; I'll re-push once I fix them.<|||||>@younesbelkada @ArthurZucker
I think I have addressed all your comments as described below. I'd be glad if you could review this PR again :smile:
> - We might need a bit more documentation about RowParrallelLinear as it is a new module that is not present in the original GPT-NeoX implementation. I also think that the name can be changed as I think that there is no model parallelism involved (we just do a F.linear). Feel free to have a look at the BLOOM implementation to see [how we got rid of the output_bias variable. ](https://github.com/huggingface/transformers/blob/9faa9f9dacf8c818ab2513da3ef92ce66f39515d/src/transformers/models/bloom/modeling_bloom.py#L383)there. But I am not sure yet how to adapt this with your model since the argument skip_bias_add might be important (for BLOOM this argument was set to False on all models).
- I removed both `RowParrallelLinear` and `ColumnParrallelLinear` from this model.
> - A small nit on the Attention block that can be easily fixed with the small change that I proposed, but I feel that the error initially came from using F.linear on the RowParrallelLinear and ColumnParrallelLinear modules (accelerate does not support torch.functionnal functions so that is why it might be related).
- I used `nn.Linear` instead of `F.linear`.
> - If some modules are entirely copied from the original GPT-NeoX implementation, you can just add a # Copied from... statement on the top of the class definition for better tracking πͺ
- I put a comment at the top of class definition.
> - Also not sure if we have to keep the bias_dropout_fn as I experienced a very small throughput enhancement that we can neglect for better code readability. Feel free to have a look [https://github.com/huggingface/transformers/blob/9faa9f9dacf8c818ab2513da3ef92ce66f39515d/src/transformers/models/bloom/modeling_bloom.py#L130](https://github.com/huggingface/transformers/pull/here) as we went through the same dilemma when integrating BLOOM from Megatron-DeepSpeed
- I kept the `bias_dropout_add` function to make use of the bias param, but changed it from an unnecessarily hard-to-read style to a simple implementation.
In addition to above, I added a simple `test_generate` function.
Again, thank you for taking the time to review this PR! <|||||>By the way, `build_and_test ` and `ci/circleci: run_tests_torch` look to be failing on `tests/models/pegasus/test_modeling_pegasus.py`; how should I address this error?:worried:<|||||>Hey! Awesome work, we will review again today!
The Pegasus test should not really be related to you, I will have a look π<|||||>@younesbelkada
Thank you for the quick review!
I modified the 2 items below following your comments.
- I put a documentation for `bias_dropout_add`.
- I changed coding style regarding `nn.ModuleList`.<|||||>@sgugger
Thank you for taking the time to review the PR!
I think I have addressed all your comments. Could you please take a look at the changes?<|||||>@sgugger
Thank you for taking the time again and again! I've addressed your last comment:fire:
We are looking forward to this being merged! |
transformers | 18,813 | closed | Feature to highlight or color code the text from the NER output of token classification having offsets using python | ### Feature request
I have fine-tuned a Hugging Face `token classification` model for an NER task. I use `pipeline` from Hugging Face to run predictions on test text data.
I tag the data in `BIOL` format: `B stands for Beginning, I stands for Including (inside an entity), O means no entity, L means Last`.
**Example:**
`Joh J Mathew` will be tagged as `B_PERSON` `I_PERSON` `L_PERSON`
**Here is how the output looks like:**
model = AutoModelForTokenClassification.from_pretrained("model_x")
tokenizer = AutoTokenizer.from_pretrained("model_x")
token_classifier = pipeline("token-classification", model=model, aggregation_strategy="max",tokenizer=tokenizer)
text=("""'IOWA DRIVER LICENSE 1 SAMPLE 2 MARK LIMITED-TERM 8 123 NORTH STREET APT 201 DES MOINES, IA 50301-1234
Onom d DL No. 123XX6789 4a iss 1107/2016 4b exp 01/12/2021 15 Sex M 16 Hgt 5\'-08" 18 Eyes BRO 9a End NONE 9
Class C 12 Rest NONE Mark Sample DONOR MED ALERT: Y HEARING IMP: Y MED ADV DIR: Y 3 OOB 01/12/1967 5
DD 12345678901234567890123 NIVIA AL NA LANG ---- QUE EROL DE USA 01/12/67""")
for ent in token_classifier(text):
print(ent)
{'entity_group': 'B_LAST_NAME', 'score': 0.9999994, 'word': 'SAMPLE', 'start': 23, 'end': 29}
{'entity_group': 'B_FIRST_NAME', 'score': 0.99999905, 'word': '', 'start': 32, 'end': 33}
{'entity_group': 'L_FIRST_NAME', 'score': 0.9999949, 'word': 'MARK', 'start': 32, 'end': 36}
{'entity_group': 'B_ADDRESS', 'score': 0.9999989, 'word': '123', 'start': 52, 'end': 55}
{'entity_group': 'I_ADDRESS', 'score': 0.99999917, 'word': 'NORTHSTREETAPT201DESMOINES,IA', 'start': 56, 'end': 91}
{'entity_group': 'I_DRIVER_LICENSE_NUMBER', 'score': 0.9999995, 'word': '123XX6789', 'start': 118, 'end': 127}
{'entity_group': 'L_ISSUE_DATE', 'score': 0.99999964, 'word': '1107/2016', 'start': 135, 'end': 144}
{'entity_group': 'I_EXPIRY_DATE', 'score': 0.99999964, 'word': '01/12/2021', 'start': 152, 'end': 162}
{'entity_group': 'B_PERSON_NAME', 'score': 0.99999905, 'word': 'Mark', 'start': 234, 'end': 238}
{'entity_group': 'I_PERSON_NAME', 'score': 0.9999993, 'word': 'Sample', 'start': 239, 'end': 245}
{'entity_group': 'L_DATE_OF_BIRTH', 'score': 0.99999976, 'word': '01/12/1967', 'start': 301, 'end': 311}
So, given the offset values `entity_group`, `word`, `start`, and `end`, how can I highlight the original text with the `entity_group` so that it is easy to visualize?
**Final Output**
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/Ef8WB.png
`Is there any Python library that I can use to do this?`
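A possible direction I had in mind (just a sketch: it assumes spaCy is installed, reuses the `text` variable from the snippet above, and the entity spans are copied by hand from the pipeline output, so adapt as needed) is spaCy's `displacy` manual mode:

```python
from spacy import displacy

# Hand-copied subset of the pipeline output above, in displacy's "manual" format.
ents = [
    {"start": 23, "end": 29, "label": "B_LAST_NAME"},
    {"start": 32, "end": 36, "label": "L_FIRST_NAME"},
    {"start": 52, "end": 55, "label": "B_ADDRESS"},
]
doc = {"text": text, "ents": ents, "title": None}
html = displacy.render(doc, style="ent", manual=True, page=True, jupyter=False)
with open("ner_highlight.html", "w") as f:
    f.write(html)
```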
### Motivation
Makes it easy to visualise the NER output.
### Your contribution
NA | 08-30-2022 09:13:26 | 08-30-2022 09:13:26 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
>
> Thanks!
It's a feature request<|||||>Ah, sorry, I misunderstood! In that case, I would say that this is unfortunately out of the scope of the repository. I would look into other utilities to provide color highlighting in outputs<|||||>> Ah, sorry, I misunderstood! In that case, I would say that this is unfortunately out of the scope of the repository. I would look into other utilities to provide color highlighting in outputs
Will it be possible to integrate `Displacy`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,812 | closed | Add TF implementation of LongT5 | Fixes #18063
Add:
- [ ] Fix PT-TF equivalence (Local)
- [ ] Fix PT-TF equivalence (TGlobal)
- [ ] Run all slow tests
- [ ] Prepare TF checkpoints
- [long-t5-local-base](https://huggingface.co/Stancld/long-t5-local-base)
- [long-t5-local-large](https://huggingface.co/Stancld/long-t5-local-large)
- [long-t5-tglobal-base](https://huggingface.co/Stancld/long-t5-tglobal-base)
- [long-t5-tglobal-large](https://huggingface.co/Stancld/long-t5-tglobal-large)
- long-t5-tglobal-xl https://github.com/huggingface/transformers/issues/19965
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-30-2022 09:08:21 | 08-30-2022 09:08:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18812). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @patrickvonplaten and @gante,
FYI I've been gradually fixing some PT-TF discrepancies -- I should have some spare time again next weekend, so hopefully, then it should be ready for review :]<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@stancld should I reopen the PR? :)<|||||>Hi @gante, yes, I'd open that again.
I apologize for being so slow here, but I've been pretty busy now. I'll try to finish this.<|||||>No worries @stancld, take your time π€ And thank you for working on it!<|||||>Hi @gante, I managed to fix some bugs. There are still some minor discrepancies between PT and TF implementations. Would you mind having a first look if you spot any obvious differences, please? :]
Otherwise, TF-only tests seem to be passing π°
(Btw, CI is passing, but the tests are failing locally, so I'm not really sure :D )<|||||>@stancld Will have a look π Can I have a copy of the error(s) you see locally? (I'm assuming on the slow tests)<|||||>Also cc @ArthurZucker here<|||||>> @stancld Will have a look π Can I have a copy of the error(s) you see locally? (I'm assuming on the slow tests)
Sorry for the late reply. I fail on `PT-TF` equivalence tests, basically, saying there's a too high difference between outputs.<|||||>Hey @stancld ! Thanks for the addition! There are a few approaches we can take here. Sometimes the tolerance is a bit too high and part of the hidden states don't match but the final output does, in that case, we can lower the tolerance (maybe to around `4e-2` other wise, I will have a look ! <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18812). All of your documentation changes will be reflected on that endpoint.<|||||>Almost there I think :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,811 | closed | Inconsistencies between `nn.functional.interpolate` and `tf.image.resize` | ### System Info
Upsampling of intermediate feature maps is common for computer vision models. In PyTorch, it's usually implemented using [`nn.functional.interpolate`](https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html). For TensorFlow, it's usually [`tf.image.resize`](https://www.tensorflow.org/api_docs/python/tf/image/resize).
But there is an inconsistency between what these two methods yield ([Colab Notebook](https://colab.research.google.com/gist/sayakpaul/be24f152d91d0f1cbe95d5cea9ae8b14/scratchpad.ipynb)). Sometimes the differences in their outputs are small enough to ignore ([ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit), for example). But when interpolation is repeated several times in a model ([MobileViT](https://github.com/huggingface/transformers/tree/main/src/transformers/models/mobilevit), for example), these small differences can add up and shake the final outputs of a model quite a bit. More details [here](https://github.com/huggingface/transformers/pull/18555#issuecomment-1229703811).
@hollance wrote an [amazing blog post](https://machinethink.net/blog/coreml-upsampling/) discussing this issue.
### Who can help?
@amyeroberts @gante @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The Colab Notebook mentioned in the description.
### Expected behavior
We should work on a TF utility that yields the same output as `nn.functional.interpolate`. | 08-30-2022 08:50:43 | 08-30-2022 08:50:43 | Note that `align_corners=None` does give the same result as `tf.image.resize`, to an absolute tolerance of 1e-6 or so:
```python
upsampled_logits_pt = torch.nn.functional.interpolate(
dummy_logits_pt, size=interp_shape, mode="bilinear", align_corners=None
)
```
<|||||>I had to face a similar problem.
With the `torch.nn.functional.interpolate`:
- On `align_corners=True`, the best option is to use the `tf.compat.v1.image.resize`
- On `align_corners=False`, the `tf.image.resize` does the trick.
Here is a [colab notebook](https://colab.research.google.com/gist/ariG23498/39a20bd536ffaedd145310e2b1c4a1b6/scratchpad.ipynb) that details the solution that I proposed.
@amyeroberts and @gante were against using the `tf.compat.v1.image.resize` for obvious reasons. @amyeroberts did come up with a solution which is documented in this [comment](https://github.com/huggingface/transformers/pull/18020#discussion_r953674162). I hope this provides some value to this thread.
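For quick reference, the `align_corners=False` half of that pairing looks roughly like this (a small sketch, not a drop-in utility; the NCHW to NHWC layout conversion is the only extra step):

```python
import numpy as np
import tensorflow as tf
import torch

x = np.random.rand(1, 3, 8, 8).astype(np.float32)

# PyTorch bilinear upsampling with align_corners=False (half-pixel centers).
pt_out = torch.nn.functional.interpolate(
    torch.from_numpy(x), size=(16, 16), mode="bilinear", align_corners=False
)

# TF2's tf.image.resize uses the same half-pixel convention, but expects NHWC inputs.
tf_out = tf.image.resize(tf.transpose(x, (0, 2, 3, 1)), size=(16, 16), method="bilinear")
tf_out = tf.transpose(tf_out, (0, 3, 1, 2))

print(np.abs(pt_out.numpy() - tf_out.numpy()).max())  # small, but not exactly zero in float32
```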
<|||||>Thanks @hollance and @ariG23498 for chiming in.
When there's one / two interpolation ops, the small differences are fine but when they are done several times (as mentioned earlier), these differences compound up which creates mismatches in the final outputs. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Pinging this to keep it open - we've implemented TF versions of Torch ops like `AdaptivePool`, so this might be something to explore as well. I hope a performant solution can be implemented with native ops (with or without XLA compilation) without needing to write our own CUDA, though!<|||||>> Pinging this to keep it open - we've implemented TF versions of Torch ops like AdaptivePool, so this might be something to explore as well.
Could you point me to something relevant? Would love to see the new implementations. <|||||>Hi @sayakpaul, I'm extremely sorry! I thought we'd implemented it in `data2vec2` already, but checking the code it seems like we still have the old version there. Here's a [much, much more performant version](https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638).<|||||>The new version will also allow XLA compilation, unlike the original sparse implementation<|||||>Oh yeah, I remember this one. I need to update the data2vec code with your latest implementation. Reminded me of that.
Thanks, Matt! <|||||>Although, checking out that notebook, it seems like Pytorch's `interpolate` and TF's `resize` are using the same algorithm, and the differences are mostly numerical; when I switch the dtype to `float64`, the max absolute difference is 1e-7.
I think this will make it extremely hard to bring the accuracies any closer - we would have to rewrite the internals of the algorithm so that they're not just mathematically equivalent, but they lose precision in the same places as well. I think we're stuck with the issue, unfortunately!<|||||>Fair enough. The tiny differences just add up when there's multiple such interpolations. Otherwise, it's not a big deal. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ping.
I don't think it is resolved yet.<|||||>I doubt it will be apples-to-apples resolved ever, considering https://github.com/huggingface/transformers/issues/18811#issuecomment-1262563941.
We need to be aware when comparing predictions of models (PT vs. TF) with stacks of interpolations, especially focusing on what tolerances we're using. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,810 | closed | SSLError |
Version:
Python 3.8
transformers 4.21.0
I am trying to use DistilBertTokenizer using the following line:
`tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased", do_lower_case=True)`
but an `SSLError` arises from the line above:
`requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /distilbert-base-uncased/resolve/main/vocab.txt (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))`
Does anyone manage to solve this? I tried some methods online but still got the same error. | 08-30-2022 08:16:54 | 08-30-2022 08:16:54 | It seems there is an issue with your SSL module and `requests`, not with `transformers`; I would head to Stack Overflow or another forum focused on these issues, we're unfortunately unlikely to be able to help you out here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,809 | closed | Changing a single example for BLOOM 176-B affects forward pass for other examples in a batch | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@thomasw21, @younesbelkada This issue is for unexpected BLOOM outputs.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I wrote this script to get the conditional NLL for the labels given the context.
I tried different batches with only the first example changing and the rest of the examples fixed in the batch. However, after a certain point, changing the first example affects the NLL for the other examples.
This is not supposed to happen.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "bigscience/bloom"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
max_memory={0: '0GIB', 1: '51GIB', 2: '51GIB', 3: '51GIB',
4: '51GIB', 5: '51GIB', 6: '51GIB', 7: '51GIB'},
torch_dtype=torch.bfloat16,
)
model.eval()
def compute_gen_loss(lm_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
batch_size = labels.shape[0]
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)),
shift_labels.view(-1)
)
loss = loss.reshape(batch_size, -1)
loss = loss.sum(dim=-1) / (shift_labels != -100).sum(dim=-1)
return loss
def pad_ids(arrays, padding, max_length=-1):
if (max_length < 0):
max_length = max(list(map(len, arrays)))
arrays = [[padding] * (max_length - len(array)) +
array for array in arrays]
return arrays
def forward(text: list, labels: list, conditional: bool = True):
input_tokens = tokenizer(text).input_ids
label_tokens = tokenizer(labels).input_ids
input_ids = [x + y for (x, y) in zip(input_tokens, label_tokens)]
attention_mask = [(len(x) + len(y)) * [1]
for (x, y) in zip(input_tokens, label_tokens)]
if (conditional):
labels = [[-100] * len(x) + y for (x, y)
in zip(input_tokens, label_tokens)]
else:
labels = input_ids
pad = 3
input_ids = pad_ids(input_ids, pad)
attention_mask = pad_ids(attention_mask, 0)
# labels need to be on output device
labels = pad_ids(labels, -100)
input_ids = torch.tensor(input_ids)
attention_mask = torch.tensor(attention_mask)
labels = torch.tensor(labels)
lm_logits = model(
input_ids=input_ids,
attention_mask=attention_mask
).logits
print(compute_gen_loss(lm_logits, labels).cpu().tolist())
text = [
"DeepSpeed",
"DeepSpeed is a",
"DeepSpeed is a machine",
"DeepSpeed is a machine learning framework",
]
labels = [
" is awesome.",
" good person.",
" that can wipe out the planet.",
" for generating memes.",
]
forward(text, labels)
labels[0] = " is awesome. really awesome"
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it."
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised"
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed."
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed. Oh no the values are bugging out now."
forward(text, labels)
```
```shell
[4.8125, 5.1875, 3.296875, 5.09375]
[5.625, 5.1875, 3.296875, 5.09375]
[4.375, 5.1875, 3.296875, 5.09375]
[4.0625, 5.1875, 3.28125, 5.09375]
[3.953125, 5.1875, 3.28125, 5.0625]
[4.25, 5.1875, 3.296875, 5.09375]
```
The value drops from 3.29 to 3.28 in column 2 when only the example in column 0 is changed. Even column 3 changes in the last case.
Only column 0 is supposed to change here.
### Expected behavior
```shell
[4.8125, 5.1875, 3.296875, 5.09375]
[5.625, 5.1875, 3.296875, 5.09375]
[4.375, 5.1875, 3.296875, 5.09375]
[4.0625, 5.1875, 3.296875, 5.09375]
[3.953125, 5.1875, 3.296875, 5.09375]
[4.25, 5.1875, 3.296875, 5.09375]
``` | 08-29-2022 21:51:32 | 08-29-2022 21:51:32 | Hey! It's a bit hard to run a testing env with bloom, can you share a reproductible script with a smaller model?
This looks like some instability from torch.bfloat16, and I'm willing to bet that those values come from there (both 3.28 occurrences are exactly the same, so it seems like a rounding error to me; we can perhaps check that those values are consecutive values in bfloat16, i.e. there's no value between 3.28 and 3.29). What I think might be happening is that you're adding `pad` as you increase the length of the labels, and those pad values change the behaviour of previous values. I don't think we have much control over this as this relies on `torch` operators usually.
Also if you can run on `main` that'd be great, typically https://github.com/huggingface/transformers/pull/18344 hasn't been incorporated yet in a release and I think it fixed a bunch of instabilities.<|||||>Thanks @thomasw21 for taking a look at this. I will try to reproduce this with a smaller model (say GPT-2) and get back on this. I will also try main branch.<|||||>Also, since there are no batch-norm ops in BLOOM. I don't really understand why this should happen. Also, since the pads have been given an attention mask = 0. Shouldn't the output be the same?
Maybe I am understanding this incorrectly.<|||||>hi @mayank31398 !
Thanks for pointing out this issue!
To wrap up what I have understood from your issue: when doing batched generation, changing the value of one of the labels changes the value of the loss function. If I understood correctly the labels are not used when inferring there, so the problem should occur when computing the loss (*i.e.,* the input text is always fixed, right?).
I tried your script on the `main` branch using `gpt2` as below:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
)
# lm_logits = torch.randn((4, 11, 250880), dtype=torch.bfloat16)
def compute_gen_loss(lm_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
batch_size = labels.shape[0]
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)),
shift_labels.view(-1)
)
loss = loss.reshape(batch_size, -1)
loss = loss.sum(dim=-1) / (shift_labels != -100).sum(dim=-1)
return loss
def pad_ids(arrays, padding, max_length=-1):
if (max_length < 0):
max_length = max(list(map(len, arrays)))
arrays = [[padding] * (max_length - len(array)) +
array for array in arrays]
return arrays
def forward(text: list, labels: str, conditional: bool = True):
input_tokens = tokenizer(text).input_ids
label_tokens = tokenizer(labels).input_ids
input_ids = [x + y for (x, y) in zip(input_tokens, label_tokens)]
attention_mask = [(len(x) + len(y)) * [1]
for (x, y) in zip(input_tokens, label_tokens)]
if (conditional):
labels = [[-100] * len(x) + y for (x, y)
in zip(input_tokens, label_tokens)]
else:
labels = input_ids
pad = 3
input_ids = pad_ids(input_ids, pad)
attention_mask = pad_ids(attention_mask, 0)
# labels need to be on output device
labels = pad_ids(labels, -100)
input_ids = torch.tensor(input_ids)
attention_mask = torch.tensor(attention_mask)
labels = torch.tensor(labels)
lm_logits = model(
input_ids=input_ids,
attention_mask=attention_mask
).logits
print(compute_gen_loss(lm_logits, labels).cpu().tolist())
text = [
"DeepSpeed",
"DeepSpeed is a",
"DeepSpeed is a machine",
"DeepSpeed is a machine learning framework",
]
labels = [
" is awesome.",
" good person.",
" that can wipe out the planet.",
" for generating memes.",
]
forward(text, labels)
labels[0] = " is awesome. really awesome"
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it."
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised"
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed."
forward(text, labels)
labels[0] = " is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed. Oh no the values are bugging out now."
forward(text, labels)
```
and getting
```
[10.3125, 7.0, 3.609375, 7.65625]
[8.25, 7.0, 3.609375, 7.65625]
[6.84375, 7.0, 3.609375, 7.65625]
[3.78125, 7.09375, 6.9375, 8.5625]
[4.34375, 9.5, 8.6875, 10.75]
[4.53125, 9.6875, 9.0, 12.125]
```
I suspect that logits may be flaky when using half-precision models, therefore I second what @thomasw21
suspected ;)!<|||||>Hey, first of all: sorry for the late reply.
Thanks for trying out my example with gpt2 @younesbelkada
Any way to get around this then?
I guess computing logits in bf16 might not be the best we can do?<|||||>Okay, I think the gpt2 test isn't instability. Essentially it's the absolute positional embeddings that are screwing with you: as you increase the label size you add padding to the left and shift everything to the right, which is why you see big shifts in the loss.
I do think that the bloom test is instability. Typically `3.28125` and `3.296875` are consecutive.
```
>>> import torch
>>> torch.set_printoptions(precision=10)
>>> torch.frombuffer(bytes(np.array([83,64], np.int8)), dtype=torch.bfloat16)
tensor([3.2968750000], dtype=torch.bfloat16)
>>> torch.frombuffer(bytes(np.array([82,64], np.int8)), dtype=torch.bfloat16) # replace 83 with 82
tensor([3.2812500000], dtype=torch.bfloat16)
>>> torch.frombuffer(bytes(np.array([-94,64], np.int8)), dtype=torch.bfloat16)
tensor([5.0625000000], dtype=torch.bfloat16)
>>> torch.frombuffer(bytes(np.array([-93,64], np.int8)), dtype=torch.bfloat16)
tensor([5.0937500000], dtype=torch.bfloat16)
```
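One way to reduce this rounding error (a sketch on my side, reusing `model`, `input_ids`, `attention_mask`, `labels` and `compute_gen_loss` exactly as defined in the script from the issue) is to upcast the logits to fp32 before the loss:
```python
with torch.no_grad():
    lm_logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
# Upcast only the final logits to float32 before the cross-entropy / NLL computation;
# the forward pass itself stays in bfloat16.
loss = compute_gen_loss(lm_logits.float(), labels)
print(loss.cpu().tolist())
```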
So as you said you can try computing the logits in fp32, which will increase precision (but will be slower). There's a bit of a workaround as you need to cast the embedding layers to fp32 and such.<|||||>Everything makes sense in your explanation @thomasw21 ! Missed the absolute positional embedding part. Thanks for explaining it πͺ <|||||>I guess this is not a fixable problem then right?
I think even in BLOOM AliBi might be screwing up with attention values right?
So, even if we have padded, the result will change.
Thanks for the clarification @thomasw21.
I think we can close this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,808 | closed | [Update README] Add SegFormer and ViLT links | # What does this PR do?
As we now also have inference widgets for semantic segmentation & visual question answering, it makes sense to add them to the main README. | 08-29-2022 12:57:32 | 08-29-2022 12:57:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Very cool! We've got a finetune guide for semantic segmentation in the works at #18640 right now, with hopefully VQA to come soon. |
transformers | 18,807 | closed | supported dynamic batch for torchscript trace | # What does this PR do?
Fixup for supporting dynamic batch for Swin-Transformer.
Fixes #18806
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@novice03
@sgugger
@NielsRogge
| 08-29-2022 12:41:44 | 08-29-2022 12:41:44 | Hi,
Thanks for your PR. Note that, when contributing, you need to run `make fixup` from the root of the repo, which will fix the code style and check the quality of the code. Normally here, it will complain that you need to run `make fix-copies`, which ensures that other models that rely on Swin's implementation also get updated (in this case, Donut).
Thanks! <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18807). All of your documentation changes will be reflected on that endpoint.<|||||>Hi, @MenglingD .
In order to make CircleCI tests run, could you follow [this instruction](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) to refresh your CircleCI token, and let's see if it fixes the test not running issue.
Thank you! |
transformers | 18,806 | closed | Swin trace is not correctedly for dynamic batch. | ### System Info
transformers: v4.21.2
system: centos
python version: 3.6
### Who can help?
@sgugger @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python3
import torch
from models import build_model
from config import _C, _update_config_from_file
config = _C.clone()
_update_config_from_file(config, "configs/swin/swin_tiny_patch4_window7_224.yaml")
model = build_model(config).cuda().eval()
input1 = torch.randn((1, 3, 224, 224)).to("cuda")
input2 = torch.randn((2, 3, 224, 224)).to("cuda")
jit_model = torch.jit.trace(model, input1)
assert((model(input1) - jit_model(input1)).abs().sum() == 0)
assert((model(input2) - jit_model(input2)).abs().sum() == 0)
```
### Expected behavior
Expected nothing, but got `AssertionError`. | 08-29-2022 12:33:58 | 08-29-2022 12:33:58 | Wondering why this wasn't caught by the torchscript tests, cc @michaelbenayoun <|||||>The batch dimension is forcibly cast to an integer, which prevents Swin from supporting dynamic batch sizes ([modeling_swin.py#L218](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L218)):
```python3
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))
windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)
windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1)
return windows
```
The following modification has the same semantics but supports dynamic batch sizes:
```python3
def window_reverse(windows, window_size, height, width):
"""
Merges windows to produce higher resolution features.
"""
channels = int(windows.shape[-1])
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, channels)
windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, height, width, channels)
return windows
```
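A quick standalone check of the patched function (my own sketch; `window_reverse` here is the fixed version defined just above):
```python
import torch

# batch 2: a 56x56 feature map with window size 7 gives 8 * 8 = 64 windows per image
x = torch.randn(2 * 64, 7, 7, 96)
traced = torch.jit.trace(lambda t: window_reverse(t, 7, 56, 56), x)

# batch 4: the traced graph no longer hard-codes the batch size
y = torch.randn(4 * 64, 7, 7, 96)
assert torch.equal(traced(y), window_reverse(y, 7, 56, 56))
```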
ps: I opened a pull request for this before (#18807), but I can't run `make fixup` as my Python version is too old (3.6.8), so I closed that PR. I may need to trouble you to fix it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @MenglingD,
I am not even able to trace the model except if I disable `SwinLayer.set_shift_and_window_size` [here](https://github.com/huggingface/transformers/blob/v4.21-release/src/transformers/models/swin/modeling_swin.py#L649).
But when disabling it, I am able to get it to work with the change you recommended.
I think we can apply this fix, but as long as `SwinLayer.set_shift_and_window_size` is not tracing friendly, it won't fix a thing.<|||||>> Hi @MenglingD, I am not even able to trace the model except if I disable `SwinLayer.set_shift_and_window_size` [here](https://github.com/huggingface/transformers/blob/v4.21-release/src/transformers/models/swin/modeling_swin.py#L649). But when disabling it, I am able to get it to work with the change you recommended.
>
> I think we can apply this fix, but as long as `SwinLayer.set_shift_and_window_size` is not tracing friendly, it won't fix a thing.
OK, thanks.
I wonder why `SwinLayer.set_shift_and_window_size` is not tracing friendly. The `input_dimensions` is fixed at the deployment phase, and that is fine for `jit.trace`, which will simply pick one branch of the if-else statement. I am interested in this trace error; could you please provide more details about it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,805 | closed | Fix luke docstring | # What does this PR do?
The output logits of `LukeForEntitySpanClassification` have shape `(batch_size, entity_length, config.num_labels)` instead of `(batch_size, config.num_labels)`, which is the shape of the `LukeForEntityClassification` logits.
https://github.com/huggingface/transformers/blob/8b67f20935e48b26c5803cf31e0e89b9cfaa22ab/tests/models/luke/test_modeling_luke.py#L402-L404
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-29-2022 10:21:57 | 08-29-2022 10:21:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,804 | closed | Fix mock in `test_cached_files_are_used_when_internet_is_down` | # What does this PR do?
Fix the CI that is currently breaking because of the [0.9.1 patch release](https://github.com/huggingface/huggingface_hub/releases/tag/v0.9.1) of `huggingface_hub`.
Problem is that we are looking at the response from the server when having a HTTPError. In the tests, the response is mocked which makes `response.json()` a mock instead of a dictionary. I now set it to `{}` which means an empty response from the server.
See [slack thread](https://huggingface.slack.com/archives/C01NE71C4F7/p1661526937492729) (internal link) for more context.
# Expected result
The CI should now pass correctly. | 08-29-2022 10:20:23 | 08-29-2022 10:20:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you, @Wauplin . As my knowledge of `Mock` is also mocked, here is my noob question:
The `response` object here
```
server_message = response.json().get("error", None)
```
is the `response_mock` in
```
response_mock.json.return_value = {}
```
?
And if we don't set `return_value` as done in this PR, `response.json()["error"]` is also a mocked object? (Is it `response_mock` itself, or another one).
<|||||>Hi @ydshieh, I'll try to explain the PR a bit.
In the tests, we are defining a Mock object `response_mock`. In python, Mock are objects on which you can call any attribute and it will return a new mock object. Since object methods are also attributes, they are mocked as well.
Here is a short example to understand it better:
```py
>>> from unittest.mock import Mock
>>> my_mock = Mock()
# the mock we defined
>>> my_mock
<Mock id='140556566369952'>
# `foo` is also a mock we different id
>>> my_mock.foo
<Mock name='mock.foo' id='140556566370816'>
# `foo` can be called as a function and return a new mock with different id
>>> my_mock.foo()
<Mock name='mock.foo()' id='140556566424112'>
# `.foo()` is a mock so you can call anything from it
>>> my_mock.foo().bar
<Mock name='mock.foo().bar' id='140556530612016'>
# now we set a custom return value for `.foo()`
>>> my_mock.foo.return_value = 4
# `.foo` still a mock
>>> my_mock.foo
<Mock name='mock.foo' id='140556566370816'>
# `.foo()` is now a "normal" value, here the integer 4
>>> my_mock.foo()
4
# not a mock so cannot call `.bar` from `4`
>>> my_mock.foo().bar # not a mock so cannot call `.bar` from `4`
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute 'bar'
```
The goal of a mock is that almost any manipulation done on it will not fail. It will continue to pass mocked values which in the end you can test. There are a few tweaks you can do on a mock, like raising an error (which is done with the `side_effect` in the implemented test) or returning a special value (like in this PR).
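A tiny illustration of those two tweaks (my own sketch, independent of the code in this PR):
```python
from unittest.mock import Mock

import requests

my_mock = Mock()

# return_value: the call returns a fixed value instead of a fresh mock
my_mock.json.return_value = {}
assert my_mock.json() == {}

# side_effect: the call raises instead of returning
my_mock.head.side_effect = requests.exceptions.HTTPError("offline")
try:
    my_mock.head("https://huggingface.co")
except requests.exceptions.HTTPError:
    print("raised as configured")
```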
---
To come back to your initial question: when `"requests.request"` is patched, it means that if `requests.request` is called in this context, the actual code will not run; instead the `response_mock` will be returned. This is done so that no real HTTP call is made.
So yes, when in `huggingface_hub` there is a `response.json().get("error", None)`, it is actually `response_mock.json().get("error", None)`. Since a return value is set for `response_mock.json()`, this is equivalent to doing `{}.get("error", None)` (which is None).
```py
with mock.patch("requests.request", return_value=response_mock) as mock_head:
_ = BertConfig.from_pretrained("hf-internal-testing/tiny-random-bert")
```
And finally, to be complete, mock objects are also nice because they count how many calls have been made to them. So at the end of the test, `mock_head.assert_called()` means that we expect the mock to have been called at least once, otherwise the test would fail.
Hope this helps you understand what mocks are doing :)<|||||>@ydshieh I answered your question in the previous comment but posted it when it was half-written by mistake. Now the explanation is complete :)
Please let me know if you have other questions. Otherwise I'll let you merge it. |
transformers | 18,803 | closed | [Swin, Swinv2] Fix attn_mask dtype | # What does this PR do?
This PR fixes the dtype of the `attn_mask`, making mixed precision training possible.
Fixes #17481 | 08-29-2022 09:23:22 | 08-29-2022 09:23:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,802 | closed | `load_tf_weights` doesn't handle the weights added to the TF models at the top level | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.9.11
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Reproduction
(TF)MarianMTModel has weights `final_logits_bias` added at the top-level (i.e. not under any layer)
https://github.com/huggingface/transformers/blob/5f06a09b9f3f05b4860f11bbbe22861923b49d81/src/transformers/models/marian/modeling_tf_marian.py#L1287
However, the method `load_tf_weights` only handles weights under some layers:
https://github.com/huggingface/transformers/blob/5f06a09b9f3f05b4860f11bbbe22861923b49d81/src/transformers/modeling_tf_utils.py#L850
This causes a problem when we load TF checkpoints for `TFMarianMTModel`, i.e. `final_logits_bias` is not loaded.
```python
from transformers import MarianMTModel, TFMarianMTModel
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
pt_model = MarianMTModel.from_pretrained(model_name)
tf_model_from_pt = TFMarianMTModel.from_pretrained(model_name, from_pt=True)
tf_model = TFMarianMTModel.from_pretrained(model_name, from_pt=False)
# Only has `TFMarianMainLayer` in `layers`
print(tf_model.layers)
print(pt_model.final_logits_bias.numpy())
print(tf_model_from_pt.final_logits_bias.numpy())
print(tf_model.final_logits_bias.numpy())
```
Outputs:
```bash
[<transformers.models.marian.modeling_tf_marian.TFMarianMainLayer object at 0x000001F00ECE9940>]
[[11.757146 -1.7759448 -7.3816853 ... -1.6559223 -1.6663467 0. ]]
[[11.757146 -1.7759448 -7.3816853 ... -1.6559223 -1.6663467 0. ]]
[[0. 0. 0. ... 0. 0. 0.]]
```
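Until that is fixed, a possible user-side workaround (just a sketch, reusing `pt_model` and `tf_model` from the snippet above and assuming `final_logits_bias` on the TF side is a `tf.Variable` created via `add_weight`) is to copy the buffer over manually:
```python
# Copy the PyTorch buffer into the TF weight that was skipped during loading.
tf_model.final_logits_bias.assign(pt_model.final_logits_bias.numpy())
print(tf_model.final_logits_bias.numpy())  # should now match the PyTorch values
```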
### Expected behavior
`load_tf_weights` should be able to load weights like `final_logits_bias`, and the TF checkpoint should be loaded correctly. | 08-29-2022 08:52:28 | 08-29-2022 08:52:28 | Related to #18149.<|||||>cc @patrickvonplaten as we might need to change the core method `load_tf_weights`. |
transformers | 18,801 | closed | Add security warning about the from_pretrained() method | # What does this PR do?
This PR adds a warning to the docs about the `from_pretrained()` method being susceptible to malicious code injection. This is similar to other warnings provided in the [PyTorch](https://pytorch.org/docs/stable/generated/torch.load.html#torch.load) and [Python](https://docs.python.org/3/library/pickle.html#module-pickle) docs.
For now I've put this in the autoclass tutorial, but can also put it in the API docs if we agree this makes sense.
Questions:
* Does the same issue apply to TensorFlow models?
* Does the [malware scanner](https://huggingface.co/docs/hub/security-malware) catch malicious code injection for _all_ Hub repos (public and private)?
h/t to @yk who pointed this out to me.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @sgugger @McPatate
| 08-29-2022 08:38:12 | 08-29-2022 08:38:12 | > Does the [malware scanner](https://huggingface.co/docs/hub/security-malware) catch malicious code injection for all Hub repos (public and private)?
It doesn't "catch malicious code injection" per se, it extracts the list of module-function pairs that can be called when unpickling. We still haven't implemented anything on top of that.
To answer your question, we're only scanning public repositories atm.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really `torch.load` that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.
I'd rather focus on showing how using the Hub is likely a smaller security risk than using an external tool like GDrive where no such verification can be done.<|||||>> Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really `torch.load` that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.
>
> I'd rather focus on showing how using the Hub is likely a smaller security risk than using an external tool like GDrive where no such verification can be done.
Good idea! Added a sentence about the scanner and reworded the text in [8098c4b](https://github.com/huggingface/transformers/pull/18801/commits/8098c4bede79f2e3f95308fbd47fd5b326f502e3)<|||||>I'm not disagreeing that the HF hub is a smaller risk compared to other things, just pointing out that the malware scanner currently lists my model as safe (and due to the mechanics of torch.save, it will always be evadable). And signed commits are super useful, but they don't mitigate this particular problem, as the signatory would be me, and therefore the signature valid.<|||||>> due to the mechanics of torch.save, it will always be evadable
@yk would you mind expanding on that ?<|||||>> @yk would you mind expanding on that ?
torch.save uses pickle, which in turn allows for arbitrary code execution. If the malware scanner detects loaded modules, I can simply un-load them after I've used them. If the malware scanner looks for the presence of certain instructions, strings, etc. I can always evade it somehow, that's just the nature of turing-completeness. I could probably even DDOS the malware scanner by just running an infinite loop. I'm happy to show more, and I'm going to, but I was waiting to release that info publicly before going through the responsible disclosure process here. DM me in case you want an early insight.<|||||>> If the malware scanner detects loaded modules, I can simply un-load them after I've used them.
I get the list of module-function pairs directly from the pickle opcodes ([`pickletools.genops`](https://docs.python.org/3/library/pickletools.html#pickletools.genops)), without executing anything. We went through some pretty [sophisticated exploits](https://ctftime.org/writeup/16723) and from what I saw when replicating there is always a trace of the module/function, e.g. if you wanted to run `eval` but alias it, you would still see the original `eval` reference.
> I could probably even DDOS the malware scanner by just running an infinite loop.
Since I'm going through the instruction list in the pickle file, there would be no DDOS possible via code execution. You could always generate a massive pickle file to bloat the scanner, but then you'd run into issues like uploading files to the hub before our scanner even goes through them + we can easily add heuristics to mark these files as inherently unsafe.
> If the malware scanner looks for the presence of certain instructions, strings, etc. I can always evade it somehow, that's just the nature of turing-completeness.
I'd like to see how you do that, happy to have the early insight, but I can wait for the public release :)
Note that you cannot serialize functions or lambdas in a pickle file, you can only execute functions that are in scope at deserialization time.
cc @adrinjalali
EDIT: thank you for expanding !<|||||>It seems there are actually ways to DDOS the scanner, we currently wouldn't catch the sploits [referenced here](https://github.com/moreati/pickle-fuzz#denial-of-service) (meaning we'd proceed to deserialization).<|||||>> I get the list of module-function pairs directly from the pickle opcodes
fair point, then I interpreted your previous statement in the wrong way. yea that could actually work to mitigate some of these things. At this time, my model here is still marked as safe, though: https://huggingface.co/ykilcher/totally-harmless-model/tree/main<|||||>> At this time, my model here is still marked as safe, though: https://huggingface.co/ykilcher/totally-harmless-model/tree/main
That's normal, at the time we haven't implemented any checks per se, we just extract the data from the pickles on the hub.
(coming soon)<|||||>> Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really torch.load that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.
@LysandreJik saving and loading pickle files is only insecure if the author is not _trusted_, which is the case for the HF hub. The hub is a place where people share arbitrary pickle files, so it becomes a hub issue. It wouldn't be an issue if people kept using pickles from sources they trust. And the scanner doesn't really detect all vulnerabilities, and can't. For instance, when joblib does an `eval` on a given `n_jobs` parameter, there you could do whatever you want. In terms of signatures, we check signatures, but we don't check who the people behind the accounts are (and I don't think we should).
So I do think people should be wary when they load pickles from the hub.
Note that AFAIK in pretty much all major communities (like pytorch, sklearn, etc) people know about this and are working on having better solutions. But for now, users should be aware of the risks.<|||||>BTW @McPatate as discussed it would be awesome if we could write a post (or even some documentation) about what we currently know about pickle safety, and the potential next steps we are working on.
And maybe @yk you'd be interested in taking a look? Would be awesome to have your insights :)<|||||>@julien-c sure, I'm happy to contribute what I know<|||||>hi @yk the team (@McPatate in particular) wrote this document https://huggingface.co/docs/hub/security-pickle the doc's source is at https://github.com/huggingface/hub-docs/pull/294 β would love to get your feedback (including on the proposed solutions) Thanks! |
transformers | 18,800 | closed | Fix gradient checkpointing tests for `encoder-decoder` models | # What does this PR do?
The recently added tests (#18697) didn't send the model to the correct device and caused some test failed. This PR fixes it. | 08-29-2022 08:11:26 | 08-29-2022 08:11:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry about that! Thanks for fixing @ydshieh |
transformers | 18,799 | closed | torchmetrics support | ### Feature request
In the eval loop, HF transformers collects and keeps in memory the predictions and targets until the end of evaluation, and only then computes the metrics passed via callback.
When transformers is used to fine-tune a model with a big dataset and many labels (in my case up to 220k labels), the predictions and targets take a huge amount of RAM. It's especially noticeable with DDP and many GPUs per node.
It would be great to add [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/index.html) support, where metrics can be computed for every eval step and then averaged at the end.
With torchmetrics it's also possible to compute metrics on GPU in a distributed way, which makes eval significantly faster.
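For illustration, a rough sketch of what this could look like in the evaluation loop (hypothetical code on my side — `model`, `eval_dataloader` and `num_labels` are placeholders, and the exact `Accuracy` signature depends on the torchmetrics version):
```python
import torch
import torchmetrics

metric = torchmetrics.Accuracy(task="multiclass", num_classes=num_labels).to(model.device)

for batch in eval_dataloader:
    with torch.no_grad():
        logits = model(**batch).logits
    metric.update(logits.argmax(dim=-1), batch["labels"])  # updated per step, stays on GPU

accuracy = metric.compute()  # synced across DDP processes
metric.reset()
```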
### Motivation
With the current metrics callback I can't run eval with > 120k labels.
I tried to sparsify and/or cut the data; however, as the number of labels grows this becomes inefficient anyway.
Eventually I had to update the trainer to compute torchmetrics for every eval step on GPU, which fully solved this issue in my case.
So, it would be great to have this feature out-of-the-box.
### Your contribution
I can create PR with appropriate changes if you also view this feature as useful. | 08-29-2022 07:28:21 | 08-29-2022 07:28:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger, could you please share your opinion on above feature request ? ^^<|||||>At this stage we're not planning to add this no. You can always subclass the Trainer to put your own evaluation loop in it however.<|||||>> At this stage we're not planning to add this no. You can always subclass the Trainer to put your own evaluation loop in it however.
this is exactly what I did. I just thought that it could be useful for others because I saw some other reports in GH about OOM in eval.<|||||>Note that you can use `eval_accumulation_steps` to avoid taking too much space on the GPU. There is no need to use torchmetrics for that. |
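For reference, a sketch of that suggestion (hypothetical values; `eval_accumulation_steps` moves the accumulated predictions to the CPU every N eval steps instead of keeping everything on the GPU):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=32,
)
```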
transformers | 18,798 | closed | Support mixed precision FP16 in TF Segformer | Nan loss | ### Description
I'm trying to bring mixed precision training (FP16) support to [TF Segformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py) .
I had to do a very small modification to the inital code for supporting Fp16 ([my commit ](https://github.com/huggingface/transformers/commit/41d2bd145111a743a577f15955bac58a74871b33)).
Every time, the net converges fine for the first 10-15 epochs, and then the loss suddenly goes to NaN.
Any idea?
I'm using
`policy = mixed_precision.Policy("mixed_float16")` ([TF doc here ](https://www.tensorflow.org/guide/mixed_precision))
### System
Tensorflow version 2.8.2
Cuda 11.6
Nvidia titan X
### Who can help?
@sayakpaul @NielsRogge
### Reproduction
1) Modify 1 line of code as in ([my commit ](https://github.com/huggingface/transformers/commit/41d2bd145111a743a577f15955bac58a74871b33)).
2) Launch a training with mixed precision FP16
3) wait 10~15 epoch
### Expected behavior
The net shoudl train fine with FP16 mixed precision | 08-29-2022 07:00:46 | 08-29-2022 07:00:46 | Thanks! Are you casting the softmaxed outputs or the logits to FP32 before loss computation as recommended [here](https://www.tensorflow.org/guide/mixed_precision#building_the_model)?
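A minimal sketch of that recommendation (hypothetical training code on my side, since none was provided — `model`, `pixel_values` and `labels` are placeholders, and `optimizer` is assumed to be wrapped in `mixed_precision.LossScaleOptimizer`):
```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

with tf.GradientTape() as tape:
    logits = model(pixel_values, training=True).logits
    logits = tf.cast(logits, tf.float32)           # compute the loss in float32
    loss = loss_fn(labels, logits)
    scaled_loss = optimizer.get_scaled_loss(loss)  # loss scaling to avoid underflow
grads = tape.gradient(scaled_loss, model.trainable_variables)
grads = optimizer.get_unscaled_gradients(grads)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```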
Nit: You didn't provide any training code. I guess it's always better to provide a self-contained Colab Notebook or a notebook that anyone can spin up to debug the issue further. I also acknowledge that Colab doesn't always provide a GPU that has support for Tensor cores but I hope you got the point. <|||||>thanks for the quick answer,
Even though my commit above doesn't include it, yes I tried casting the logits output to FP32, and sadly it didn't solve it :/
Good point about the small code example for debugging, I will do one today :) <|||||>Then we need to inspect the layer-wise activations and the weight distributions that are probably impacted by the casting. Maybe try using the TensorBoard callback and inspect if that's the case. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,797 | closed | CLIPTextModel gives invalid output for zeroed attention mask | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.33
- Python version: 3.8.6
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.12.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Reproduction
```python
from transformers import CLIPTextModel
import torch
model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
inputs = {
"input_ids": torch.tensor([[49406, 320, 1125, 539, 320, 1929, 49407]]),
"attention_mask": torch.tensor([[0, 0, 0, 0, 0, 0, 0]])
}
outputs = model(**inputs)
```
Given the zeroed attention mask, the attention weights should be all equal here:
https://github.com/huggingface/transformers/blob/21f6f58721dd9154357576be6de54eefef1f1818/src/transformers/models/clip/modeling_clip.py#L246
However, causal and attention masks are added separately ([here](https://github.com/huggingface/transformers/blob/21f6f58721dd9154357576be6de54eefef1f1818/src/transformers/models/clip/modeling_clip.py#L228-L246)), so in this case, before going through softmax, certain values are twice as small as the other ones (to be more precise, some values are -min_float and other are -inf). Consequently, softmax outputs probabilities that match the causal mask.
This is also the case for `TFCLIPTextModel`. | 08-28-2022 23:33:24 | 08-28-2022 23:33:24 | I've also experienced this issue, and anecdotally this inconsistency seems to impact the quality of stable diffusion outputs from https://github.com/huggingface/diffusers. More specifically, when trying to port the stable diffusion pipeline to another framework (e.g. like Flax where the implementation is not present) using the provided stable diffusion weights, the images which rely on PT's CLIPTextModel are mostly incoherent.
I am assuming Stable Diffusion was trained using the PT CLIPTextModel, and thus results rely on this inconsistent/invalid text embedding? <|||||>Thanks a lot for the issue @jonatanklosko !
This indeed seems a bit strange. I see two solutions here:
- We could combine the causal mask and the attention mask (see the sketch below) and see what happens
- Or, instead of using additive masks, we could replace the masked values with large negative numbers.
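A minimal sketch of the first option (my illustration; variable names follow the snippet linked in the issue, and both masks are assumed additive — 0 for visible positions, a large negative value for masked ones):
```python
import torch

# Take the elementwise minimum so a position masked by either mask gets the large
# negative value exactly once, instead of summing the two offsets.
combined_mask = torch.minimum(causal_attention_mask, attention_mask)
attn_weights = attn_weights + combined_mask
```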
However, is it really a bug? As long as the values for masked positions are much lower than those for non-masked tokens, those tokens will still be ignored. Do you have an example where you see the masked positions are not ignored?
@seanmor5 good point!
>I've also experienced this issue, and anecdotally this inconsistency seems to impact the quality of stable diffusion outputs from https://github.com/huggingface/diffusers
I'm not sure if this impacts the quality of diffusers, for example, as discussed in this [issue](https://github.com/huggingface/diffusers/issues/233), we have verified that the results are 1:1 with the original repo.
>I am assuming Stable Diffusion was trained using the PT CLIPTextModel, and thus results rely on this inconsistent/invalid text embedding?
Yes, it was trained using `CLIPTextModel`, but both this training and the actual pre-trained CLIP model never used attention mask. They always pad the sequence to max_len 77 and use causal mask. This is how we recommend to use the stable diffusion model. I know this is not ideal but that's how it was trained. cf https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/encoders/modules.py#L155
Also cc @patrickvonplaten , wdyt ?<|||||>>e.g. like Flax where the implementation is not present
It's coming soon https://github.com/patil-suraj/stable-diffusion-jax/<|||||>Also quite interested in the use case of masking all text tokens - when would this make sense?<|||||>@patil-suraj consider the following weights
```python
import torch
from torch.nn.functional import softmax
softmax(torch.tensor([1.0, 0.0]))
#=> tensor([0.7311, 0.2689])
```
if we add large negative values, then the actual weights are neglectable and softmax outputs equal values:
```python
softmax(torch.tensor([1.0 - 1e30, 0.0 - 1e30]))
#=> tensor([0.5000, 0.5000])
```
however if we also add large negative values from causal mask this makes the values disproportional:
```python
softmax(torch.tensor([1.0 - 1e30, 0.0 - 1e30 - 1e30]))
#=> tensor([1., 0.])
```
In the flax implementation masks are combined and negative values are added just once (though apparently the output still differs, because the flax version uses `-1e4` as the negative value). But thinking about this, shouldn't all weights be 0 when masking all tokens? If so, modifying the input to softmax is not enough.
@patrickvonplaten
> Also quite interested in the use case of masking all text tokens - when would this make sense?
This came up with unconditional input to stable diffusers, I believe [this part](https://github.com/huggingface/diffusers/blob/16172c1c7ef6ec721bfe4d0787313519157749a1/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L92-L94). @seanmor5 please correct me if I'm wrong :)<|||||>Interesting! Thanks for the pointer @jonatanklosko - I think though that even when doing:
The tokenizer should return tokens that should be attended to. E.g. taking the line here: https://github.com/huggingface/diffusers/blob/16172c1c7ef6ec721bfe4d0787313519157749a1/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L92-L94
and doing:
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("openai/clip-vit-large-patch14")
uncond_input = tok(
[""] * 2, padding="max_length", max_length=77, return_tensors="pt"
)
print(uncond_input)
```
I get:
```
{'input_ids': tensor([[49406, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407],
[49406, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,
49407, 49407, 49407, 49407, 49407, 49407, 49407]]), 'attention_mask': tensor([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0]])}
```
which shows that at least two tokens are attended to (so > 0 positions of attention_mask are equal to 1) which should avoid the problem described above. What would be the use case where all values in `attention_mask` are 0?<|||||>@patrickvonplaten Ahh! This was my mistake, when doing some initial testing with the stable diffusion weights in our framework I noticed after a step or 2 the latent results started to diverge from the PT stable diffusion implementation. I noticed some slight differences between the results of the text encoder from PT and our implementation (within some reasonable amount of precision) and ended up writing an invalid test case where all of the attention masks were 0. The source of our issue probably lies elsewhere then, sorry for the confusion!<|||||>No worries! Better be safe than sorry! :-) <|||||>So I think we can close this :)
The additive mask before softmax is a trick that works under the assumption that the mask has at least a single 1. So from what I understand, it can be said that for a zeroed mask the output of most models is just not well defined, and that's fine given the use cases so far.
transformers | 18,796 | closed | Fix docstring for BartForSequenceClassification | # What does this PR do?
Fixes docstring for BartForSequenceClassification, which uses the last time step of the last hidden state for classification.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-28-2022 21:53:26 | 08-28-2022 21:53:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,795 | closed | Add docstring for BartForCausalLM | # What does this PR do?
Adds docstring for BartForCausalLM
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@patrickvonplaten, @patil-suraj | 08-28-2022 18:42:33 | 08-28-2022 18:42:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,794 | closed | fix the description of token used for Bart classification | # What does this PR do?
Fixes the description of the token used for Bart classification. It uses the last EOS token and not a special pooled output.
https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bart/modeling_bart.py#L1520-L1527
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@patrickvonplaten, @patil-suraj | 08-28-2022 18:09:51 | 08-28-2022 18:09:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten - does this look fine to you?<|||||>@ArthurZucker @gante could you take a look here? <|||||>> @ArthurZucker @JoaoLages could you take a look here?
Maybe you wanted to call João Gante and not me, but here are my 2 cents anyway: the description that @ekagra-ranjan wrote is correct but the previous description was not incorrect either.
Taking the last EOS token embedding from the model's output is a type of pooling. For example, [in BERT we take the first token instead, that corresponds to the CLS token](https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bert/modeling_bert.py#L653) and we also have [the same description for `BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bert/modeling_bert.py#L1509).
The previous description is simpler, more general and it is not incorrect. Not against having more descriptive docstrings, but then it would make sense to review all the `(...)ForSequenceClassification` classes, not only BART.
<|||||>Sorry @JoaoLages :sweat_smile: ! You are right indeed<|||||>@JoaoLages I see your point and I guess you are right! Thanks for sharing your thoughts. |
transformers | 18,793 | closed | [BUG] Getting different sentence embeddings when using model on CPU and GPU | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik @sg
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import RobertaConfig, RobertaModel, RobertaTokenizer
import torch
import numpy as np

device = "cuda" if torch.cuda.is_available() else "cpu"

# Initializing the tokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Initializing a RoBERTa configuration
configuration = RobertaConfig()
configuration.vocab_size = tokenizer.vocab_size

# Initializing a model from the configuration
model = RobertaModel(configuration)
model = model.to(device)
model = model.eval()

with torch.no_grad():
    tokenized_task = tokenizer('random_sentence_check_v000', return_tensors="pt")
    outputs = model(**tokenized_task.to(device))
    embedding = outputs.pooler_output.squeeze(0).cpu().numpy().tolist()
```
### Expected behavior
I should get same sentence embedding either on CPU and GPU from a pretrained model | 08-28-2022 17:27:48 | 08-28-2022 17:27:48 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>To fix this issue, we need to set the seed values which I forgot. |
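For readers landing here later: the model in the reproduction is built from a fresh config, so its weights are randomly initialized, and without seeding the CPU and GPU runs start from different weights. A minimal sketch of the seeding meant above (the helper name is illustrative; `transformers` also ships a similar `set_seed` utility):
```python
import random
import numpy as np
import torch

def set_all_seeds(seed: int = 42) -> None:
    # Seed every RNG that influences weight initialization so the randomly
    # initialized RobertaModel starts from identical weights on CPU and GPU.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_all_seeds(0)  # call this before building the model in the script above
```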
transformers | 18,792 | closed | UnimplementedError: The Conv2D op currently does not support grouped convolutions on the CPU. | # Info
```
Colab TPU v2
Kaggle TPU v3
TensorFlow: 2.4.1
Transformer: 4.22.0.dev0
```
# Who can help?
@Rocketknight1 @NielsRogge @sgugger @amyeroberts
# Information
- [ ] The official example scripts
- [X] My own modified scripts
# Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
# Reproduction
Please get the file from [HERE](https://drive.google.com/file/d/1qAo4PsZ8DekZqFzF5Oq2E3nkJ3aJWFuM/view?usp=sharing). It is a notebook script, just plug-n-play.
**What to do**
1. Run the script in Colab with TPU.
2. Run the script in Kaggle with TPU.
You should not need to change anything; just upload the file to these platforms and run all cells.
# Expected behavior
**What I was doing**
With the script given above, I was trying to run a vision transformer model on a Kaggle TPU (with TF 2.4.1 by default), and I got:
```python
2 prime_input = tf.keras.Input(shape=(*IMAGE_SIZE, 3))
3 mode_inputs = tf.keras.layers.Permute(dims=(3, 1, 2))(prime_input)
----> 4 backbone = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224")
5 backbone.trainable = False
....
171 def call(self, hidden_states, training=False):
172 input = hidden_states
--> 173 x = self.dwconv(hidden_states)
174 x = self.layernorm(x)
175 x = self.pwconv1(x)
```
> UnimplementedError: The Conv2D op currently does not support grouped convolutions on the CPU. A grouped convolution was attempted to be run because the input depth of 96 does not match the filter input depth of 1
A known TF issue, also discussed [here](https://github.com/tensorflow/tensorflow/issues/29005). But this issue didn't appear when I ran the same script on the Colab TPU (with `tf 2.4.1`) system: the model built successfully.
As I am currently using transformers on the Kaggle platform, I need to make this work. The script above only covers the model construction code. Any pointers on what's going on here?
Please note again, Kaggle TPU v3 and Colab TPU v2. Not sure if it's something to do with this. | 08-28-2022 16:45:51 | 08-28-2022 16:45:51 | cc @gante <|||||>Hey @innat! That exception doesn't come from `transformers`, but from TensorFlow itself. As you mentioned, using a newer TF version does not result in an error.
I'm afraid we won't be able to help you here :)<|||||>@gante
Sorry, I think I didn't explain it well. Here is a short and exact summary.
> I ran transformers on a Colab TPU and a Kaggle TPU with TensorFlow 2.4.1. The script that I shared above works on the Colab TPU but throws an error on the Kaggle TPU.
Please let me know if you had trouble running the script.<|||||>@gante Here are the prepared notebooks. They should be easy to test.
- [kaggle-tpu](https://www.kaggle.com/code/ipythonx/transformer-issue-github-18792/)
- [colab-tpu](https://drive.google.com/file/d/1qAo4PsZ8DekZqFzF5Oq2E3nkJ3aJWFuM/view?usp=sharing)<|||||>Grouped convolutions on CPU were added in TensorFlow 2.5, so 2.4 won't work. That's why Colab with TensorFlow 2.9 isn't affected. Have you tried updating TensorFlow on Kaggle with `!pip install -U tensorflow`?<|||||>I set the accelerator to **TPU** in both the Colab and Kaggle environments. I'm using `TF 2.4.1` in both environments. With this setup, the ConvNeXT model built successfully on the Colab TPU (with `tf 2.4.1`) but didn't work on the Kaggle TPU (with `tf 2.4.1`).
Please note, I'm **NOT** using `TF 2.9.1` in either the Colab or Kaggle environments. Have you run the code that I shared above? I think it's straightforward to get on the same page.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
The last comment came from the original OP. This bot message doesn't make sense. Instead of such an alert, it should tag `stat:awaiting:transformers`.<|||||>Hi @innat -- as I've mentioned above, this is an issue for the Kaggle and/or the TensorFlow team; there is nothing the `transformers` team can do.
We don't have the power to go back in time and add code to a repository that isn't ours, nor to update Kaggle's TPU runtimes. |
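As a side note for anyone debugging this later, here is a minimal way to reproduce the limitation outside of `transformers` (assumed repro, not taken from the linked notebooks): ConvNeXT's depthwise convolution is expressed as a grouped `Conv2D`, which TensorFlow older than 2.5 cannot run on CPU.
```python
import tensorflow as tf

x = tf.random.normal((1, 56, 56, 96))
# groups=96 turns this into a depthwise-style grouped convolution, like ConvNeXT uses.
depthwise = tf.keras.layers.Conv2D(filters=96, kernel_size=7, padding="same", groups=96)
y = depthwise(x)  # UnimplementedError on CPU with TF 2.4; works on TF >= 2.5
print(y.shape)
```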
transformers | 18,791 | closed | Fix decode_input_ids to bare T5Model and improve doc | # What does this PR do?
* Fix 1: use the tokenizer to obtain the labels as tensors. `docs/source/en/model_doc/t5.mdx`
* Fix 2: `src/transformers/models/t5/`
* Present case: T5 prepends the decoder_input_ids with pad token. This preprocessing is handled internally by `T5ForConditionalGeneration` by shifting the labels to the right.
* Issue: This preprocessing needs to be done manually while using bare T5Model. This is missing from the example which uses bare T5Model.
* Proposed Fix: Added a preprocessing step in the example so that the input matches with what T5 expects at its decoder. The PR reuses the `_shift_right()` method which is an internal function to T5. Please let me know if we can rename `_shift_right()` to `shift_right()` or if there is a better way to handle this.
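For context, a minimal sketch of the preprocessing described in the bullets above (treat it as illustrative: `_shift_right` is a private helper on the T5 classes, so the exact wording in the docs may differ):
```python
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")

input_ids = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt").input_ids
labels = tokenizer("Studies show that", return_tensors="pt").input_ids

# Bare T5Model does not shift the labels for you: prepend the decoder start (pad) token manually.
decoder_input_ids = model._shift_right(labels)
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
print(outputs.last_hidden_state.shape)
```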
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger @patrickvonplaten @patil-suraj
| 08-28-2022 15:09:38 | 08-28-2022 15:09:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the fixes! <|||||>@patrickvonplaten Thanks for the review! Applied your suggestions. |
transformers | 18,790 | closed | circular import issue when importing with `transformers` and `happytransformer` | ### System Info
Libraries:
```
transformers 4.21.2
happytransformer 2.4.1
huggingface-hub 0.9.1
torch 1.12.1
tensorflow 2.9.1
```
Env:
```
python 3.10
Windows 10.0.19044.1889
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the entry file (`code.py`):
```
from ping import onichan
from pong import araara
if __name__ == '__main__':
pass
```
A module called `ping.py`:
```
from happytransformer import HappyTextToText, TTSettings
def onichan():
print('onichan')
```
Another module called `pong.py`:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
def araara():
print('araara')
```
And I have a file named `__init__.py` to initialize the module.
Directory structure:
```
├── code.py
├── ping.py
├── pong.py
└── __init__.py
```
### Expected behavior
Not getting this error, I would say.
Here is the complete trace:
```
C:\Users\ShibaInu\Desktop\err>python code.py
2022-08-28 18:51:13.374326: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-08-28 18:51:13.374507: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1002, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Python310\lib\site-packages\transformers\pipelines\__init__.py", line 37, in <module>
from .audio_classification import AudioClassificationPipeline
File "C:\Python310\lib\site-packages\transformers\pipelines\audio_classification.py", line 20, in <module>
from .base import PIPELINE_INIT_ARGS, Pipeline
File "C:\Python310\lib\site-packages\transformers\pipelines\base.py", line 34, in <module>
from ..modelcard import ModelCard
File "C:\Python310\lib\site-packages\transformers\modelcard.py", line 44, in <module>
from .training_args import ParallelMode
File "C:\Python310\lib\site-packages\transformers\training_args.py", line 26, in <module>
from .trainer_utils import (
File "C:\Python310\lib\site-packages\transformers\trainer_utils.py", line 47, in <module>
import tensorflow as tf
File "C:\Python310\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Python310\lib\site-packages\tensorflow\python\__init__.py", line 42, in <module>
from tensorflow.python import data
File "C:\Python310\lib\site-packages\tensorflow\python\data\__init__.py", line 21, in <module>
from tensorflow.python.data import experimental
File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 95, in <module>
from tensorflow.python.data.experimental import service
File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 387, in <module>
from tensorflow.python.data.experimental.ops.data_service_ops import distribute
File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 22, in <module>
from tensorflow.python.data.experimental.ops import compression_ops
File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py", line 16, in <module>
from tensorflow.python.data.util import structure
File "C:\Python310\lib\site-packages\tensorflow\python\data\util\structure.py", line 22, in <module>
from tensorflow.python.data.util import nest
File "C:\Python310\lib\site-packages\tensorflow\python\data\util\nest.py", line 36, in <module>
from tensorflow.python.framework import sparse_tensor as _sparse_tensor
File "C:\Python310\lib\site-packages\tensorflow\python\framework\sparse_tensor.py", line 24, in <module>
from tensorflow.python.framework import constant_op
File "C:\Python310\lib\site-packages\tensorflow\python\framework\constant_op.py", line 25, in <module>
from tensorflow.python.eager import execute
File "C:\Python310\lib\site-packages\tensorflow\python\eager\execute.py", line 24, in <module>
from tensorflow.python.framework import ops
File "C:\Python310\lib\site-packages\tensorflow\python\framework\ops.py", line 23, in <module>
from absl import app
File "C:\Python310\lib\site-packages\absl\app.py", line 31, in <module>
import pdb
File "C:\Python310\lib\pdb.py", line 77, in <module>
import code
File "C:\Users\ShibaInu\Desktop\err\code.py", line 1, in <module>
from ping import onichan
ImportError: cannot import name 'onichan' from partially initialized module 'ping' (most likely due to a circular import) (C:\Users\ShibaInu\Desktop\err\ping.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\ShibaInu\Desktop\err\code.py", line 1, in <module>
from ping import onichan
File "C:\Users\ShibaInu\Desktop\err\ping.py", line 1, in <module>
from happytransformer import HappyTextToText, TTSettings
File "C:\Python310\lib\site-packages\happytransformer\__init__.py", line 1, in <module>
from happytransformer.happy_question_answering import HappyQuestionAnswering
File "C:\Python310\lib\site-packages\happytransformer\happy_question_answering.py", line 7, in <module>
from transformers import QuestionAnsweringPipeline, AutoModelForQuestionAnswering
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 992, in __getattr__
module = self._get_module(self._class_to_module[name])
File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1004, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'onichan' from partially initialized module 'ping' (most likely due to a circular import) (C:\Users\ShibaInu\Desktop\err\ping.py)
``` | 08-28-2022 13:30:46 | 08-28-2022 13:30:46 | Got resolved after restarting the environment. Weird.<|||||>Hi, I had a similar issue by simply running "from transformers import GPT2Tokenizer"
This is the error I got:
`ImportError: cannot import name 'GPT2Tokenizer' from partially initialized module 'transformers' (most likely due to a circular import)` |
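For anyone hitting the same trace: the log above shows the standard-library `pdb` module running `import code` and picking up the local `code.py`, which is what triggers the circular import. A quick, hedged way to confirm the shadowing from the project directory:
```python
import importlib.util

spec = importlib.util.find_spec("code")
# If this prints a path inside your project (e.g. ...\Desktop\err\code.py) instead of the
# standard library, the entry file is shadowing the stdlib `code` module; renaming it fixes the import.
print(spec.origin)
```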
transformers | 18,789 | closed | fix the error which will make shape not match occur. | # What does this PR do?
When tracing the model with PyTorch, the feature map size becomes inconsistent, so the export fails. The reason is that `+=` is used in the size-calculation code on an object that happens to be a shallow copy, which mutates `input_shape` in place. The problem can be solved by assigning to a new variable directly.
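A toy illustration of the aliasing problem described above (deliberately simplified, not the actual model code):
```python
input_shape = [2, 256]

buggy = input_shape        # no copy: `buggy` is the very same list object
buggy += [49]              # in-place extend, so input_shape silently becomes [2, 256, 49]
print(input_shape)         # [2, 256, 49]

input_shape = [2, 256]
safe = input_shape + [49]  # building a new list leaves input_shape untouched
print(input_shape, safe)   # [2, 256] [2, 256, 49]
```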
# Test script
```python3
from einops import repeat
from transformers import BertTokenizerFast,LayoutXLMTokenizerFast,LayoutLMv2ForSequenceClassification
import torch
## dummy inputs
dummy_input_ids = torch.LongTensor(torch.randint(low=0, high=1000, size=(2,256)))#.cuda()
box = [[48, 84, 73, 128]] * 256
dummy_bboxes = repeat(torch.LongTensor(box).unsqueeze(0), '1 b s-> 2 b s')
dummy_attention_mask = torch.LongTensor(torch.randint( low=0, high=1024, size=(2, 256) ))#.cuda()
dummy_imgs = torch.randn(2, 3, 448, 448)#.cuda()
dummy_token_ids = torch.zeros_like(dummy_attention_mask)
dummy_inputs = [
dummy_input_ids,
dummy_bboxes,
dummy_imgs,
dummy_attention_mask,
dummy_token_ids
]
with torch.no_grad():
model = LayoutLMv2ForSequenceClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", torchscript=True,num_labels=30522)
model.eval()
traced_model = torch.jit.trace(func=model,
strict=False,
example_inputs=dummy_inputs)
torch.jit.save(traced_model,'temp.pt')
model = torch.jit.load('temp.pt')
model.eval()
with torch.no_grad():
result = model(*dummy_inputs)
print(result)
```
**If the script is run before the code is modified, the feature map sizes will be inconsistent.**
# Who can review?
@LysandreJik
| 08-28-2022 05:50:18 | 08-28-2022 05:50:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18789). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,788 | closed | Improve Text Generation doc | # What does this PR do?
* Add relevant args explicitly in beam search decoding example in generation utils
* Clarify that it is the PAD token, not the EOS token, that is absent in GPT-2 (see the sketch below)
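A short, hedged sketch of the second point (illustrative only, not the exact example added to the docs): because GPT-2 defines an EOS token but no PAD token, generation examples typically reuse the EOS id as the pad id.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today I believe we can finally", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=5,
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated PAD token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```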
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger
| 08-27-2022 18:21:05 | 08-27-2022 18:21:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @sgugger for the review. I have applied your suggestions. |
transformers | 18,787 | closed | Improve GPT2 doc | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the typos and dimensions of args in doc of GPT2.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger | 08-27-2022 18:12:27 | 08-27-2022 18:12:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,786 | closed | reflect max_new_tokens in `Seq2SeqTrainer` | # What does this PR do?
in most cases, VisionEncoderDecoderModel's `max_length` is set implicitly.
it leads to the problem if the model generates prediction given `max_new_tokens`.
this PR makes `max_new_tokens` handled as expected in `Seq2SeqTrainer. prediction_step()` in the case.
Fixes #18785
P.S. I can reproduce the issue if using `huggingface/transformers`.
but, using this PR with same codes to reproduce, no exceptions raised.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
| 08-27-2022 17:15:39 | 08-27-2022 17:15:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh, could you take a look at this when you have some time please? Thanks a lot!<|||||>Hi, @kumapo
I believe it also requires a change in
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L129
right?<|||||>@ydshieh, yes. at same time I believe `Seq2SeqTrainer.evaluate()` needs the same change.
<|||||>@sgugger, thank you for your feedback. I've updated the PR.<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>@LysandreJik, I've done all steps to refresh circleci permission.
but it seems that nothing happens with tests. let me know if I missed something to be known.<|||||>Can you try pushing an empty commit on your branch to re-trigger the tests?
```
git commit -m "Trigger CI" --allow-empty
```<|||||>To pass the test, you can run
```bash
make style
```
and commit the change. |
transformers | 18,785 | closed | Raise ValueError if given max_new_tokens to `Seq2SeqTrainer.predict()` | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.6.4 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: `yes`
- Using distributed or parallel set-up in script?: `no`
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```Python3
model = transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k",
"bert-base-uncased"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
eval_ds = datasets.load_dataset(
"kumapo/stair_captions_dataset_script", "2014",
data_dir="../input/coco-2014-val", split="validation", streaming=True
)
# do some preprocessing eval_ds with map() ..
training_args = transformers.Seq2SeqTrainingArguments(
predict_with_generate=True,
fp16=False,
output_dir="output/",
report_to="none",
)
trainer = transformers.Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
data_collator=transformers.default_data_collator
)
_ = trainer.predict(eval_ds, max_new_tokens=16)
```
then, `ValueError: Both max_new_tokens and max_length have been set but they serve the same purpose` raised:
```
ValueError Traceback (most recent call last)
/tmp/ipykernel_23/2318841552.py in <module>
61 data_collator=transformers.default_data_collator,
62 )
---> 63 _ = trainer.predict(eval_ds, max_new_tokens=16)
/opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py in predict(self, test_dataset, ignore_keys, metric_key_prefix, **gen_kwargs)
135 self._gen_kwargs = gen_kwargs
136
--> 137 return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
138
139 def prediction_step(
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in predict(self, test_dataset, ignore_keys, metric_key_prefix)
2844 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
2845 output = eval_loop(
-> 2846 test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
2847 )
2848 total_batch_size = self.args.eval_batch_size * self.args.world_size
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
2947
2948 # Prediction step
-> 2949 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
2950 inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None
2951
/opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py in prediction_step(self, model, inputs, prediction_loss_only, ignore_keys)
201 generated_tokens = self.model.generate(
202 generation_inputs,
--> 203 **gen_kwargs,
204 )
205 # in case the batch is shorter than max length, the output should be padded
/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs)
1237 elif max_length is not None and max_new_tokens is not None:
1238 raise ValueError(
-> 1239 "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a"
1240 " limit to the generated output length. Remove one of those arguments. Please refer to the"
1241 " documentation for more information. "
ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
```
### Expected behavior
nothing raised. | 08-27-2022 16:56:10 | 08-27-2022 16:56:10 | thank you all for kind supports! |
transformers | 18,784 | closed | Improve GPT2 doc | # What does this PR do?
Fixes typos and the documented argument dimensions in the GPT-2 docs.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@sgugger | 08-27-2022 13:12:17 | 08-27-2022 13:12:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Superseeded by #18787 |
transformers | 18,783 | closed | Fix broken link DeepSpeed documentation link | # What does this PR do?
Fixes a broken DeepSpeed documentation link. The current `<a>` is not working. | 08-27-2022 07:06:38 | 08-27-2022 07:06:38 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 18,782 | closed | Memory increment and release when loading model via PretrainedModel.from_pretrained | Issue
---
I was trying to understand the memory usage when loading a Hugging Face model. I found that when loading the model via `AutoModelForMaskedLM.from_pretrained("bert-base-uncased")`, the resulting increment in memory was (1) larger than the cached BERT model on disk (859MB vs. 421MB) and (2) when deleting the variable, not all of the allocated memory got released. On the other hand, if I just do `torch.load("[path to cached model]")`, the memory allocation and release matched and the number was very close to that on disk. May I know why there was such a difference in behaviour?
Code to reproduce the issue
---
```python
import torch
from transformers import AutoModelForMaskedLM
from memory_profiler import profile
@profile
def hf_load():
# bert-base-uncased: 421MB on disk
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
del model
@profile
def direct_load():
model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu')
del model
```
Profile
---
`hf_load`:
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
13 241.9 MiB 241.9 MiB 1 @profile
14 def hf_load():
15 # bert-base-uncased: 421MB on disk
16 1100.4 MiB 858.5 MiB 1 model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
17 683.0 MiB -417.4 MiB 1 del model
```
`direct_load`:
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
19 240.6 MiB 240.6 MiB 1 @profile
20 def direct_load():
21 661.4 MiB 420.8 MiB 1 model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu')
22 241.7 MiB -419.7 MiB 1 del model
```
To supplement, I also observed that when running `hf_load` above multiple times, the memory usage was rather unobvious.
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
19 239.7 MiB 239.7 MiB 1 @profile
20 def multiple_hf_load():
21 681.0 MiB 441.3 MiB 1 hf_load()
22 1008.8 MiB 327.9 MiB 1 hf_load()
23 919.4 MiB -89.4 MiB 1 hf_load()
24 993.6 MiB 74.2 MiB 1 hf_load()
25 995.9 MiB 2.2 MiB 1 hf_load()
26 992.9 MiB -3.0 MiB 1 hf_load()
```
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
34 240.8 MiB 240.8 MiB 1 @profile
35 def multiple_direct_load():
36 242.0 MiB 1.1 MiB 1 direct_load()
37 241.9 MiB -0.1 MiB 1 direct_load()
38 241.9 MiB 0.0 MiB 1 direct_load()
39 241.9 MiB 0.0 MiB 1 direct_load()
40 241.9 MiB 0.0 MiB 1 direct_load()
41 241.9 MiB 0.0 MiB 1 direct_load()
```
It increased in the first two times, but did not keep increasing from the third time onwards. I wonder how could this be explained.
P.S. Also attached the case for `direct_load` above. No increment was observed.
Supplementary information
---
OS: 5.10.60.1-microsoft-standard-WSL2, 4.15.0-1113-azure #126~16.04.1-Ubuntu
Python: 3.8.12
PyTorch: 1.11.0
Transformers: 4.21.2
@LysandreJik | 08-27-2022 06:49:45 | 08-27-2022 06:49:45 | @ydshieh has been looking into memory leaks as well recently and might have some insights for you!<|||||>Hi @tobyych
Could you also try to add `gc.collect()` after `del model` in both loading methods, and see what you get (memory usage) after `gc.collect()` is done. You have to import `gc`.<|||||>Hi @ydshieh,
Tried to add `gc.collect()` after `del model`.
For single `hf_load`,
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
14 245.5 MiB 245.5 MiB 1 @profile
15 def hf_load():
16 # bert-base-uncased: 421MB on disk
17 1103.9 MiB 858.4 MiB 1 model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
18 686.2 MiB -417.7 MiB 1 del model
19 266.5 MiB -419.7 MiB 1 gc.collect()
```
For single `direct_load`,
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
32 240.8 MiB 240.8 MiB 1 @profile
33 def direct_load():
34 661.5 MiB 420.7 MiB 1 model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu')
35 241.9 MiB -419.6 MiB 1 del model
36 241.9 MiB 0.0 MiB 1 gc.collect()
```
For multiple `hf_load`,
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
22 241.4 MiB 241.4 MiB 1 @profile
23 def multiple_hf_load():
24 263.1 MiB 21.6 MiB 1 hf_load()
25 921.1 MiB 658.1 MiB 1 hf_load()
26 263.3 MiB -657.9 MiB 1 hf_load()
27 263.3 MiB 0.0 MiB 1 hf_load()
28 263.3 MiB 0.0 MiB 1 hf_load()
29 263.3 MiB 0.0 MiB 1 hf_load()
```
For multiple `direct_load`,
```
Line # Mem usage Increment Occurrences Line Contents
=============================================================
38 240.8 MiB 240.8 MiB 1 @profile
39 def multiple_direct_load():
40 241.9 MiB 1.1 MiB 1 direct_load()
41 242.0 MiB 0.1 MiB 1 direct_load()
42 242.0 MiB 0.0 MiB 1 direct_load()
43 242.0 MiB 0.0 MiB 1 direct_load()
44 242.0 MiB 0.0 MiB 1 direct_load()
45 248.0 MiB 6.0 MiB 1 direct_load()
```
With the explicit garbage collection, single `hf_load` could release the remaining memory that wasn't released previously. However, still not sure why it used double memory compared to `direct_load`.
As for multiple `hf_load`, the results looked much better after adding the explicit garbage collection as the memory used after several loads dropped to 263.3MB instead of the ~1000MB seen previously.
Any clue what wasn't collected by the GC previously?<|||||>Hi, @tobyych
Glad to know `gc.collect()` works.
I don't know what wasn't collected by the GC previously. In general, (I believe) it's not easy to know exactly what `gc` has done and at which point. If you are able to investigate and share your findings, it would be great.
I will try to check the `double memory` part.<|||||>@ydshieh, I tried to inspect the objects that were collected by the GC between `del model` and `gc.collect()` using the code below, where I aggregated the sizes of Python objects by their modules. I filtered those from the `transformers` package and might be a starting point for your side to see which part of code might be related.
```python
def hf_load():
# bert-base-uncased: 421MB on disk
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
del model
d = defaultdict(int)
for o in gc.get_objects():
try:
d[o.__module__] += sys.getsizeof(o)
except:
d['others'] += sys.getsizeof(o)
for k, v in d.items():
if type(k) is str and k.startswith("transformers"):
print(k, v)
gc.collect()
```
```
transformers.modeling_utils 19120
transformers.utils.doc 1224
transformers.utils.versions 408
transformers.utils.logging 6664
transformers.utils.import_utils 21552
transformers.utils.generic 10363
transformers.utils.hub 7928
transformers.utils 136
transformers.dependency_versions_check 136
transformers.utils.dummy_speech_objects 2400
transformers.utils.dummy_tensorflow_text_objects 1200
transformers.utils.dummy_sentencepiece_and_speech_objects 1200
transformers.utils.dummy_timm_objects 4008
transformers.utils.dummy_scatter_objects 6136
transformers.utils.dummy_tf_objects 390408
transformers.utils.dummy_flax_objects 187200
transformers.dynamic_module_utils 1224
transformers.tokenization_utils_base 21309
transformers.tokenization_utils 6480
transformers.models.t5.tokenization_t5 3104
transformers.convert_slow_tokenizer 42888
transformers.tokenization_utils_fast 4328
transformers.models.t5.tokenization_t5_fast 1744
transformers.configuration_utils 6640
transformers.models.auto.configuration_auto 7184
transformers.models.auto.auto_factory 22544
transformers.models.auto.modeling_auto 27936
transformers.onnx.utils 1656
transformers.onnx.config 9288
transformers.models.bert.configuration_bert 2232
transformers.models.albert.configuration_albert 2400
transformers.models.bart.configuration_bart 3216
transformers.models.big_bird.configuration_big_bird 2400
transformers.models.roberta.configuration_roberta 2400
transformers.models.camembert.configuration_camembert 2264
transformers.models.convbert.configuration_convbert 2400
transformers.models.data2vec.configuration_data2vec_text 2400
transformers.models.deberta.configuration_deberta 2672
transformers.models.deberta_v2.configuration_deberta_v2 2672
transformers.models.distilbert.configuration_distilbert 2400
transformers.models.electra.configuration_electra 2400
transformers.models.xlm.configuration_xlm 2400
transformers.models.flaubert.configuration_flaubert 2400
transformers.models.fnet.configuration_fnet 1200
transformers.models.funnel.configuration_funnel 1744
transformers.models.ibert.configuration_ibert 2400
transformers.models.layoutlm.configuration_layoutlm 2672
transformers.models.longformer.configuration_longformer 1200
transformers.models.luke.configuration_luke 1200
transformers.models.mbart.configuration_mbart 3216
transformers.models.megatron_bert.configuration_megatron_bert 1200
transformers.models.mobilebert.configuration_mobilebert 2400
transformers.models.mpnet.configuration_mpnet 1200
transformers.models.mvp.configuration_mvp 1200
transformers.models.nezha.configuration_nezha 1200
transformers.models.nystromformer.configuration_nystromformer 1200
transformers.feature_extraction_utils 5256
transformers.models.perceiver.configuration_perceiver 2672
transformers.models.qdqbert.configuration_qdqbert 1200
transformers.models.reformer.configuration_reformer 1200
transformers.models.rembert.configuration_rembert 1200
transformers.models.roformer.configuration_roformer 2400
transformers.models.squeezebert.configuration_squeezebert 2400
transformers.models.tapas.configuration_tapas 1200
transformers.models.wav2vec2.configuration_wav2vec2 1336
transformers.models.xlm_roberta.configuration_xlm_roberta 2264
transformers.models.xlm_roberta_xl.configuration_xlm_roberta_xl 2400
transformers.models.yoso.configuration_yoso 1200
transformers.activations 14496
transformers.modeling_outputs 45024
transformers.deepspeed 3896
transformers.generation_beam_constraints 9944
transformers.generation_beam_search 6568
transformers.generation_logits_process 25728
transformers.generation_stopping_criteria 6752
transformers.pytorch_utils 2288
transformers.generation_utils 17464
transformers.models.bert.modeling_bert 41152
```<|||||>@tobyych
I opened a PR #18832 which could solve this issue. Notice that this is not a real memory issue however. `GC` usually makes its own decision for when to collect. But it's not bad if we can release some memory earlier. <|||||>Thanks @ydshieh! |
transformers | 18,781 | closed | Add inference section to task guides | Currently, the task guides only show how to finetune a model, but it doesn't directly connect the dots to how you can use that model for inference. For completeness, this PR adds a section to the task guides to show how to use a model for inference after finetuning. This gives users a better overview of the model lifecycle.
In doing so, when we update `task_summary.mdx`, we can focus less on the practical steps of how to use a model for inference (we can add links to the task guides) and instead discuss the more theoretical aspects of these tasks as intended by the Conceptual Guide section. | 08-26-2022 23:14:06 | 08-26-2022 23:14:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the suggestions! I reworked the `sequence_classification` task a bit, and if we like the changes, then I can apply them to the other tasks. Main changes are:
- Encourage users to login with their HF accounts so they can push their finetuned models. Along these lines, I've added `push_to_hub` and the `PushToHubCallback`.
- Added a section to include a `compute_metrics` function so users can evaluate their models during training.
I think adding these two will help the task guides be more complete :) |
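For reference, the kind of `compute_metrics` function mentioned above could look like this minimal sketch (metric choice and names are illustrative, not the exact snippet added to the guides):
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```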
transformers | 18,780 | closed | TAPAS model usage issue | ### System Info
Hi,
I have some issues when I am using TAPAS.
`from transformers import TapasConfig, TapasForQuestionAnswering `
reports the following error:
RuntimeError: Failed to import transformers.models.tapas.modeling_tapas because of the following error (look up to see its traceback):
module 'distutils' has no attribute 'version'
After I do `pip install setuptools==59.5.0`
it just returned the following error:
Segmentation fault (core dumped)
Here is my env
[transformers](https://pypi.python.org/pypi/transformers)==4.20.1
[evaluate](https://pypi.python.org/pypi/evaluate)==0.1.2
[bert-score](https://pypi.python.org/pypi/bert-score)==0.3.11
[datasets](https://pypi.python.org/pypi/datasets)==2.3.2
[accelerate](https://pypi.python.org/pypi/accelerate)
[deepspeed](https://pypi.python.org/pypi/deepspeed)==0.6.5
[wordninja](https://pypi.python.org/pypi/wordninja)==2.0.0
[sacrebleu](https://pypi.python.org/pypi/sacrebleu)==2.1.0
[fasttext](https://pypi.python.org/pypi/fasttext)==0.9.2
[nltk](https://pypi.python.org/pypi/nltk)==3.7
[scikit-learn](https://pypi.python.org/pypi/scikit-learn)==1.0.2
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import TapasConfig, TapasForQuestionAnswering
### Expected behavior
Segmentation fault (core dumped)
| 08-26-2022 20:38:30 | 08-26-2022 20:38:30 | This is likely an error due to a mismatch of versions between your Torch and TorchScatter installations<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as we've answered the question, feel free to re-open if you still have this issue |
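For anyone debugging the same crash, a quick hedged check of the version pairing mentioned in the answer (if the second import itself segfaults, that is already a strong hint the builds do not match):
```python
import torch
print("torch:", torch.__version__, "cuda:", torch.version.cuda)

import torch_scatter  # must be built against the exact torch/CUDA combination printed above
print("torch_scatter:", torch_scatter.__version__)
```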
transformers | 18,779 | closed | fix a typo in auto feature extraction for videomae | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18778
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @sgugger
| 08-26-2022 19:18:49 | 08-26-2022 19:18:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,778 | closed | Incorrect auto feature extractor for videomae | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Windows-10-10.0.19043-SP0
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@NielsRogge @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install dev dependencies:
```bash
cd transformers
pip install -e ".[dev]"
```
2. Init new model:
```bash
transformers-cli add-new-model-like
>>What is the model you would like to duplicate?
videomae
```
### Expected behavior
It should ask me:
`Will your new model use the same processing class as videomae (VideoMAEFeatureExtractor)?`
Instead, it asks:
`Will your new model use the same processing class as videomae (ViTFeatureExtractor)?` | 08-26-2022 19:17:07 | 08-26-2022 19:17:07 | Tried to fix it with a PR: https://github.com/huggingface/transformers/pull/18779 |
transformers | 18,777 | closed | Cache results of is_torch_tpu_available() | # What does this PR do?
`xm.xla_device()` (called by `is_torch_tpu_available()`) hangs when called multiple times while no XLA device is available, and this results in the Trainer hanging. Since `torch_xla` is currently used as long as it is installed in the active Python environment, I encountered this issue even when I only wanted to run the Trainer with PyTorch on GPU.
The detailed reason behind the `torch_xla` behaviour is still under investigation (see https://github.com/pytorch/xla/issues/3939).
To work around this issue, this PR adds `lru_cache` to `is_torch_tpu_available()`, so that `xm.xla_device()` is guaranteed to be called only once when no XLA device is available.
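A simplified sketch of the idea (hypothetical function name, not the exact patch): memoizing the probe means the potentially hanging device lookup runs at most once per process.
```python
from functools import lru_cache

@lru_cache()
def is_torch_tpu_available_cached() -> bool:
    try:
        import torch_xla.core.xla_model as xm  # only importable when torch_xla is installed
        xm.xla_device()                        # may raise (or stall) when no XLA device exists
        return True
    except Exception:
        return False
```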
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr @sgugger | 08-26-2022 17:22:07 | 08-26-2022 17:22:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not sure the reason of CI failure. It seems not relevant to this PR.<|||||>@LysandreJik thanks for the review and sure we could wait for @sgugger.
Meanwhile, do I need to do anything to fix the CI failure?<|||||>Thanks for bearing with us. The test failures are spurious and unrelated, so we can merge this. |
transformers | 18,776 | closed | TF: Can't create sharded XGLM model | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.8.13
- Huggingface_hub version: 0.9.0
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (gpu)
- Jax version: 0.3.5
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running this CLI command
```
CUDA_VISIBLE_DEVICES="" TOKENIZERS_PARALLELISM=false NVIDIA_TF32_OVERRIDE=0 transformers-cli pt-to-tf --model-name facebook/xglm-2.9B --new-weights --max-error 3e-3
```
Gets you the following exception (in the sharding code)
```
Traceback (most recent call last):
File "/home/joao/hf/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/home/joao/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/home/joao/transformers/src/transformers/commands/pt_to_tf.py", line 309, in run
tf_from_pt_model.save_pretrained(self._local_dir)
File "/home/joao/transformers/src/transformers/modeling_tf_utils.py", line 2020, in save_pretrained
param_dset = shard_file.create_dataset(
File "/home/joao/hf/lib/python3.8/site-packages/h5py/_hl/group.py", line 161, in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
File "/home/joao/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py", line 156, in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5d.pyx", line 84, in h5py.h5d.create
TypeError: expected bytes, str found
```
### Expected behavior
Successful sharding :D | 08-26-2022 16:10:57 | 08-26-2022 16:10:57 | cc @ArthurZucker <|||||>Hey! Little update on this: the problem comes from the previously introduced "hack":
```python
return tf.Variable(emb, trainable=False, name="model.embed_positions.weights")
```
This appears [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/xglm/modeling_tf_xglm.py#L86). The same hack can also be seen in [BART](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_tf_bart.py#L1036-L1038).
In order to introduce as few breaking changes as possible, I think we can add the following:
```python
if "model." in layer.name : # potentially all models that have the hack will have model. something"
param_dset = shard_file.create_dataset(
".".join(layer.name.split(".")[1:]), layer.numpy().shape, dtype=layer.numpy().dtype
)
```
I think we have to keep the "." separation for consistency.
I'll see if I can open a PR for this soon; a sketch of the renaming idea is below.
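To make the intended transformation concrete, here is a hypothetical standalone version of that renaming idea (the function name and exact condition are mine, not from the codebase):
```python
def shard_dataset_name(layer_name: str) -> str:
    # Weights created by the positional-embedding hack are named e.g.
    # "model.embed_positions.weights"; dropping the leading "model." keeps the
    # HDF5 dataset names consistent with the other sharded weights.
    parts = layer_name.split(".")
    return ".".join(parts[1:]) if parts[0] == "model" else layer_name


assert shard_dataset_name("model.embed_positions.weights") == "embed_positions.weights"
assert shard_dataset_name("embeddings.weight") == "embeddings.weight"
```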
|
transformers | 18,775 | closed | fix missing block when there is no failure | # What does this PR do?
Currently, when the doc tests have no failures, we don't see anything other than `🤗 Results of the doc tests.` in the Slack report.
This is because the block defined in the `no_failures` property is not included in `def payload`, as the condition `self.no_failures == 0` should be `self.n_failures == 0`. | 08-26-2022 15:31:33 | 08-26-2022 15:31:33 | @amyeroberts I requested you as a reviewer just in case you want to take a look.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,774 | closed | LayoutXLMProcessor: Enforce using "return_overflowing_tokens" with "return_offsets_mapping" | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/18726
Code to reproduce the error: https://colab.research.google.com/drive/1ETpz8UP42r7HjRg4VUkC7L8ou10qY3bQ?usp=sharing
Specific combination that caused the error: `LayoutXLMProcessor` with `use_fast=False`, `return_overflowing_tokens=True` and `return_offsets_mapping=False`
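For reference, a minimal sketch of the kind of guard this PR enforces (the helper name is hypothetical; the check mirrors what the LayoutLMv3 processor already does):
```python
def check_overflowing_args(return_overflowing_tokens: bool, return_offsets_mapping: bool) -> None:
    # The offsets mapping is needed to re-align words and boxes with the overflowing tokens,
    # so asking for overflow without offsets cannot be supported.
    if return_overflowing_tokens and not return_offsets_mapping:
        raise ValueError(
            "You cannot return overflowing tokens without returning the offsets mapping."
        )


check_overflowing_args(return_overflowing_tokens=True, return_offsets_mapping=True)    # ok
# check_overflowing_args(return_overflowing_tokens=True, return_offsets_mapping=False)  # raises ValueError
```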
## Who can review?
@NielsRogge
| 08-26-2022 11:18:43 | 08-26-2022 11:18:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, although it doesn't include the same changes as #17092, could you add those?<|||||>> Thanks, although it doesn't include the same changes as #17092, could you add those?
The additional changes in the other PR were already present, I added the missing ones, see the pictures below
This PR:

The other PR:

<|||||>Oh yeah, just realized I added that myself when adding LayoutLMv3. Thanks! I'll add another reviewer before merging<|||||>Btw, one issue I have with this is that if we're using the processor with `use_fast=False`, we won't be allowed to process data with `return_overflowing_tokens=True`, because this will force us to set `return_offsets_mapping=True`, and doing that will raise this error:
```
NotImplementedError: return_offset_mapping is not available when using Python tokenizers.
To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.
More information on available tokenizers at https://github.com/huggingface/transformers/pull/2674
```
I was able to get 1-to-1 token mappings with the non-fast tokenizer but in a very hacky way. If I find a better way, I'll suggest doing that to support `return_overflowing_tokens` with regular tokenizers.
_N.B: This is the same case with LMv2, not sure about LMv3_ |
transformers | 18,773 | closed | Training loss of BART is going to nan in transformers>=4.21.0 | ### System Info
transformers==4.20.1 and transformers>=4.21.0
torch==1.12.1
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi, I'm using a huge dataset, so it is hard to show how to reproduce my problem.
I'm using a [pre-trained BART model](https://huggingface.co/gogamza/kobart-base-v1) and trying to fine-tune it as a translation model. However, the training loss is calculated completely differently depending on the transformers version.
My pseudo code looks like this:
```python
net = BartForConditionalGeneration.from_pretrained("gogamza/kobart-base-v1").to(rank)
net.train()
with amp.autocast(enabled=True):
output = net(
input_ids=input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
)
# draw graphs of output.loss
```
I drew the graph below (training loss by iteration) using wandb.

`effortless-water-23` (green): `transformers>=4.21.0`
`swept-tree-24` (pink): `transformers==4.20.1`
`swept-tree-24` slowly converged toward zero, but `effortless-water-23` eventually became NaN after 80k+ iterations (the graph above doesn't show that part).
I've looked into the differences between `transformers>=4.21.0` and `transformers==4.20.1`, especially around BART, and I'm suspicious of [this part](https://github.com/huggingface/transformers/commit/d3cb28886ac68beba9a6646b422a4d727b056c0c#diff-5b256a6fcfe7956b744dc1e902f150a79e2692b822a6dbac2c6612f8690846b6).
So when I reverted [that part](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L96) in `transformers>=4.21.0` like this:
```python
# mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf")))
```
the problem was gone (the result is the same as `swept-tree-24`, which used `transformers==4.20.1`).
Anyway, my problem is solved, but I'm wondering what the real cause of the problem is.
Thanks in advance.
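Not necessarily the root cause here, but as a rough illustration (plain PyTorch, no transformers code) of how the two sentinel values above behave differently once fp16 arithmetic and softmax are involved:
```python
import torch

neg_min = torch.finfo(torch.float16).min                       # -65504.0, the most negative finite fp16 value
summed = torch.tensor(neg_min, dtype=torch.float16) + torch.tensor(neg_min, dtype=torch.float16)
print(summed)                                                   # -inf: adding two such masks overflows the fp16 range

print(torch.softmax(torch.full((4,), neg_min), dim=-1))         # all equal finite values -> uniform 0.25 weights
print(torch.softmax(torch.full((4,), float("-inf")), dim=-1))   # fully -inf row -> nan (softmax is undefined)
```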
### Expected behavior
I explained this in the reproduction section above. | 08-26-2022 10:43:28 | 08-26-2022 10:43:28 | This was introduced as part of https://github.com/huggingface/transformers/pull/17306
cc @ydshieh <|||||>Hi @soocheolnoh.
My best guess is that #17306 is not working well with FP16 training (as I see you use `amp`).
- Is it possible for you to share your modified training script, together with the training arguments?
If you have enough computational resource:
- I know you mentioned you run on a huge dataset. Would it be possible for you to check whether the issue can be reproduced with a smaller dataset (ideally, one available on [HF datasets](https://huggingface.co/datasets))?
- Could you also try with FP32 training and see if the issue appears too?
(You don't need to get NaN - just a training loss graph showing the issue would be enough)
Thank you in advance!
<|||||>Thanks for the response! @ydshieh
In my experience, both fp16 and fp32 got nan.

These experiments were run using `transformers>=4.21.0`; I don't have any results with `transformers==4.20.1` under the same conditions as the graph above.
Currently I don't have any compute resources, so I can't try HF datasets right now. But when I get them, I will try it and see if the same issue happens.
<|||||>@soocheolnoh Thank you for the detailed information. I will find some time to investigate this issue. If you are able to share any further findings, it would be very helpful.
Still wondering if it's possible for you to share your modified training script, together with the training arguments? Otherwise I will try to reproduce with the official training script and a HF dataset.<|||||>@ydshieh
When I get GPUs and reproduce the issue again (with my private dataset/an HF dataset), I will extract the essential parts of my training script and share it as soon as possible. Currently my scripts aren't a single file and I'm working in a private repository, so it's hard to share the original scripts.<|||||>No problem @soocheolnoh ! Thanks a lot.<|||||>Hi, @ydshieh
After several experiments, I found the reason for the issue. Since I'm a bit of a beginner at NLP tasks, I had used the decoder start token (=`decoder_prefix` in the args) incorrectly. The model I used was originally pretrained with \<s> as the decoder start token, but I used \<pad> as the decoder start token, which I think led to an unexpected result.
My training code `train.py` is here:
```python
import argparse
import random
import warnings
import numpy as np
import torch
import torch.cuda.amp as amp
import torch.distributed as dist
import torch.multiprocessing as mp
from datasets import load_dataset
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import AdamW
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast
from transformers.optimization import get_constant_schedule
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--fp16", action="store_true")
parser.add_argument("--use_wandb", action="store_true")
parser.add_argument("--wandb_project", default="hg-kobart", type=str)
parser.add_argument("--num_gpus", default=3, type=int)
parser.add_argument("--batch_per_gpu", default=24, type=int)
parser.add_argument("--workers_per_gpu", default=6, type=int)
parser.add_argument("--epochs", default=1, type=int)
parser.add_argument("--decoder_prefix", default="<pad>", type=str)
args = parser.parse_args()
return args
def seed_everything(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
class TranslationDataset(Dataset):
def __init__(self, tokenizer, dataset_path, dataset_split, decoder_prefix, max_len=512, ignore_id=-100):
super().__init__()
self.tokenizer = tokenizer
self.decoder_prefix = decoder_prefix
self.max_len = max_len
self.ignore_id = ignore_id
self.pad_id = tokenizer.pad_token_id
self.dataset = load_dataset(dataset_path, split=dataset_split)
def __len__(self):
return len(self.dataset)
def _pad(self, input, padding_value=-100):
return [*input, *(padding_value,) * (self.max_len - len(input))]
def __getitem__(self, idx):
data = self.dataset[idx]
input = self.tokenizer(
data["premise_original"] + "</s>", padding="max_length", truncation=True, max_length=self.max_len
)
decoder_input = self.tokenizer(
self.decoder_prefix + data["premise"] + "</s>",
padding="max_length",
truncation=True,
max_length=self.max_len,
)
output = self.tokenizer(data["premise"] + "</s>")
output = self._pad(output["input_ids"])
return {
"input_ids": np.array(input["input_ids"], dtype=np.int64),
"decoder_input_ids": np.array(decoder_input["input_ids"], dtype=np.int64),
"labels": np.array(output, dtype=np.int64),
}
def train(rank, world_size, args):
dist.init_process_group(backend="nccl", init_method="tcp://127.0.0.1:33445", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
seed_everything(42 + rank)
warnings.filterwarnings("ignore")
tokenizer = PreTrainedTokenizerFast.from_pretrained("gogamza/kobart-base-v1")
net = BartForConditionalGeneration.from_pretrained("gogamza/kobart-base-v1").to(rank).train()
net = DDP(net, device_ids=[rank], find_unused_parameters=False)
param_optimizer = list(net.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{"params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], "weight_decay": 0.01},
{"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=3e-5)
scheduler = get_constant_schedule(optimizer)
scaler = amp.GradScaler(enabled=args.fp16)
dataset = TranslationDataset(
tokenizer=tokenizer,
dataset_path="MoritzLaurer/multilingual-NLI-26lang-2mil7",
dataset_split="ko_wanli",
decoder_prefix=args.decoder_prefix,
)
sampler = DistributedSampler(dataset, shuffle=True, drop_last=True)
loader = DataLoader(
dataset,
batch_size=args.batch_per_gpu,
num_workers=args.workers_per_gpu,
pin_memory=True,
drop_last=True,
sampler=sampler,
persistent_workers=True,
)
if args.use_wandb and rank == 0:
import wandb
wandb.init(project=args.wandb_project)
wandb.watch(net)
for epoch in range(args.epochs):
if rank == 0:
print(f"Epoch {epoch+1}/{args.epochs}")
pbar = tqdm(loader) if rank == 0 else loader
for data in pbar:
input_ids = torch.tensor(data["input_ids"]).to(dtype=torch.int64, device=rank)
attention_mask = torch.ne(input_ids, tokenizer.pad_token_id).to(dtype=torch.float32)
decoder_input_ids = torch.tensor(data["decoder_input_ids"]).to(dtype=torch.int64, device=rank)
decoder_attention_mask = torch.ne(decoder_input_ids, tokenizer.pad_token_id).to(dtype=torch.float32)
labels = torch.tensor(data["labels"]).to(dtype=torch.int64, device=rank)
optimizer.zero_grad()
with amp.autocast(enabled=args.fp16):
output = net(
input_ids=input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
labels=labels,
)
scaler.scale(output.loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(net.parameters(), 1.0)
scaler.step(optimizer)
scaler.update()
scheduler.step()
if rank == 0:
pbar.set_description(f"Loss: {output.loss.item():.4f}")
if args.use_wandb and rank == 0:
wandb.log({"train loss": output.loss.item()})
if __name__ == "__main__":
args = get_args()
mp.spawn(train, nprocs=args.num_gpus, args=(args.num_gpus, args))
```
I used 3 * V100 (one node), torch==1.12.1+cu113, and fp16. I manually added special tokens to the data since the original tokenizer doesn't post-process it. The results of `python train.py --fp16 --decoder_prefix <s>` and `python train.py --fp16 --decoder_prefix <pad>` with the different transformers versions are below:

As the results show, if I correctly set `decoder_prefix` to \<s>, the results are exactly the same across versions, but if I use \<pad>, the results differ. I'm not sure why they differ (or whether a code fix is needed), but in any case the main reason was an incorrect use of `decoder_prefix`.
Sorry if I bothered you. Thanks!
<|||||>@soocheolnoh I am very happy that you found the cause and a solution! Also appreciate a lot your effort on the detailed issue description and further investigations!
It's better to use the same decoder start token as the one used in pretraining. Using the pad token might work in some cases, but we should be very careful. In the case of this issue, I believe it might be related to the decoder attention mask. I saw you have
```python
decoder_attention_mask = torch.ne(decoder_input_ids, tokenizer.pad_token_id)
```
So when you used pad token as decoder start token, you prepared a (decoder) attention mask that ignores the decoder start token (which shouldn't be ignored).
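A tiny made-up illustration of that point (the token ids are arbitrary):
```python
import torch

pad_id = 3
# pad token (ab)used as the decoder start token, plus right padding at the end
decoder_input_ids = torch.tensor([[3, 15, 27, 9, 3, 3]])

mask = torch.ne(decoder_input_ids, pad_id).long()
print(mask)      # tensor([[0, 1, 1, 1, 0, 0]]) -- position 0, the start token, gets masked out

mask[:, 0] = 1   # one way to keep the start position visible even though its id equals pad
print(mask)      # tensor([[1, 1, 1, 1, 0, 0]])
```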
Of course, this is just one observation that might be related.
I am going to close the issue. Don't hesitate to reopen if you still have further questions.<|||||>Well, just a remark: you mentioned reverting
```python
# mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf")))
```
also avoided the issue. This might suggest that using `-inf` is more robust to this problem, and it is worth further investigation on our side.<|||||>Thank you! @ydshieh |
transformers | 18,772 | closed | Fix incomplete outputs of FlaxBert | # What does this PR do?
FlaxBertEncoder seems to return an incomplete output (only `hidden_states`) when it returns a tuple (i.e. with `return_dict=False`).
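For context, a hedged sketch of the usual tuple-return pattern (the names here are illustrative, not copied from the actual fix):
```python
from typing import Any, Optional, Tuple


def to_tuple_outputs(
    hidden_states: Any,
    all_hidden_states: Optional[Tuple] = None,
    all_attentions: Optional[Tuple] = None,
) -> Tuple:
    # Return every collected output, not only the final hidden states; entries that were
    # not requested (still None) are dropped, matching the documented tuple layout.
    return tuple(v for v in (hidden_states, all_hidden_states, all_attentions) if v is not None)


print(to_tuple_outputs("h"))                # ('h',)
print(to_tuple_outputs("h", ("h0", "h1")))  # ('h', ('h0', 'h1'))
```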
## Who can review?
@sanchit-gandhi @patrickvonplaten
| 08-26-2022 07:59:45 | 08-26-2022 07:59:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,771 | closed | Fix import error of tf 2.4.1 on TPU. | # What does this PR do?
The import of https://github.com/huggingface/transformers/blob/06a6a4bd516f7d0ba7c4966a2d3d9c0bf07797ae/src/transformers/modeling_tf_utils.py#L39
should be
```python
from tensorflow.python.keras.saving.hdf5_format import save_attributes_to_hdf5_group
```
Otherwise, models like the TensorFlow CLIP implementation can't be imported with TF 2.4.1 on TPU (the Kaggle environment).
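For reference, a hedged sketch of a version-tolerant import; the first path is my assumption about the newer/public-facing import, and the fallback is the private path that works on TF 2.4.1:
```python
try:
    # assumed default on newer setups where Keras ships as a separate package
    from keras.saving.hdf5_format import save_attributes_to_hdf5_group
except ImportError:
    # fallback for older TF builds (e.g. 2.4.1 on TPU/Kaggle) that only expose the private path
    from tensorflow.python.keras.saving.hdf5_format import save_attributes_to_hdf5_group
```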
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-26-2022 03:07:20 | 08-26-2022 03:07:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger
Could you please take a look at this? It's a very short fix.<|||||>@gante, could you take a look at this?<|||||>It seems there is an issue with your CircleCI permissions; the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Hey @innat -- the change looks good, but we still want to default to the public namespace.
Can you wrap the import in a try/except block? (try importing the original, and import the line you added if it fails)<|||||>@gante I think it's done. Please check.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @innat -- can you have a look at Lysandre's comment, to allow CI to run?
Alternatively, you can follow [this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to give me write access over this PR, and I'll take care of the rest :) |
transformers | 18,770 | closed | Possible issue with learning rate scheduler while using Fairscale in Seq2Seq Trainer | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-1078-aws-x86_64-with-glibc2.27
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 8 GPUs
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@patrickvonplaten, @Narsil I am running a summarization task on the MIMIC dataset using T5 v1.1-large. I am using a p3.16x EC2 instance, which has 8 V100 GPUs with 16 GB each. I am using the Seq2Seq trainer with the config below. I am getting the warning shown in the screenshot pasted below; turning off fairscale removes it. Thank you once again for your help.
<img width="1026" alt="Screen Shot 2022-08-25 at 4 13 40 PM" src="https://user-images.githubusercontent.com/25961440/186759804-312d725a-6910-48e1-ab71-ec531557a8d1.png">
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the MIMIC-III dataset for summarization. Other than some light data preprocessing, everything is unchanged and I follow the exact same functions as in this tutorial https://huggingface.co/docs/transformers/tasks/summarization
```python
# arguments below
seed=42
max_input_length = 512
max_target_length = 128
batch_size = 1
num_train_epochs = 5
gradient_accumulation_steps = 8
logging_steps = 91544 // batch_size #hardcoded to the length of mimic training set
learning_rate=3e-4
per_device_train_batch_size=batch_size
per_device_eval_batch_size=batch_size
weight_decay=0.01
save_total_limit=3
optim=Adafactor
warmup_steps = 1000
model_checkpoint = "google/t5-v1_1-large"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
args = Seq2SeqTrainingArguments(
output_dir="/home/ubuntu/summarization_models/t5_large/",
evaluation_strategy="epoch",
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=weight_decay,
save_total_limit=save_total_limit,
num_train_epochs=num_train_epochs,
predict_with_generate=True,
logging_steps=logging_steps,
push_to_hub=False,
gradient_accumulation_steps=gradient_accumulation_steps,
optim='adafactor',
max_grad_norm=1.0,
ddp_find_unused_parameters=True,
warmup_steps=warmup_steps,
fp16 = True,
sharded_ddp = 'zero_dp_2'
)
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
```
### Expected behavior
Expected behavior: Normal training. No warning message. | 08-25-2022 20:21:07 | 08-25-2022 20:21:07 | Hi @arijitthegame ,
I have no idea what fairscale is, and the stack trace you provided is quite incomplete.
Do you mind trying to set up a small script that is reproducible on a simple desktop/laptop so we can reproduce it (not with t5-large)?
Otherwise, reading the warning, I think it shouldn't matter that much; at worst you're missing one step of the scheduler. It shouldn't change the end result for your model; it only matters if you want reproducible results with fixed seeds and so on.
Thank you <|||||>Hi @Narsil,
Thank you so much for your quick reply. Fairscale is a library developed by Meta for DDP + sharding and is integrated into the HF Trainer. Let me attach an MWE. The same warning appears for T5-base and T5-small.
```py
from datasets import load_dataset
import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers import DataCollatorForSeq2Seq
from transformers import Adafactor
import evaluate
import nltk
from nltk.tokenize import sent_tokenize
seed=42
max_input_length = 1024
max_target_length = 128
batch_size = 4
num_train_epochs = 20
gradient_accumulation_steps = 1
logging_steps = 989 // batch_size #hardcoded to the length of training set
learning_rate=3e-4
per_device_train_batch_size=batch_size
per_device_eval_batch_size=batch_size
weight_decay=0.01
save_total_limit=3
optim=Adafactor
billsum = load_dataset("billsum", split="ca_test")
billsum = billsum.train_test_split(test_size=0.2)
rouge_score = evaluate.load("rouge")
model_checkpoint = "google/t5-v1_1-base"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
prefix = "summarize: " # not important for a single tasking model
def preprocess_function(examples):
inputs = [prefix + doc for doc in examples["text"]]
model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples["summary"], max_length=128, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def compute_metrics(eval_pred):
predictions, labels = eval_pred
# Decode generated summaries into text
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
# Decode reference summaries into text
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# ROUGE expects a newline after each sentence
decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
# Compute ROUGE scores
result = rouge_score.compute(
predictions=decoded_preds, references=decoded_labels, use_stemmer=True
)
# Extract the scores
result = {key: value * 100 for key, value in result.items()}
return result
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [label.strip() for label in labels]
# ROUGE expects a newline after each sentence
preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds]
labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels]
return preds, labels
tokenized_dataset = billsum.map(preprocess_function, batched=True)
tokenized_dataset = tokenized_dataset.remove_columns(
billsum["train"].column_names
)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)
args = Seq2SeqTrainingArguments(
output_dir="/home/ubuntu/summarization_models/test/",
evaluation_strategy="epoch",
learning_rate=learning_rate,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=weight_decay,
save_total_limit=save_total_limit,
num_train_epochs=num_train_epochs,
predict_with_generate=True,
logging_steps=logging_steps,
push_to_hub=False,
gradient_accumulation_steps=gradient_accumulation_steps,
optim='adafactor',
max_grad_norm=1.0,
ddp_find_unused_parameters=True,
# warmup_steps=warmup_steps,
fp16 = True,
sharded_ddp = 'zero_dp_2'
)
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
data_collator=data_collator,
tokenizer=tokenizer,
# compute_metrics=compute_metrics, # get different issue when uncommenting this line HALP!!
)
trainer.train()
```
Turning on compute metrics gives a different error. I can open a separate issue for that. Thank you once again. <|||||>Hey @arijitthegame,
Sorry could you also attach the error message of the script you've posted above? Also does the error happen when you provide the `compute_metrics` function or not?
<|||||>Hi @patrickvonplaten,
Attached is the complete error message when turning on computing metrics. It crashes and does not train after a few steps.
```
Using cuda_amp half precision backend
***** Running training *****
Num examples = 989
Num Epochs = 20
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 620
5%|ββββ | 31/620 [00:12<02:59, 3.27it/s]***** Running Evaluation *****
Num examples = 248
Batch size = 4
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "test_trainer.py", line 117, in <module>
File "test_trainer.py", line 117, in <module>
File "test_trainer.py", line 117, in <module>
trainer.train()trainer.train()
Traceback (most recent call last):
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
trainer.train() File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
File "test_trainer.py", line 117, in <module>
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "test_trainer.py", line 117, in <module>
File "test_trainer.py", line 117, in <module>
File "test_trainer.py", line 117, in <module>
Traceback (most recent call last):
File "test_trainer.py", line 117, in <module>
trainer.train()
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
trainer.train()trainer.train()trainer.train()trainer.train()
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
return inner_training_loop(return inner_training_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
return inner_training_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
return inner_training_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
return inner_training_loop(return inner_training_loop(return inner_training_loop(return inner_training_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 1834, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2040, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 79, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2760, in evaluate
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
output = eval_loop(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer.py", line 2938, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
generated_tokens = self.model.generate(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)generated_tokens = self.model.generate(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
generated_tokens = self.model.generate(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
generated_tokens = self.model.generate(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 201, in prediction_step
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(generated_tokens = self.model.generate(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
generated_tokens = self.model.generate( File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
generated_tokens = self.model.generate(
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
generated_tokens = self.model.generate( File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
return func(*args, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
return func(*args, **kwargs)model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
return forward_call(*input, **kwargs)model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)return forward_call(*input, **kwargs) return forward_call(*input, **kwargs)
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
inputs_embeds = self.embed_tokens(input_ids)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
inputs_embeds = self.embed_tokens(input_ids)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
inputs_embeds = self.embed_tokens(input_ids)return forward_call(*input, **kwargs)
return forward_call(*input, **kwargs) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
return forward_call(*input, **kwargs) File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 936, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
inputs_embeds = self.embed_tokens(input_ids)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
inputs_embeds = self.embed_tokens(input_ids) return forward_call(*input, **kwargs)inputs_embeds = self.embed_tokens(input_ids)
inputs_embeds = self.embed_tokens(input_ids)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return forward_call(*input, **kwargs)
return forward_call(*input, **kwargs)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
return F.embedding(RuntimeError
: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory. File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return F.embedding(
return F.embedding(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse):
The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.RuntimeError
: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
5%|ββββ | 31/620 [00:12<04:01, 2.44it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 42297) of binary: /home/ubuntu/nlp_prompting/env/bin/python
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/nlp_prompting/env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test_trainer.py FAILED
------------------------------------------------------------
```<|||||>Without computing metrics, the model trains, but of course I cannot compute the ROUGE metrics with the code I posted above, and I get a warning about the scheduler.
<img width="1026" alt="Screen Shot 2022-08-25 at 4 13 40 PM" src="https://user-images.githubusercontent.com/25961440/186977962-62c53340-0270-4214-aedf-489e5a6e2f5d.png">
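For reference, a minimal `compute_metrics` sketch for ROUGE with a seq2seq setup is shown below. It is only an illustration under assumed names (the `t5-small` checkpoint stands in for whatever model the script in this thread uses); it is not the code posted above.

```python
# Illustrative only: a ROUGE compute_metrics for Seq2SeqTrainer with predict_with_generate.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint
rouge = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels use -100 for padding; swap it back before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)
```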
<|||||>Uff I don't really know what's going on here. The error seems to come from:
```
inputs_embeds = self.embed_tokens(input_ids)
```
But I don't know `fairscale` well enough here. Maybe cc @sgugger <|||||>Thank you so much. I believe it may be due to this note in https://huggingface.co/docs/transformers/main_classes/trainer#fairscale: `This feature is incompatible with --predict_with_generate in the run_translation.py script.`
This bug does not appear if I use `--fsdp full_shard` instead. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,769 | closed | Generate: `TFNoRepeatNGramLogitsProcessor` is now XLA-compatible | # What does this PR do?
This PR attempts to rewrite `TFNoRepeatNGramLogitsProcessor` so that it is also compatible with XLA.
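For context, a hedged sketch of the setting this targets is below: eager generation with `no_repeat_ngram_size` already works, and the XLA-compiled path is what this PR tried (and ultimately did not manage) to enable. The checkpoint and lengths are arbitrary.

```python
# Illustrative sketch; not code from this PR.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="tf")

# Eager generation: TFNoRepeatNGramLogitsProcessor is usable here.
out = model.generate(**inputs, no_repeat_ngram_size=2, max_new_tokens=16)
print(tokenizer.decode(out[0]))

# XLA-compiled generation: the path where the processor would need to be traceable.
xla_generate = tf.function(model.generate, jit_compile=True)
# xla_generate(**inputs, no_repeat_ngram_size=2, max_new_tokens=16)  # not enabled by this attempt
```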
The rewrite does pass the tests, but it has a huge memory footprint and is very slow (even with XLA). It also doesn't seem to work in practice. I'm not putting the PR through the review process, as the cost is not worth the benefits. | 08-25-2022 19:45:23 | 08-25-2022 19:45:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,768 | closed | Focus doc around preprocessing classes | This PR refocuses `preprocessing.mdx` around the preprocessor classes themselves instead of focusing on the different modalities. I think this puts more emphasis on what the library offers as opposed to focusing on something more generic like the different modalities. I know we're going to use `ImageProcessor` for image preprocessing soon, so I can update this again whenever that is ready. | 08-25-2022 18:33:10 | 08-25-2022 18:33:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,767 | closed | 'UserWarning: Module is put on CPU' when use FSDP by Accelerate | script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm_no_trainer.py
accelerate launch run_mlm_no_trainer.py \
--model_name_or_path bert-base-chinese \
--train_file $DATA_DIR/train.txt \
--validation_file $DATA_DIR/val.txt \
--line_by_line True \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--output_dir $MODEL_DIR/test-mlm


UserWarning: Module is put on CPU and will thus have flattening and sharding run on CPU, which is less efficient than on GPU. We recommend passing in `device_id` argument which will enable FSDP to put module on GPU device, module must also be on GPU device to work with `sync_module_states=True` flag which requires GPU communication.
**How can I solve this problem?**
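For reference, a hedged sketch of what the warning itself suggests when using FSDP directly (plain PyTorch, outside Accelerate) is shown below; `MyModel` is a placeholder module, not code from the script above.

```python
# Illustrative only: pass device_id so FSDP flattens/shards on GPU instead of CPU.
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

class MyModel(nn.Module):  # placeholder module
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return self.linear(x)

# Assumes torch.distributed has already been initialized by the launcher.
model = MyModel()
fsdp_model = FSDP(model, device_id=torch.cuda.current_device())
```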
| 08-25-2022 17:53:59 | 08-25-2022 17:53:59 | Maybe cc @pacman100 ? :)<|||||>Hello @scuyjzh , as mentioned in https://github.com/huggingface/accelerate/issues/659, you can safely ignore that warning as it is only during model initialization under FSDP. I will look into this later as and when time permits because this doesn't impact model correctness or performance gains through fsdp.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,766 | closed | [ASR Examples] Only filter training samples by audio length criterion | # What does this PR do?
We should only filter training samples by our audio length criterion when fine-tuning ASR systems. The eval and test sets should **not** be filtered. Doing so leads to partial datasets, and thus invalid results.
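As a rough illustration of the change (not the actual diff), filtering only the training split might look like the sketch below; the dataset, split names, and duration bounds are placeholders.

```python
# Illustrative only: apply the duration filter to the train split, leave eval untouched.
from datasets import DatasetDict, load_dataset

raw_datasets = DatasetDict({
    "train": load_dataset("librispeech_asr", "clean", split="train.100"),  # placeholder dataset
    "eval": load_dataset("librispeech_asr", "clean", split="validation"),
})

min_duration_in_seconds, max_duration_in_seconds = 2.0, 20.0

def is_audio_in_length_range(example):
    duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
    return min_duration_in_seconds <= duration <= max_duration_in_seconds

raw_datasets["train"] = raw_datasets["train"].filter(is_audio_in_length_range)
# raw_datasets["eval"] is deliberately left unfiltered so WER is computed on the full set.
```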
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @anton-l | 08-25-2022 16:39:52 | 08-25-2022 16:39:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18766). All of your documentation changes will be reflected on that endpoint.<|||||>I'm reluctant to add a cut-off for eval: in filtering the dataset by an audio length criterion, one removes the longest samples in the eval set. Evaluating on a partial eval set yields WER metrics that are not valid for academic purposes. We need to have the full eval set in order to compare WER figures between models. You can't compare a WER for a partial sub-set A with the WER for a different partial sub-set B!
Take the extreme example where you have a model that performs well on short samples <= 5s, and terribly for longer samples > 5s. Filtering with a cut-off of 5s would hide the fact the model breaks down for longer samples. You couldn't then compare this WER metric with another model where you've not filtered the eval samples, and thus included all the samples (including the 'harder' ones > 5s).
IMO, if not filtering eval samples poses an OOM threat, it has to be addressed by a reduction in batch size (chosen by the user depending on their model/hardware/dataset). WDYT @anton-l @patrickvonplaten?<|||||>More or less agree here - just in terms of backwards compatibility, I think this might lead to a couple of unexpected errors. Max length can prevent OOM and min length can prevent weird Conv layer doesn't have enough input values.
So I'd maybe prefer to instead add a `test_min_duration_in_seconds` and `test_max_duration_in_seconds` here, let it default to `min_duration_in_seconds` and `max_duration_in_seconds` (e.g. set to None and if None, set to `min_duration_in_seconds`) and add a warning that this will be changed in the next versions.<|||||>This fine-tuning script is actually used quite a bit and usually the cutting doesn't make a huge difference (also note in the examples, we're not looking for the "perfect" script, but rather an example of how ctc training could be done)
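A hedged sketch of the defaulting behaviour suggested above is shown below; the argument names follow that suggestion, while the dataclass shape is an assumption rather than the script's actual code.

```python
# Illustrative only: new test_* duration arguments that default to the existing ones.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTrainingArguments:
    min_duration_in_seconds: float = field(default=0.0)
    max_duration_in_seconds: float = field(default=20.0)
    # New, backwards compatible: None means "fall back to the train-side values".
    test_min_duration_in_seconds: Optional[float] = field(default=None)
    test_max_duration_in_seconds: Optional[float] = field(default=None)

    def __post_init__(self):
        if self.test_min_duration_in_seconds is None:
            self.test_min_duration_in_seconds = self.min_duration_in_seconds
        if self.test_max_duration_in_seconds is None:
            self.test_max_duration_in_seconds = self.max_duration_in_seconds
```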
But overall ok with the PR, just think we should make it more backwards compatible<|||||>@sanchit-gandhi, let me know once you want to merge this :-) <|||||>Failing test unrelated - rebasing should fix this. Will request a review when the CI is green π’<|||||>Gently ping here @sanchit-gandhi :-) <|||||>Thanks for the ping, will address in the coming days! <|||||>Think this is worth fixing still, shouldn't take too long <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,765 | closed | Improve torch.fx support to trace non torch objects | # What does this PR do?
Improve `torch.fx` support to be able to trace non-tensor/parameter/module attributes. For example, the new tracing mechanism allows tracing attributes such as `config` and `num_heads`, which are important if one wants to build transformations on top.
Caveat: tracing such objects prevents `torch.jit.script` from working correctly due to limitations on their end. Consequently, we gate the feature behind a `trace_non_torch_object` flag at trace time, which defaults to `False`. (We could default to `True`, but that would be a breaking change.)
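A hedged usage sketch follows; `symbolic_trace` does exist in `transformers.utils.fx`, but the `trace_non_torch_object` argument is this PR's proposal and not necessarily part of any released API.

```python
# Illustrative only: opt in to tracing config / num_heads style attributes.
from transformers import GPT2LMHeadModel
from transformers.utils.fx import symbolic_trace

model = GPT2LMHeadModel.from_pretrained("gpt2")
traced = symbolic_trace(
    model,
    input_names=["input_ids", "attention_mask"],
    trace_non_torch_object=True,  # proposed flag; defaults to False to stay backwards compatible
)
print(traced.graph)
```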
```
E AssertionError: Could not TorchScript the traced model:
E Module 'GraphModule' has no attribute 'config' (This attribute exists on the Python module, but we failed to convert Python type: 'transformers.models.gpt2.configuration_gpt2.GPT2Config' to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type GPT2Config.. Its type was inferred; try adding a type annotation for the attribute.):
E File "<eval_with_key>.0", line 5
E def forward(self, input_ids : torch.Tensor, token_type_ids : torch.Tensor):
E config = self.config
E ~~~~~~~~~~~ <--- HERE
E config_1 = self.config
E config_2 = self.config
``` | 08-25-2022 15:06:32 | 08-25-2022 15:06:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18765). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,764 | closed | Run tests if skip condition not met | # What does this PR do?
Some tests that had `unittest.skipIf` decorators weren't calling the underlying test when the skip condition wasn't met. This meant some tests showed as passing even though they weren't actually being run.
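A hedged illustration of the failure mode (not the literal tests touched by this PR) is sketched below: when the real checks are also guarded inside the test body, the test can "pass" without running anything even though its skip condition was not met.

```python
# Illustrative only: a skipIf-decorated test whose body hides the real checks behind another guard.
import unittest

CONDITION = False  # e.g. "running on an unsupported framework version"

class ExampleTest(unittest.TestCase):
    @unittest.skipIf(CONDITION, "not applicable on this setup")
    def test_feature(self):
        if CONDITION:
            self._run_real_checks()
        # When CONDITION is False the body falls through, so the test is
        # reported as passing although nothing was actually verified.

    def _run_real_checks(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
```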
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 08-25-2022 14:41:18 | 08-25-2022 14:41:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Rocketknight1 <|||||>@gante I removed the `(<=2.8)` from all grouped conv comments in the repo |
transformers | 18,763 | closed | Improving the documentation for "word", within the pipeline. | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18738#issuecomment-1227279234
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-25-2022 14:24:26 | 08-25-2022 14:24:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,762 | closed | [ViLT] Remove ViltForQuestionAnswering from check_repo | # What does this PR do?
This PR removes `ViltForQuestionAnswering` from the IGNORE_NON_AUTO_CONFIGURED mapping, because ViLT is now supported by the `AutoModelForVisualQuestionAnswering` class (and corresponding VQA pipeline).
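For illustration, a short usage sketch of the pipeline this PR relies on is shown below; the checkpoint and image URL are just examples.

```python
# Illustrative only: ViLT served through the visual-question-answering pipeline.
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)
print(result)
```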
cc @sgugger: `make fixup` currently doesn't check this; would it be possible to automate it? | 08-25-2022 12:36:44 | 08-25-2022 12:36:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,761 | closed | AttributeError: 'T5Config' object has no attribute 'exponential_decay_length_penalty' | File "c:\users\meet vora\appdata\local\programs\python\python38\lib\site-packages\streamlit\scriptrunner\script_runner.py", line 443, in _run_script
exec(code, module.__dict__)
File "C:\Users\MEET VORA\Desktop\fiver\ui.py", line 140, in <module>
pred_summary = summarizeText(input_text)
File "C:\Users\MEET VORA\Desktop\fiver\ui.py", line 120, in summarizeText
generated_ids = model.generate(
File "c:\users\meet vora\appdata\local\programs\python\python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:\users\meet vora\appdata\local\programs\python\python38\lib\site-packages\transformers\generation_utils.py", line 1259, in generate
logits_processor = self._get_logits_processor(
File "c:\users\meet vora\appdata\local\programs\python\python38\lib\site-packages\transformers\generation_utils.py", line 727, in _get_logits_processor
else self.config.exponential_decay_length_penalty
File "c:\users\meet vora\appdata\local\programs\python\python38\lib\site-packages\transformers\configuration_utils.py", line 257, in __getattribute__
return super().__getattribute__(key) | 08-25-2022 11:57:32 | 08-25-2022 11:57:32 | Hi,
It's pretty hard for us to figure out what the error is when there's no code provided that can reproduce your issue.<|||||>Can I send you the code?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,760 | closed | Wip_doc_deletion | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-25-2022 08:16:08 | 08-25-2022 08:16:08 | _The documentation is not available anymore as the PR was closed or merged._ |