repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 21,874 | closed | fix checkpoint | # What does this PR do?
Uses the correct checkpoints for doctests | 03-01-2023 13:16:31 | 03-01-2023 13:16:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,873 | closed | Add TFVisionTextDualEncoder | This PR uses the new weight crossloading functions to add the missing `TFVisionTextDualEncoder` class. | 03-01-2023 12:54:17 | 03-01-2023 12:54:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good spot, I've added your suggestions and I'll add the modeling file to the documentation check list!<|||||>The failing test is unrelated (OPT generation), merging! |
transformers | 21,872 | closed | Removed BLIP mention from the troubleshooting guide | Now that BLIP has an AutoModel mapping, (see https://github.com/huggingface/transformers/pull/21817), this PR removes mention of BLIP's edge case from the troubleshooting guide. | 03-01-2023 12:36:32 | 03-01-2023 12:36:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,871 | closed | Italian translation of community.mdx | # What does this PR do?
Italian translation of community.mdx
See issue: https://github.com/huggingface/transformers/issues/17459
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @nickprock
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-01-2023 11:47:31 | 03-01-2023 11:47:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @lorenzobalzani , I'll review it in the next few days |
transformers | 21,870 | closed | Prophetnet batch dimension inversion fix | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17455
No longer mixes the batch dimension with other dimensions, makes it so that inverting the inputs along the batch dimension also inverts the outputs, and returns the same loss independent of the batch order.
Currently all tests pass (locally) except the integration tests, which I believe fail because of the issue at hand, as can be seen in this example. Essentially, the integration test for generation returns different outputs based on what other elements are in the batch; with this fix it returns the same output as with a batch of 1
[Colab notebook demonstrating the issue](https://colab.research.google.com/drive/12EAAbXZSemzvuoz5g_3WAe0sH1YAZUwk?usp=sharing)
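For readers skimming this record, a rough sketch of the batch-order invariance the fix restores is shown below. This is not code from the PR; `model` is assumed to be a ProphetNet conditional-generation model in eval mode, and the tensors are placeholder batched inputs.
```python
import torch

def check_batch_order_invariance(model, input_ids, attention_mask, decoder_input_ids, atol=1e-5):
    """Permuting the batch should permute the outputs in exactly the same way."""
    model.eval()  # dropout would otherwise break exact equality
    perm = torch.randperm(input_ids.shape[0])
    with torch.no_grad():
        out = model(input_ids=input_ids, attention_mask=attention_mask,
                    decoder_input_ids=decoder_input_ids).logits
        out_perm = model(input_ids=input_ids[perm], attention_mask=attention_mask[perm],
                         decoder_input_ids=decoder_input_ids[perm]).logits
    return torch.allclose(out[perm], out_perm, atol=atol)
```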
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten @patil-suraj
| 03-01-2023 10:31:28 | 03-01-2023 10:31:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada and @ArthurZucker <|||||>Hi @ArthurZucker, thanks for the feedback.
I've implemented the suggestions you mentioned, adding full-text notation of the expected tensor dimensions and separating tensor operations into multiple lines instead of chaining where requested<|||||>Thanks for the kind feedback @ArthurZucker.
Just for clarity before merging: I should update the integration tests as described in the attached colab. The current version generates different text based on the other elements in the batch, while the new version returns the same output as if generated with a batch size of 1<|||||>I've now updated the integration tests, they should pass now |
transformers | 21,869 | closed | [GPT-J] add deprecation warning | # What does this PR do?
Deprecating `position_ids` in GPTJ
Fixes #21114 | 03-01-2023 10:01:04 | 03-01-2023 10:01:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,868 | closed | [`Blip`] Fix blip doctest | # What does this PR do?
This PR fixes the Blip doctest that was failing with the changes proposed in https://github.com/huggingface/transformers/pull/21811
Link to failing job: https://github.com/huggingface/transformers/actions/runs/4299412591/jobs/7494589393
## Why is this fix relevant?
In #21811 the logic of the `BlipForConditionalGeneration` forward pass changed. If a user wants to use this as a standalone class and call `forward`, the text input must be fed to the text decoder to mimic the implementations of encoder-decoder architectures in `transformers`; check for instance what is done to properly call `forward` on `T5`: https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/models/t5/modeling_t5.py#L1641
Hence, the fix of the doctest should be to feed a text input to the decoder by adding a text argument to the processor.
cc @ydshieh @sgugger | 03-01-2023 09:29:33 | 03-01-2023 09:29:33 | _The documentation is not available anymore as the PR was closed or merged._ |
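As an illustration of the fix described in this record, a minimal sketch of calling the standalone class with a text input is shown below; the checkpoint name, image URL and prompt are illustrative choices, not taken from the PR.
```python
import requests
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# passing `text=` gives the text decoder its input ids, as the fix requires
inputs = processor(images=image, text="a photography of", return_tensors="pt")
outputs = model(**inputs)
```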
transformers | 21,867 | closed | Flax Regnet | # What does this PR do?
Flax Implementation of [facebook/regnet-y-040](https://huggingface.co/facebook/regnet-y-040)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- Flax: sanchit-gandhi | 03-01-2023 08:58:14 | 03-01-2023 08:58:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sanchit-gandhi This is now ready for your review, thanks a lot for your time.<|||||>@sanchit-gandhi All the requested changes have been made and looks ready for next iteration of review, thanks a lot for your time. |
transformers | 21,866 | closed | Fix gradient checkpointing bug Bart | # What does this PR do?
This PR fixes a bug that users can encounter when using `generate` with models that use gradient checkpointing.
Fixes issue #21737 for Bart.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (#21737)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @younesbelkada, @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-01-2023 08:07:22 | 03-01-2023 08:07:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your great work!
Can you please run `make fix-copies`? After that we should be good to merge |
transformers | 21,865 | closed | Running summarization with default model fails. 4.27.0.dev0 | ### System Info
When running examples/tensorflow/summarization/run_summarization.py
as given in README,
python run_summarization.py \
--model_name_or_path facebook/bart-base \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
it fails as shown in the error screenshot attached to the original issue.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Copy script in README to a file
2. Run the script
### Expected behavior
Should run without issues as it is example given in README | 03-01-2023 04:17:17 | 03-01-2023 04:17:17 | cc @Rocketknight1 <|||||>Hi @jojivk73, thanks for the bug report! We've reproduced the issue - the cause is that the `transformers` library is currently transitioning to using standardized native Keras layers for as many purposes as possible, and deprecating the previous setup where we often had ad-hoc model-specific solutions.
One consequence of the transition is that BART's embeddings used to store their weights in `embeddings.weight`, but now that they've been swapped to a Keras `Embedding` layer, the weights are in `embeddings.embeddings`. We missed this issue in the example code during the transition, but we're preparing a PR to fix it immediately. I'll ping you as soon as it's ready.<|||||>@jojivk73 the PR is now up at #21881<|||||>@jojivk73 PR is merged. Please install the latest version from `main` and let me know if you have any other problems, and thanks again for the bug report! |
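A hedged illustration of the attribute change described in the comment above; only `embeddings.weight` and `embeddings.embeddings` come from the comment, the rest of the snippet is an assumption about how one might access the weights defensively across versions.
```python
from transformers import TFAutoModelForSeq2SeqLM

model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
embedding_layer = model.get_input_embeddings()

if hasattr(embedding_layer, "embeddings"):
    weights = embedding_layer.embeddings  # newer: standard Keras Embedding layer
else:
    weights = embedding_layer.weight      # older: ad-hoc layer storing the matrix in .weight
```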
transformers | 21,864 | closed | This line prevents us from using "std" scaling any more. | https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1549 | 03-01-2023 02:42:44 | 03-01-2023 02:42:44 | Hi,
Std scaling wasn't supported until #21020 was merged (only mean scaling is currently supported on the latest PyPi install). So if you install Transformers from source, you can use std scaling.
Cc @kashif <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@zhentaoxuttup were you able to use "std" scaling?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,863 | closed | Initialization of pytorch_utils' Conv1D takes a long time regardless of init_empty_weights when loading pretrained gpt2 | ### System Info
- `transformers` version: 4.26.1
- `accelerate` version: 0.16.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
gpt2-xl.py
```py
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained(
"gpt2-xl",
torch_dtype=torch.half,
low_cpu_mem_usage=True)
```
```
$ python -m cProfile -s tottime gpt2-xl.py | head
1264809 function calls (1209396 primitive calls) in 24.274 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
194 19.016 0.098 19.016 0.098 {method 'normal_' of 'torch._C._TensorBase' objects}
1256 2.083 0.002 2.083 0.002 {method '_set_from_file' of 'torch._C.StorageBase' objects}
2 0.684 0.342 0.684 0.342 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.355 0.178 0.355 0.178 {method 'read' of '_ssl._SSLSocket' objects}
```
https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/pytorch_utils.py#L105-L110
`w` is constructed on `device: cpu`, so `normal_` is actually computed.
This is problematic when loading pretrained gpt2 models with a larger number of parameters.
### Expected behavior
https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/src/transformers/modeling_utils.py#L2491-L2492
- by changing this line to `init_contexts.append(init_empty_weights(include_buffers=True))`
- `w` will be constructed in `device: meta` according to https://github.com/huggingface/accelerate/pull/699
- as a result, actual computation of `normal_` will be skipped and faster model loading time
```
$ python -m cProfile -s time gpt2-xl.py | head
1265651 function calls (1210238 primitive calls) in 4.692 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1256 1.766 0.001 1.766 0.001 {method '_set_from_file' of 'torch._C.StorageBase' objects}
2 0.684 0.342 0.684 0.342 {method 'do_handshake' of '_ssl._SSLSocket' objects}
2 0.357 0.178 0.357 0.178 {method 'read' of '_ssl._SSLSocket' objects}
2 0.342 0.171 0.342 0.171 {method 'connect' of '_socket.socket' objects}
```
Though, I don't know whether it's safe to set `include_buffers=True` for all models. | 03-01-2023 02:18:54 | 03-01-2023 02:18:54 | It would be easier to move the initialization after initializing the parameter (so doing `self.weight = nn.Parameter(torch.empty(nx, nf))` and then applying the normal init). Would you like to make a PR with this change?
Even better, the initialization should be completely left to the `_init_weights` method of the PreTrainedModel using Conv1D and not be present in this class at all, but it is a bit more work.<|||||>Thank you for your suggestion to reorder the initialization.
It makes sense to me. I'll make a PR soon. |
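A minimal sketch of the reordering suggested in the comments above, not the merged patch itself: the parameter is created from an uninitialized tensor first and the normal init is applied in place afterwards, so that an empty-weights/meta-device context can make the random fill cheap or a no-op.
```python
import torch
from torch import nn

class Conv1D(nn.Module):
    def __init__(self, nf, nx):
        super().__init__()
        self.nf = nf
        # register the parameter first, then initialize it in place
        self.weight = nn.Parameter(torch.empty(nx, nf))
        self.bias = nn.Parameter(torch.zeros(nf))
        nn.init.normal_(self.weight, std=0.02)
```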
transformers | 21,862 | closed | Very slow process when `torch_dtype` is passed. |
When I use `from_pretrained`, the model loads from my Azure cache at almost 1 Gbps, but when I specify `torch_dtype` the process slows to a crawl, about one tenth of the original speed. Looking at resource usage, it appears to be a single-core process that becomes the bottleneck.
Can we have that parallelized? | 03-01-2023 02:16:55 | 03-01-2023 02:16:55 | There is nothing we can do without a clear reproducer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,861 | closed | Make ZeroShotImageClassificationPipeline faster | # What does this PR do?
The pipeline makes separate calls to the model for each candidate label. This commit combines all labels into one call.
The original code takes more than 60 seconds to process one image and 1000 candidate labels. The updated code takes less than 2 seconds.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Who can review?
Library:
- pipelines: @Narsil | 03-01-2023 01:36:04 | 03-01-2023 01:36:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Have you found the `batch_size` argument, which should take care of that?
It's even better since with `batch_size` you can adjust the size of the batch independently of the number of candidate labels, which makes it easier to adapt relative to hardware/model size.
And you can have batch_size=1000 with only 3 candidate labels, they really are independent.<|||||>We tested the `batch_size` argument; it doesn't work as expected and takes a long time and a lot of memory.
In the pipeline `batch_size` separates candidate labels and runs one preprocess for each image/candidate_label pair.
We expect batching to happen over images, with all candidate_labels applied to each image.
```
pipe = transformers.pipeline(
task='zero-shot-image-classification',
model='openai/clip-vit-large-patch14-336',
framework='pt',
device="cuda:0"
)
with open('labels.json', 'r') as f:
l = f.read()
labels = json.loads(l)
res = pipe(images=['/home/user/cat_dog.jpg'], candidate_labels=labels[:250], batch_size=250)
```
It produces this:
```
OutOfMemoryError: CUDA out of memory. Tried to allocate 4.96 GiB (GPU 0; 10.76 GiB total capacity; 4.66
GiB already allocated; 4.98 GiB free; 4.71 GiB reserved in total by PyTorch) If reserved memory is >>
allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory
Management and PYTORCH_CUDA_ALLOC_CONF
```
Running this with main-branch transformers took 56 seconds vs. 2 seconds on fast-zero-shot-image.
```
res = pipe(images=['/home/user/cat_dog.jpg'], candidate_labels=labels[:1000], batch_size=100)
```
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ohh now I remember.
I dove a bit deeper into the code.
The issue lies within `CLIP` itself and how it works. There are essentially two batch sizes: the image batch size and the text batch size.
CLIP returns an object of size TEXT_BS * IMAGE_BS, which means in this case we're computing a cross product of what we really need.
In addition to that, the current pipeline batches the same image over and over (meaning it passes through the vision encoder several times).
What we want in an ideal world would be that the images get batched on their own and get their representation encoded, and independently so do the `candidate_labels` (since we're also calculating them way too many times currently, once for each image in the pipeline).
We **need** to keep `batch_size` functioning, which this PR currently silently breaks.
Now since this pipeline is only implemented for CLIP as of now, I think we can clean this up by breaking up the CLIP model into pieces. I'll try to figure out another solution.<|||||>I have created another PR with you as co-author to try and find a fix which could keep the performance you get here (potentially a bit better since I calculate candidate labels only once).
Would the other approach work for you ?
https://github.com/huggingface/transformers/pull/21897 <|||||>Closing in favor of #21897 |
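A hedged sketch of the single-forward-pass idea discussed in this thread; the checkpoint, image path, labels and prompt template below are placeholders. CLIP's `logits_per_image` has shape (num_images, num_texts), so one call scores every candidate label.
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat_dog.jpg")  # placeholder path
labels = ["cat", "dog", "remote control"]
texts = [f"This is a photo of {label}." for label in labels]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image  # (num_images, num_texts)
probs = logits_per_image.softmax(dim=-1)
```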
transformers | 21,860 | closed | Change the way tensor is reshaped in BartAttention (from .view to .reshape) | # What does this PR do?
Fixes #21813 | 02-28-2023 23:59:59 | 02-28-2023 23:59:59 | @younesbelkada For some reason fix-copies is not fixing the prophetnet copy; not sure how to fix this.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @raghavanone !
Hum very weird, can you try to merge this branch with the `main` branch of `transformers` and see if this fixes the issue?<|||||>> Hi @raghavanone ! Hum very weird, can you try to merge this branch with the `main` branch of `transformers` and see if this fixes the issue?
Indeed weird; it is already on top of the latest main. The stranger thing is that both of these checks pass on my machine.<|||||>Can you try `pip install --upgrade -e .["quality"]` + `make fixup` + `make fix-copies`? |
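For context on the title of this record, a small standalone illustration (not taken from the PR) of why `.view` can fail where `.reshape` succeeds on non-contiguous tensors:
```python
import torch

t = torch.arange(6).reshape(2, 3).transpose(0, 1)  # transpose makes the tensor non-contiguous
try:
    t.view(-1)
except RuntimeError as err:
    print("view failed:", err)
print(t.reshape(-1))  # reshape copies when needed, so it succeeds
```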
transformers | 21,859 | closed | [doc] deepspeed tests | added instructions on how to run deepspeed tests for deepspeed PR contributors. | 02-28-2023 22:14:57 | 02-28-2023 22:14:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,858 | closed | cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' | ### System Info
macbook pro m2 with anaconda, python 3.9
I'm running transformers on an m1 mac and am getting the following error when I import
`from transformers import OwlViTProcessor, OwlViTForObjectDetection`
File ~/opt/anaconda3/envs/nd1/lib/python3.9/site-packages/transformers/__init__.py:26
23 from typing import TYPE_CHECKING
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
46 logger = logging.get_logger(__name__) # pylint: disable=invalid-name
File ~/opt/anaconda3/envs/nd1/lib/python3.9/site-packages/transformers/dependency_versions_check.py:36
33 if pkg in deps:
34 if pkg == "tokenizers":
35 # must be loaded here, or else tqdm check may fail
---> 36 from .utils import is_tokenizers_available
...
---> 10 from charset_normalizer.md import mess_ratio
11 from charset_normalizer.models import CharsetMatches, CharsetMatch
12 from warnings import warn
AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install transformers on a macbook m2
from transformers import OwlViTProcessor, OwlViTForObjectDetection
### Expected behavior
it should import but instead it gives the above error message | 02-28-2023 21:56:56 | 02-28-2023 21:56:56 | Please run `transformers-cli env` and paste the results here, as requested in the issue template. In particular, do you have the tokenizers module installed and which version?<|||||>### System Info
macbook air m2 with anaconda, python 3.9
I got a similar bug :bug:
`ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'`
When I encountered this I used:
```
pip install chardet
```<|||||>> ### System Info
> macbook m2 with anaconda, python 3.9
>
> I got a similar bug `ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant'`
>
> When I encountered this I used:
>
> ```
> pip install chardet
> ```
Encountered the same error message when importing transformers. Installing chardet solved the issue.
Output for transformers-cli env
- `transformers` version: 4.28.0.dev0
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no<|||||>Could you please provide us with the full traceback? To potentially fix this, we need to know which module raises the error and neither Transformers nor Tokenizers import anything from charset directly.<|||||>```code
Traceback (most recent call last):
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/__init__.py", line 30, in <module>
from .generic import (
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/generic.py", line 29, in <module>
from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 32, in <module>
from . import logging
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/logging.py", line 35, in <module>
import huggingface_hub.utils as hf_hub_utils
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py", line 32, in <module>
from ._errors import (
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 3, in <module>
from requests import HTTPError, Response
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer/md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)
```<|||||>Ok so this looks like it stems from the `requests` so it may be worth raising the issue there (it looks like `from requests import HTTPError, Response` fails in your env, if you want a minimal reproducer).
@Wauplin We might also need to put something in the dependencies of `huggingface_hub` to have the `chardet` dep installed on MacOS?<|||||>@sgugger I'm not against adding the dependency but as you said, it really seems to be an issue on `requests` side that has nothing to do with `huggingface_hub`/`transformers` (except the fact we use `requests`). I would first try to:
1. isolate a minimal reproducible code. Maybe just 1 line is enough:
```py
from requests import HTTPError
# or
from requests import Response
```
2. list all installed deps (in particalar, `requests`, `charset` and `charset-normalizer`) + python version + os
3. open an issue on https://github.com/psf/requests
4. once that's done, open an issue in [huggingface_hub](https://github.com/huggingface/huggingface_hub) and decide what's the best solution (add chardet as deps for macos for example?)<|||||>Yes, we can try to have it solve in requests first indeed. It's if that takes too much time or is not deemed important we should fix it in hf hub.
@ani0075saha Could you try the two lines given by Wauplin and do step 2 and 3?<|||||>1.
```code
>>> from requests import HTTPError
Traceback (most recent call last):
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer/md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)
```
```code
>>> from requests import Response
Traceback (most recent call last):
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)
```
2.
```code
(huggingface-bug-test) anisaha1:~$ conda list
# packages in environment at /<redacted>/anaconda3/envs/huggingface-bug-test:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
brotlipy 0.7.0 py39h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.12.7 py39h06a4308_0
cffi 1.15.1 py39h5eee18b_3
charset-normalizer 3.1.0 pypi_0 pypi
cryptography 39.0.1 py39h9ce1e76_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.10.0 pypi_0 pypi
flit-core 3.6.0 pyhd3eb1b0_0
freetype 2.12.1 h4a9f257_0
giflib 5.2.1 h5eee18b_3
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
huggingface-hub 0.13.3 pypi_0 pypi
idna 3.4 py39h06a4308_0
intel-openmp 2021.4.0 h06a4308_3561
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.6.0.25 0 nvidia
libcurand 10.3.2.56 0 nvidia
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.16.0 h27cfd23_0
libtiff 4.5.0 h6a678d5_2
libunistring 0.9.10 h27cfd23_0
libwebp 1.2.4 h11a3e52_1
libwebp-base 1.2.4 h5eee18b_1
lz4-c 1.9.4 h6a678d5_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7f8727e_0
mkl_fft 1.3.1 py39hd3c417c_0
mkl_random 1.2.2 py39h51133e4_0
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
numpy 1.24.2 pypi_0 pypi
numpy-base 1.23.5 py39h31eccc5_0
openh264 2.1.1 h4ff587b_0
openssl 1.1.1t h7f8727e_0
packaging 23.0 pypi_0 pypi
pillow 9.4.0 py39h6a678d5_0
pip 23.0.1 py39h06a4308_0
pycparser 2.21 pyhd3eb1b0_0
pyopenssl 23.0.0 py39h06a4308_0
pysocks 1.7.1 py39h06a4308_0
python 3.9.16 h7a1cb2a_2
pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_3 pytorch
pytorch-mutex 1.0 cuda pytorch
pyyaml 6.0 pypi_0 pypi
readline 8.2 h5eee18b_0
regex 2022.10.31 pypi_0 pypi
requests 2.28.2 pypi_0 pypi
setuptools 65.6.3 py39h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.1 h5eee18b_0
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.2 pypi_0 pypi
torchaudio 0.13.1 py39_cu117 pytorch
torchvision 0.14.1 py39_cu117 pytorch
tqdm 4.65.0 pypi_0 pypi
transformers 4.28.0.dev0 pypi_0 pypi
typing-extensions 4.5.0 pypi_0 pypi
typing_extensions 4.4.0 py39h06a4308_0
tzdata 2022g h04d1e81_0
urllib3 1.26.15 pypi_0 pypi
wheel 0.38.4 py39h06a4308_0
xz 5.2.10 h5eee18b_1
zlib 1.2.13 h5eee18b_0
zstd 1.5.2 ha4553b6_0
```
```code
(huggingface-bug-test) anisaha1:~$ transformers-cli env
Traceback (most recent call last):
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 11, in <module>
import chardet
ModuleNotFoundError: No module named 'chardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/<redacted>/anaconda3/envs/huggingface-bug-test/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/__init__.py", line 30, in <module>
from .generic import (
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/generic.py", line 29, in <module>
from .import_utils import is_flax_available, is_tf_available, is_torch_available, is_torch_fx_proxy
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 32, in <module>
from . import logging
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/transformers/utils/logging.py", line 35, in <module>
import huggingface_hub.utils as hf_hub_utils
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/__init__.py", line 32, in <module>
from ._errors import (
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 3, in <module>
from requests import HTTPError, Response
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/__init__.py", line 45, in <module>
from .exceptions import RequestsDependencyWarning
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/exceptions.py", line 9, in <module>
from .compat import JSONDecodeError as CompatJSONDecodeError
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/requests/compat.py", line 13, in <module>
import charset_normalizer as chardet
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer/md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/<redacted>/anaconda3/envs/huggingface-bug-test/lib/python3.9/site-packages/charset_normalizer/constant.py)
```
```code
(huggingface-bug-test) anisaha1:~$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.7 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.7"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.7 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.7
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.7"
(base) anisaha1:~$ uname -r
4.18.0-425.13.1.el8_7.x86_64
```
3. https://github.com/psf/requests/issues/6384<|||||>`python -m pip install charset-normalizer==2.1.0`
solves the issue<|||||>> Yes, we can try to have it solve in requests first indeed. It's if that takes too much time or is not deemed important we should fix it in hf hub.
>
> @ani0075saha Could you try the two lines given by Wauplin and do step 2 and 3?
Hi @sgugger @Wauplin, the issue I made in requests library was closed. Any thoughts on next steps?<|||||>You can try opening an issue at [charset_normalizer](https://github.com/Ousret/charset_normalizer) and point out that their 3.1.0 release seems broken on MacOS (but 2.1.0 works apparently, from the comment above).
From your traceback, the simple line `import charset_normalizer` should fail in your environment (it doesn't in mine, but I'm not on MacOS).<|||||>I got the above error and did `python -m pip install charset-normalizer==2.1.0`. This gave me another error which went away after doing `pip install chardet `.
The error after 2.1.0 was as below but it was solved. I'm using M2 MAX and the packages below.
`ImportError: cannot import name 'KO_NAMES' from 'charset_normalizer.constant' (/opt/anaconda3/envs/mlenv/lib/python3.8/site-packages/charset_normalizer/constant.py)`
Package Version
------------------------ ----------
anyio 3.6.2
appnope 0.1.2
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.0.5
attrs 22.2.0
backcall 0.2.0
beautifulsoup4 4.11.2
bleach 6.0.0
brotlipy 0.7.0
certifi 2022.12.7
cffi 1.15.1
chardet 5.1.0
charset-normalizer 2.1.0
click 8.1.3
comm 0.1.2
contourpy 1.0.7
cryptography 39.0.1
cycler 0.11.0
debugpy 1.6.6
decorator 5.1.1
defusedxml 0.7.1
executing 0.8.3
fastjsonschema 2.16.2
filelock 3.9.0
flit_core 3.6.0
fonttools 4.39.0
fqdn 1.5.1
future 0.18.2
gmpy2 2.1.2
huggingface-hub 0.12.1
idna 3.4
importlib-metadata 6.0.0
importlib-resources 5.12.0
ipykernel 6.21.2
ipython 8.10.0
ipython-genutils 0.2.0
ipywidgets 8.0.4
isoduration 20.11.0
jedi 0.18.1
Jinja2 3.1.2
joblib 1.2.0
jsonpointer 2.3
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 8.0.3
jupyter-console 6.6.1
jupyter_core 5.2.0
jupyter-events 0.6.3
jupyter_server 2.3.0
jupyter_server_terminals 0.4.4
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.5
kiwisolver 1.4.4
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
mistune 2.0.5
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
mpmath 1.3.0
nbclassic 0.5.2
nbclient 0.7.2
nbconvert 7.2.9
nbformat 5.7.3
nest-asyncio 1.5.6
networkx 3.0
nltk 3.8.1
notebook 6.5.2
notebook_shim 0.2.2
numpy 1.23.5
packaging 23.0
pandas 1.5.3
pandocfilters 1.5.0
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pkgutil_resolve_name 1.3.10
platformdirs 3.0.0
portalocker 2.7.0
prometheus-client 0.16.0
prompt-toolkit 3.0.36
psutil 5.9.4
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
Pygments 2.11.2
pyOpenSSL 23.0.0
pyparsing 3.0.9
pyrsistent 0.19.3
PySocks 1.7.1
python-dateutil 2.8.2
python-json-logger 2.0.7
pytorch-crf 0.7.2
pytz 2023.2
PyYAML 6.0
pyzmq 25.0.0
qtconsole 5.4.0
QtPy 2.3.0
regex 2022.10.31
requests 2.28.2
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
scikit-learn 1.2.2
scikit-plot 0.3.7
scipy 1.10.1
seaborn 0.12.2
Send2Trash 1.8.0
sentence-transformers 2.2.2
sentencepiece 0.1.97
setuptools 65.6.3
six 1.16.0
sklearn 0.0.post1
sniffio 1.3.0
soupsieve 2.4
stack-data 0.2.0
sympy 1.11.1
terminado 0.17.1
threadpoolctl 3.1.0
tinycss2 1.2.1
tokenizers 0.12.1
torch 2.0.0
torchaudio 2.0.0
torchdata 0.6.0
torchtext 0.13.0
torchvision 0.15.0
tornado 6.2
tqdm 4.64.1
traitlets 5.7.1
transformers 4.27.4
typing_extensions 4.4.0
uri-template 1.2.0
urllib3 1.26.15
wcwidth 0.2.5
webcolors 1.12
webencodings 0.5.1
websocket-client 1.5.1
wheel 0.38.4
widgetsnbextension 4.0.5
zipp 3.14.0<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am currently having this issue with one anaconda environment and not another. This is very confusing. |
transformers | 21,857 | closed | Flax beam search fix | # What does this PR do?
Makes it so you can pass `decoder_attention_mask` into `model.generate` for flax models when doing beam search. This is helpful for models like Whisper where there may be variable length decoder prefixes across a batch, so you'd have to define a `decoder_attention_mask`.
@sanchit-gandhi
@sgugger
@frmccann97 | 02-28-2023 20:37:56 | 02-28-2023 20:37:56 | cc @gante<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,856 | closed | Add an utility file to get information from test files | # What does this PR do?
**Add an utility file to get information from test files.**
There are more places where we need to access the information contained in test files:
- tiny model creation script: need model tester in order to create (tiny) configuration
- pipeline testing:
- need some information so we can create `pipeline_model_mapping` in a systematic way (to avoid human error, to avoid time consuming manual edits) from the existing `model_mapping` and `tf_model_mapping` under `xxxPipelineTests` classes (which use AUTO mappings)
- need information so we can develop some checks to make sure no pipeline tests are missing
I think it's good if we have a centralized place (file) providing ways to get this information, therefore this PR comes.
This new file will be under development.
## One example usage
### code snippet
```
test_file = "tests/models/blip/test_modeling_blip.py"
test_file = f"{os.path.sep}".join(test_file.split("/"))
model_test_mapping = get_model_to_test_mapping(test_file)
model_tester_mapping = get_model_to_tester_mapping(test_file)
print(json.dumps(to_json(model_test_mapping), indent=4))
print(json.dumps(to_json(model_tester_mapping), indent=4))
```
### model to test classes
```python
{
"BlipForConditionalGeneration": [
"BlipTextImageModelTest"
],
"BlipForImageTextRetrieval": [
"BlipTextRetrievalModelTest"
],
"BlipForQuestionAnswering": [
"BlipTextImageModelTest",
"BlipVQAModelTest"
],
"BlipModel": [
"BlipModelTest"
],
"BlipTextModel": [
"BlipTextModelTest"
],
"BlipVisionModel": [
"BlipVisionModelTest"
]
}
```
### model to tester classes
```python
{
"BlipForConditionalGeneration": [
"BlipTextImageModelsModelTester"
],
"BlipForImageTextRetrieval": [
"BlipTextRetrievalModelTester"
],
"BlipForQuestionAnswering": [
"BlipModelTester",
"BlipTextImageModelsModelTester"
],
"BlipModel": [
"BlipModelTester"
],
"BlipTextModel": [
"BlipTextModelTester"
],
"BlipVisionModel": [
"BlipVisionModelTester"
]
}
``` | 02-28-2023 19:44:26 | 02-28-2023 19:44:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> It's all looking great! Can you add a couple of unit tests in `tests/repo_utils` for this file? It may help you catch some bugs and make sure any future PRs don't break anything.
Hi! I added a test file under `tests/repo_util/`, but as you know, the newly introduced methods require some libraries to be there, so we can import modules and get the list of model tester/test/classes. Therefore we can't test against some expected values on CircleCI `repo_utils_job`, where even `torch` or `vision` is not there.
Do you have any suggestion, say, moving this new test file outside `tests/repo_util/`?
~~Or maybe I should create some dummy model/test/tester classes dynamically inside the test file, and use them for testing?~~ |
transformers | 21,855 | closed | Move common properties to BackboneMixin | # What does this PR do?
First of a series of PRs to enable loading timm checkpoints using the `AutoBackbone` API. This PR moves common logic e.g. `channels` property to the `BackboneMixin` class. This is for two main reasons:
* Reduce duplicated code
* Enable using similar logic across the transformer and timm backbones
## Series of PRs
- [x] Moving common logic into the `BackboneMixin` class (this PR)
- [ ] Add `out_indices` to backbones - [PR](https://github.com/amyeroberts/transformers/pull/109/files)
Note: This is an optional design choice and not necessary for loading the timm backbones
- [ ] Add tests for backbone models - [PR](https://github.com/amyeroberts/transformers/pull/110/files)
- [ ] Add `TimmBackbone` model that can be loaded through `AutoBackbone` - [PR](https://github.com/amyeroberts/transformers/pull/111/files)
This is where all the important stuff happens
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 02-28-2023 18:58:41 | 02-28-2023 18:58:41 | _The documentation is not available anymore as the PR was closed or merged._ |
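A minimal sketch of the "move shared logic to the mixin" idea from this record; the attribute names (`num_features`, `out_indices`) are assumptions for illustration, not necessarily the final implementation.
```python
class BackboneMixin:
    @property
    def channels(self):
        # defined once here instead of being re-declared on every backbone model
        return [self.num_features[idx] for idx in self.out_indices]


class ToyBackbone(BackboneMixin):
    def __init__(self):
        self.num_features = [64, 128, 256, 512]
        self.out_indices = (1, 2, 3)


print(ToyBackbone().channels)  # [128, 256, 512]
```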
transformers | 21,854 | closed | Running squad with GPT-J-6B fails due to issue in tokenizer. | ### System Info
Hi,
I am trying to use the run_qa.py script under examples/tensorflow/question-answering
Model EleutherAI/gpt-j-6B.
dataset squad.
it fails as shown in the error screenshot attached to the original issue.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformers from source
2. Run python examples/tensorflow/question-answering/run_qa.py with --model_name EleutherAI/gpt-j-6B --dataset_name squad --do_train --do_eval
### Expected behavior
The script to run the finetuning task | 02-28-2023 18:25:07 | 02-28-2023 18:25:07 | This example does not support GPT-J out of the box since GPT-J has no CLS token (compared to BERT or XLNet). You will need to adapt the preprocessing as a result.<|||||>@sgugger Can you please elaborate.
I tried using the DistilBERT tokenizer and it is running.
I am not sure if that is good.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
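A hedged sketch of one possible direction for the preprocessing adaptation mentioned above: give the GPT-J tokenizer the CLS and PAD tokens the squad preprocessing assumes exist. The token strings are illustrative, and this alone may not be the whole adaptation the script needs.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
# GPT-J ships without CLS/PAD tokens; the squad preprocessing assumes both exist
num_added = tokenizer.add_special_tokens({"cls_token": "[CLS]", "pad_token": "[PAD]"})
# the model's embedding matrix would then need resizing:
# model.resize_token_embeddings(len(tokenizer))
```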
transformers | 21,853 | closed | [GPT2] Propose fix for #21080 | # What does this PR do?
Propose fix for #21080.
Modify the default generation of the positional ids for the `gpt-2` as well as the `decision_transformer` model.
In the issue @LysandreJik proposed to update the doc, but when I checked it seemed like this would only affect 2 models, and is backward compatible:
- potential impact is only for people that were using batched padded input when generating with a single sentence. This means that their output will now be corrected
- default behavior is kept if no attention mask is given (a rough sketch of the proposed default follows below)
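A rough sketch of the proposed default; it mirrors what `prepare_inputs_for_generation` already computes from the attention mask, as noted later in this thread. The example mask is a placeholder.
```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```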
If this is not acceptable, I am also glad to add a warning when creating the positional ids. | 02-28-2023 17:36:55 | 02-28-2023 17:36:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'd like to summon the generation expert @gante to ask what he thinks of the PR<|||||>The summon has been heard
### TL;DR
I approve the change. But we need to have a second look at the position embeddings; I suspect there are several bugs in the codebase (see why below)
### Context
This PR actually took me on a long trip, whose findings I summarize below:
1. I saw that @ArthurZucker wrote `this would only affect 2 models`, and my first thought was `what about GPT-J`?
2. Then I saw #21869 (remove `position_ids` input from GPT-J). From the TF XLA `.generate` transition, I remember that getting the `position_ids` to work with GPT-J was a nice piece of work, and I remember that it made a difference in the outputs. See the example below
<details>
<summary>GPT-J + positions_ids</summary>
```py
from transformers import TFAutoModelForCausalLM, AutoTokenizer
import tensorflow as tf
tf.keras.backend.set_floatx('float16')
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", padding_side="left")
model = TFAutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", from_pt=True)
tok.pad_token = tok.eos_token
model.generation_config.pad_token_id = model.generation_config.eos_token_id
inputs = tok(["and the prime minister"], return_tensors="tf", padding=True)
out_1 = model(**inputs)
out_2 = model(**inputs)
position_ids = tf.math.cumsum(inputs.attention_mask, axis=-1, exclusive=True)
out_3 = model(**inputs, position_ids=position_ids + 10)
print(tf.reduce_max(tf.abs(out_1.logits[:, -1, :] - out_2.logits[:, -1, :]))) # tf.Tensor(0.0, shape=(), dtype=float16)
print(tf.reduce_max(tf.abs(out_1.logits[:, -1, :] - out_3.logits[:, -1, :]))) # tf.Tensor(0.01563, shape=(), dtype=float16)
```
</details>
3. Despite the above, and looking at this PR, I think what @ArthurZucker wrote here is the way to go. In `prepare_inputs_for_generation`, we were computing `position_ids` from the `attention_mask` anyways. If we pass the logic from there to the forward pass, we cut a source of bugs (users trying to generate without `.generate()`) π
4. GPT-J needs to be fixed (see TF/FLAX) :p To be clear, the issue is long-standing and not a result of #21869 !
5. We should double-check at least the main models. Some models, like `OPT`, are okay, as they compute the position embedding directly from the attention mask. Others, like `Codegen`, suffer from the same problem as `GPT-J`.
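To make point 3 concrete, here is a minimal PyTorch sketch (illustrative, not the exact diff) of the attention-mask-derived default that `prepare_inputs_for_generation` was already computing and that this PR moves into the forward pass:
```python
import torch

def default_position_ids(attention_mask: torch.LongTensor, past_length: int = 0) -> torch.LongTensor:
    # Each real token gets its cumulative position; padded positions are
    # clamped to a dummy value (1), mirroring the generate-time logic.
    position_ids = attention_mask.long().cumsum(-1) - 1
    position_ids.masked_fill_(attention_mask == 0, 1)
    return position_ids[:, past_length:]

mask = torch.tensor([[0, 0, 1, 1, 1], [1, 1, 1, 1, 1]])
print(default_position_ids(mask))
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```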
<|||||>@gante I was coincidentally just having this problem with codegen and so have opened #22069 following your hints above.<|||||>Thanks for the in depth review @gante ! <|||||>Will re-open this with a fix for the cross PT-TF tests. The TF code has to be modified as otherwise the default positional ids are wrong. <|||||>Sorry for missing that the tf version also needed an update on this one! <|||||>Thank you @ArthurZucker . No worry - we didn't detect this because the PT/TF cross tests in the corresponding TF model test files are not fetched by the test fetcher script. The current version will only detect the tests in the (modified + involved indirectly) PyTorch modeling/test files. |
transformers | 21,852 | closed | TrOCR comment change | Thanks for the TrOCR modules @NielsRogge !
I noticed something very small in one of the comments - if you copy pasted the lines from the comment block, they wouldn't work, because the default image size in `ViTConfig` is 224, while the `processor` is expecting the image to be resized to 384x384. | 02-28-2023 17:28:25 | 02-28-2023 17:28:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi,
Thanks for your PR! We actually test that code snippet as TrOCR is [included in the doc tests](https://github.com/huggingface/transformers/blob/b29e2dcaff114762e65eaea739ba1076fc5d1c84/utils/documentation_tests.txt#L190). So the code runs fine. Of course, if you would use that model in combination with TrOCRProcessor, you would have to instantiate the image processor's size to be {"height": 224, "width": 224}, like so:
```
from transformers import RobertaTokenizer, ViTImageProcessor, TrOCRProcessor, ViTConfig, TrOCRConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
image_processor = ViTImageProcessor(size={"height": 224, "width": 224})
processor = TrOCRProcessor(tokenizer=tokenizer, image_processor=image_processor)
config_encoder = ViTConfig()
config_decoder = TrOCRConfig()
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```<|||||>Thanks for your quick reply!
I believe that particular bit of code in the docstring runs fine because `model` gets overwritten a couple of lines below it. Do you think it is worth it to put the lines of code you've written in your comment in the docstring as well in case someone wants to try to train a model without the weights from microsoft?
If not, it's not a big deal, I'll just close this PR. |
transformers | 21,851 | closed | [WIP] Flax pipeline support :pickup_truck: | # What does this PR do?
Adds flax support in pipelines + corresponding flax changes of https://github.com/huggingface/transformers/pull/21516 by [ydshieh](https://github.com/ydshieh)
> Right before I was preparing to open the pull request, with pretty much everything ready for review (including tests, ~ all tasks working), I was looking for issues this PR will fix and accidentally found a PR [[WIP] Adding support for flax for pipelines.](https://github.com/huggingface/transformers/pull/14356/) (nearly 1-2 years old) by [Narsil](https://github.com/Narsil) previously unknown to me. I am adding this to alleviate any confusion; after the commit `bug fixes & multiple tasks support` I still continued with this PR since (I guess) a lot of the `transformers` codebase has changed/updated, but I did use Narsil's PR to add anything I missed in this PR and to make it more polished.
> So, I deeply thank Narsil and patrickvonplaten for the PR & review of Narsil's PR, as it helped this PR become more polished, and apologies as I should have checked whether any similar PR was already in progress.
*Not tagging/mentioning reviewers yet, since this is currently a WIP.*
## Examples
```py
# Image Classifcation
vision_classifier = pipeline(task="image-classification", framework="flax", device="cuda:0") #cpu:0
print(vision_classifier(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"))
# Translation
en_fr_translator = pipeline("translation_en_to_fr", framework="flax")
print(en_fr_translator("How old are you?"))
# Image to Text
captioner = pipeline(model="ydshieh/vit-gpt2-coco-en", framework="flax")
print(captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"))
```
## Currently Supported tasks
Task |PT| TF| Flax| Reason for no Flax support|
|---|---|---|---|---|
audio_classification |:heavy_check_mark: |:x:| :x: |`AudioClassificationPipeline` is only available in PyTorch.|
automatic_speech_recognition| :heavy_check_mark:| :x:| :x: |The `AutomaticSpeechRecognitionPipeline` is only available in PyTorch.|
conversational |:heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:|
depth_estimation| :heavy_check_mark:| :x:| :x: |No Flax model available.|
document_question_answering| :heavy_check_mark:| :x:| :x:| No `FlaxAutoModelForDocumentQuestionAnswering` class available.|
feature_extraction |:heavy_check_mark:| :heavy_check_mark: |:heavy_check_mark:|
fill_mask| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark: |
image_classification| :heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:|
image_segmentation| :heavy_check_mark:| :x: |:x: |No Flax model available.|
image_to_text| :heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:
object_detection| :heavy_check_mark:| :x: |:x:| No Flax model available.
question_answering| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
summarization| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
table_question_answering| :heavy_check_mark:| :heavy_check_mark:| :x:| No Flax model available.
text2text_generation| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
text_classification |:heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
text_generation |:heavy_check_mark: |:heavy_check_mark: |:heavy_check_mark:
token_classification| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:
translation |:heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:
video_classification| :heavy_check_mark: |:x: |:x:| No Flax model available.
visual_question_answering| :heavy_check_mark: |:x:| :x: |No Flax model available.
zero_shot_classification| :heavy_check_mark: |:heavy_check_mark:| :heavy_check_mark:
zero_shot_object_detection| :heavy_check_mark: |:x:| :x: |No Flax model available.
## Custom models link used in testing
~Could probably fix this by adding `from_pt=True` in `model_kwargs` in testing.~ Nope, during Flax testing, PyTorch doesn't seem to be available.
- [Shubhamai/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/Shubhamai/distilbert-base-uncased-finetuned-sst-2-english) < [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
- [Shubhamai/tiny-random-distilbert](https://huggingface.co/Shubhamai/tiny-random-distilbert) < [hf-internal-testing/tiny-random-distilbert](https://huggingface.co/hf-internal-testing/tiny-random-distilbert)
- [Shubhamai/tiny-random-vit](https://huggingface.co/Shubhamai/tiny-random-vit) < [hf-internal-testing/tiny-random-vit](https://huggingface.co/hf-internal-testing/tiny-random-vit)
- [Shubhamai/tiny-mbart](https://huggingface.co/Shubhamai/tiny-mbart) < [sshleifer/tiny-mbart](https://huggingface.co/)
- [Shubhamai/tiny-bert-for-token-classification](https://huggingface.co/Shubhamai/tiny-bert-for-token-classification) < [hf-internal-testing/tiny-bert-for-token-classification](https://huggingface.co/hf-internal-testing/tiny-bert-for-token-classification)
- [Shubhamai/tiny-random-clip-zero-shot-image-classification](https://huggingface.co/Shubhamai/tiny-random-clip-zero-shot-image-classification) < [hf-internal-testing/tiny-random-clip-zero-shot-image-classification](https://huggingface.co/hf-internal-testing/tiny-random-clip-zero-shot-image-classification)
- [Shubhamai/tiny-distilbert-base-cased-distilled-squad](https://huggingface.co/Shubhamai/tiny-distilbert-base-cased-distilled-squad) < [sshleifer/tiny-distilbert-base-cased-distilled-squad](https://huggingface.co/sshleifer/tiny-distilbert-base-cased-distilled-squad)
## Few questions for maintainers and users.
- Should the framework name in `pipeline(..., framework=)` be `jax` or `flax`? In this PR I used `flax` because we have been using it as a prefix in Flax models, although we use `jax` as the alias in tokenizer functions and on the Hugging Face Hub, so I am unable to make a final decision on this.
- If we use `framework="flax"`, a somewhat inconvenient bit of code emerges, `self.tokenizer(inputs, return_tensors=self.framework if self.framework != "flax" else "jax")`, because the tokenizer/image processor does not recognize the `flax` framework, so we have to map it to `jax`. Although it is a very small piece of code, I am a bit uncomfortable presenting it, as it could be a source of bugs. A solution would be to either rename `framework` to `jax` or add a `flax` alias in the tokenizer/image processor that behaves exactly like `jax`.
- Should the model be JIT-compiled by default, or should that be another function argument? (A small sketch of the trade-off follows below.)
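A rough sketch of the JIT trade-off mentioned in the last question (assumptions: PyTorch is available for `from_pt=True`, and the checkpoint name is just an example):
```python
import jax
from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_pt=True assumes PyTorch weights; drop it if the repo ships Flax weights.
model = FlaxAutoModelForSequenceClassification.from_pretrained(model_id, from_pt=True)

@jax.jit
def forward(input_ids, attention_mask):
    # Jitting the forward pass is what makes Flax competitive, but the first
    # call pays compilation time and every new input shape recompiles, hence
    # the fixed-length padding below.
    return model(input_ids, attention_mask=attention_mask).logits

inputs = tokenizer("This movie was great!", return_tensors="jax", padding="max_length", max_length=32)
logits = forward(inputs["input_ids"], inputs["attention_mask"])
print(logits.shape)  # (1, 2)
```
Whether that compilation cost is acceptable "by default" inside a quick-try API is exactly the open question above.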
Fixes
- https://github.com/huggingface/transformers/issues/12627
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- flax: sanchit-gandhi
- pipelines: Narsil
## TODO
- [ ] Profiling & Benchmarking.
- [ ] Updating docs. | 02-28-2023 17:17:52 | 02-28-2023 17:17:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21851). All of your documentation changes will be reflected on that endpoint.<|||||>@Narsil , @sanchit-gandhi It looks good for an initial review. @sanchit-gandhi I am not also super confident if I have gotten the `jit` part right, I would appreciate if you can take a look and given any feedbacks on that.
Thanks a lot for your time. <|||||>Awesome to see this! Nice timing. automatic_speech_recognition would be cool to have too with `FlaxWhisper` that was merged recently!<|||||>Thanks for your PR @Shubhamai . At this stage, we do not have plans to have a pipeline in Flax similar to the ones in PyTorch and TensorFlow, which is why the original PR from @Narsil was not continued. There are several reasons for that:
1. The `pipeline` object is aimed at software engineers not necessarily familiar with machine learning, and Flax users are more researchers
2. `pipeline`s are meant to quickly try out a task, which does not work well in Flax/JAX where you have to compile and jit to get nice performance.
We can leave the PR open so that users try out your branch if they want something like a pipeline for Flax, but we don't want to commit to maintain this code, so we won't merge it in the main branch. What we might add in the future is a way to use large models on TPUs with Jax in a pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,850 | closed | ValueError: Please make sure you have `sentencepiece` installed in order to use this tokenizer. | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <No>
### Who can help?
@Narsil @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run in colab the example in this documentation https://huggingface.co/tasks/translation:
```
!pip install transformers transformers[sentencepiece]
```
```
from transformers import pipeline
model_checkpoint = "Helsinki-NLP/opus-mt-en-fr"
translator = pipeline("translation", model=model_checkpoint)
translator("How are you?")
# [{'translation_text': 'Comment allez-vous ?'}]
```
### Expected behavior
output
```
# [{'translation_text': 'Comment allez-vous ?'}]
```
as signaled in this documentation https://huggingface.co/tasks/translation | 02-28-2023 17:08:41 | 02-28-2023 17:08:41 | Following the steps you indicate to reproduce does not give me any error in Colab. Is it possible you installed sentencepiece after installing transformers and did not restart your environment?<|||||>You are right, thanks @sgugger
(reloading google colab didn't fix my issue but creating a new colab did the trick) |
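A small hedged addendum for anyone hitting the same thing: restarting the runtime (rather than recreating the notebook) is usually enough after installing `sentencepiece` in a live Colab session, e.g.:
```python
import os

# Colab picks up newly pip-installed packages only after the Python process
# restarts; killing the kernel makes Colab restart the runtime automatically.
os.kill(os.getpid(), 9)
```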
transformers | 21,849 | closed | [ConvBert] Fix #21523 | # What does this PR do?
Fixes #21523, the invalid reshaping of the context layer and adds a test to make sure we support different head ratios.
Made sure that the slow tests all pass.
| 02-28-2023 16:27:39 | 02-28-2023 16:27:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,848 | closed | Token classification for a non-textual data | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.2+cu113 (True)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm looking for an implementation of an architecture that performs token classification, but the input is not an integer that represents the vocabulary but a vector of numbers.
Basically, each token in the input is represented by a vector: each token is already an embedding vector.
How can this be achieved?
Best,
Vitaly
### Expected behavior
Input vector of size 768 for each token. A sequence of such tokens of up to 512.
Maybe it is as simple as removing the layer
(word_embeddings): Embedding(50265, 768, padding_idx=1)?
In any case a link to the solution would be most helpful.
Best,
Vitaly
| 02-28-2023 13:34:28 | 02-28-2023 13:34:28 | You should probably ask this open question on the [forums](https://discuss.huggingface.co/).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
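As an editorial addendum to the question above: most `*ForTokenClassification` models accept `inputs_embeds`, so precomputed per-token vectors can bypass the word-embedding lookup entirely. The checkpoint name and label count below are placeholders, and the random tensors only illustrate the expected shapes.
```python
import torch
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=5)

batch, seq_len = 2, 16
hidden = model.config.hidden_size  # must match the dimensionality of your vectors (768 here)
inputs_embeds = torch.randn(batch, seq_len, hidden)   # your precomputed token vectors
attention_mask = torch.ones(batch, seq_len, dtype=torch.long)

outputs = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
print(outputs.logits.shape)  # torch.Size([2, 16, 5]) — one label distribution per token
```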
transformers | 21,847 | closed | fsdp bf16 enable autocast | # What does this PR do?
1. Fixes #21560 wrt FSDP integration.
| 02-28-2023 12:15:26 | 02-28-2023 12:15:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @sgugger, officially, there is not much info on `mixed_precision` support of FSDP. In the official docs here https://pytorch.org/docs/stable/fsdp.html, it doesn't mention anything regarding `bf16` and `fp16` nuances.
In the official tutorial, https://github.com/lessw2020/transformer_central/tree/main/mixed_precision, they don't specify the need for autocasting for `bf16`. It does mention the need for `ShardedGradScaler` for `fp16`, and the issue https://github.com/pytorch/pytorch/issues/75676 is still open.
However, `bf16` support with FSDP does work for a few models, notably T5, as mentioned in this GitHub comment from my experiments: https://github.com/pytorch/pytorch/issues/79605#issuecomment-1184410231
In T5, attention probs are cast back to `bf16` explicitly by this line of code: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L560-L562.
This avoids the error that happens for BERT in #21560. With this observation, I just tried enabling autocast and observed no errors and expected performance. Hence, this PR.
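A self-contained toy sketch of what "enabling autocast" amounts to here (this is not the Trainer's actual code: a plain `nn.Linear` stands in for the FSDP-wrapped model, and a CUDA device is assumed):
```python
import torch
from torch import nn

model = nn.Linear(8, 2).cuda()                      # stand-in for the FSDP-wrapped model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
batch = torch.randn(4, 8, device="cuda")

# Running forward/backward inside a bf16 autocast context lets mixed
# fp32/bf16 operands (the BERT failure mode in #21560) be cast consistently.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(batch).float().mean()
loss.backward()
optimizer.step()
```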
<|||||>using this PR when I run:
```bash
torchrun --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --overwrite_output_dir --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir $TASK_NAME/ --fsdp "full_shard auto_wrap" --fsdp_config "fsdp_config.json" --bf16
```
with `fsdp_config.json` contents being:
```
{
"fsdp_transformer_layer_cls_to_wrap": "BertLayer",
"fsdp_backward_prefetch": "backward_pre",
"fsdp_forward_prefetch": true,
"limit_all_gathers": true
}
```
output logs:
```
wandb: Run summary:
wandb: eval/accuracy 0.84804
wandb: eval/combined_score 0.87039
wandb: eval/f1 0.89273
wandb: eval/loss 0.36461
```
without fsdp run gave below results:
```
torchrun --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --overwrite_output_dir --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3 --output_dir $TASK_NAME/ --bf16
```
```
wandb: Run summary:
wandb: eval/accuracy 0.84804
wandb: eval/combined_score 0.87057
wandb: eval/f1 0.8931
wandb: eval/loss 0.36857
```
So, similar performance between them. |
transformers | 21,846 | closed | [time series] Add Time series inputs tests | # What does this PR do?
This PR adds tests to make sure that the appropriate inputs are being created for the time series transformer for the training and generation use-cases.
cc @NielsRogge
| 02-28-2023 12:11:32 | 02-28-2023 12:11:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,845 | closed | Copy models back to CPU before merging them after evaluation. | # What does this PR do?
During evaluation, move the predictions from the `eval_accumulation_steps` to CPU before merging them to avoid an out-of-memory error. | 02-28-2023 11:54:35 | 02-28-2023 11:54:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR. This is done by the function `nested_numpify` on the line just below.<|||||>I see... I still have a problem when evaluating my model (out-of-memory). I will open an issue about it.
I'm closing this pull request, thanks! |
transformers | 21,844 | closed | Fix gradient checkpointing bug BioGpt | # What does this PR do?
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes issue #21737 for BioGpt.
cc @younesbelkada, @gante | 02-28-2023 11:18:51 | 02-28-2023 11:18:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,843 | closed | [`T5`] Fix torchquant issue | # What does this PR do?
Fixes #21839
This PR fixes a bug that was introduced with https://github.com/huggingface/transformers/pull/21281 - before this PR, the snippet below was working:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output = model.generate(input_ids)
```
On the `main` branch the snippet no longer works because of the `self.wo.weight.dtype` check: `torch.quantization.quantize_dynamic` replaces every `nn.Linear` with a dynamically quantized module whose `weight` attribute is a bound method rather than a tensor, leading to an error.
Since the users were able to run this snippet on previous versions, I think that we should support this feature.
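A small self-contained illustration (separate from the fix itself) of why the old comparison broke after dynamic quantization:
```python
import torch

lin = torch.nn.Linear(4, 4)
qmodel = torch.quantization.quantize_dynamic(
    torch.nn.Sequential(lin), {torch.nn.Linear}, dtype=torch.qint8
)
qlin = qmodel[0]

print(type(lin.weight))                       # <class 'torch.nn.parameter.Parameter'>
print(callable(qlin.weight))                  # True — `.weight` is now a bound method
print(isinstance(qlin.weight, torch.Tensor))  # False, so `.weight.dtype` no longer exists
```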
Added also a cool test for that
cc @sgugger
| 02-28-2023 09:38:24 | 02-28-2023 09:38:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can confirm t5 & int8 slow tests are passing, merging! |
transformers | 21,842 | closed | Fix gradient checkpointing bug marian | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 02-28-2023 09:26:16 | 02-28-2023 09:26:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,841 | closed | Fix gradient checkpointing bug M2M 100 | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 02-28-2023 09:23:46 | 02-28-2023 09:23:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,840 | closed | Fix gradient checkpointing bug LED | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 02-28-2023 09:20:09 | 02-28-2023 09:20:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,839 | closed | quantize_dynamic on T5 model results in `AttributeError: 'function' object has no attribute 'dtype'` | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Paste the following into colab:
```
!pip install transformers sentencepiece accelerate
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output = model.generate(input_ids)
```
### Expected behavior
No Error
# Bug
```
AttributeError Traceback (most recent call last)
[<ipython-input-3-96b349bbc122>](https://localhost:8080/#) in <module>
1 input_text = "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
2 input_ids = tokenizer(input_text, return_tensors="pt").input_ids
----> 3 output = model.generate(input_ids)
10 frames
[/usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, hidden_states)
314 # See https://github.com/huggingface/transformers/issues/20287
315 # we also make sure the weights are not in `int8` in case users will force `_keep_in_fp32_modules` to be `None``
--> 316 if hidden_states.dtype != self.wo.weight.dtype and self.wo.weight.dtype != torch.int8:
317 hidden_states = hidden_states.to(self.wo.weight.dtype)
318
AttributeError: 'function' object has no attribute 'dtype'
```
# Fix
When I commented out the lines 316-317 in `transformers/models/t5/modeling_t5.py`, the model runs.
`quantize_dynamic` converts `self.wo.weight` into a bound function which when called, returns the weights. Hence `self.wo.weight` is a function with no attribute `dtype`.
Bug is introduced by #21281 | 02-28-2023 09:18:30 | 02-28-2023 09:18:30 | Hello @gerhean
Thanks for the issue!
https://github.com/huggingface/transformers/pull/21843 should fix your problem, you can already use the fix by checking out on the branch<|||||>Hi @gerhean
Now it's on the `main` branch, if you should be able to use it without any issue! |
transformers | 21,838 | closed | Unable to convert BioGpt slow tokenizer to fast: token out of vocabulary | ### System Info
I was trying to use the BioGpt model in my QA task for fine-tuning. I would like to construct a fast tokenizer class based on the BioGptTokenizer, so that I could use the offsets_mapping to know which words the tokens originate from. Unfortunately, when creating a BiogptTokenizerFast from the PreTrainedTokenizerFast via `convert_slow_tokenizer`, the following error occurs: Error while initializing BPE: Token `-@</w>` out of vocabulary.
#### Error trace
```
Traceback (most recent call last):
File "run.py", line 124, in <module>
trainer, predict_dataset = get_trainer(args)
File "***/tasks/qa/get_trainer.py", line 31, in get_trainer
tokenizer = BioGptTokenizerFast.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1801, in from_pretrained
return cls._from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1956, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "***/model/biogpt/tokenization_biogpt_fast.py", line 117, in __init__
super().__init__(
File "***/model/biogpt/tokenization_utils_fast.py", line 114, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "***/model/biogpt/convert_slow_tokenizer.py", line 1198, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "***/model/biogpt/convert_slow_tokenizer.py", line 273, in converted
BPE(
Exception: Error while initializing BPE: Token `-@</w>` out of vocabulary
```
### Who can help?
@ArthurZucker @younesbelkada @kamalkraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I copy the code related to colab.This is the link : https://colab.research.google.com/drive/1IMhiDz45GiarBLgXG9B2rA_u0ZOmmjJS?usp=sharing
### Expected behavior
According to this issue https://github.com/huggingface/transformers/issues/9290, this problem might be caused by some missing tokens in `vocab.json` or `merges.txt`. Could you please check it? Thank you very much! | 02-28-2023 06:40:06 | 02-28-2023 06:40:06 | Hey! It would be a bit difficult to say that this is a bug, as we do not have an implementation of a `BioGPTConverter` (which would be cool to add, by the way). In order to properly create a fast tokenizer you need to have the right `normalizer`/`pre_tokenizer`, the `decoder`, etc. Take a look at `convert_slow_tokenizer` for more details! Feel free to open a PR and ping me π <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
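For anyone picking this up, here is a very rough, untested skeleton of what a converter could look like, assuming `BioGptTokenizer` exposes `encoder` and `bpe_ranks` like other BPE slow tokenizers. A real converter would also need the right Moses-style pre-tokenization and a consistent vocab/merges pair (the `-@</w>` error above suggests merges referencing tokens missing from the vocab).
```python
from tokenizers import Tokenizer, decoders, pre_tokenizers
from tokenizers.models import BPE

class BioGptConverter:
    """Skeleton only — the real thing would subclass the Converter base class
    in transformers' convert_slow_tokenizer.py and mirror BioGPT exactly."""

    def __init__(self, original_tokenizer):
        self.original_tokenizer = original_tokenizer

    def converted(self) -> Tokenizer:
        vocab = self.original_tokenizer.encoder
        merges = list(self.original_tokenizer.bpe_ranks.keys())
        tokenizer = Tokenizer(
            BPE(vocab, merges, unk_token="<unk>", end_of_word_suffix="</w>")
        )
        tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
        tokenizer.decoder = decoders.BPEDecoder(suffix="</w>")
        return tokenizer
```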
transformers | 21,837 | closed | transformers == 4.26.0 has a bug | ### System Info
Traceback (most recent call last):
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
ImportError: cannot import name 'BridgeTowerProcessor' from 'transformers' (lib/python3.7/site-packages/transformers/__init__.py). Could you fix this bug? I am using transformers == 4.26.0 (installed via pip).
/cc @xianbaoqian
### Who can help?
@xianbaoqian
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0, 1].item()
### Expected behavior
fix this bug | 02-28-2023 05:40:10 | 02-28-2023 05:40:10 | If you want to use this model, you should build from source. It was merged to main on the 25th of January, the release does not include it. Next release will make it available.
Closing as I can import on main. |
transformers | 21,836 | closed | HF's Flan-T5 implementation doesn't support Chinese or code despite being trained on it | ### System Info
transformers == 4.26.1
pytorch == 1.13.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
tokenizer.decode(tokenizer("δ½ ε₯½δ½ ε₯½ε").input_ids)
```
returns `<unk></s>`.
Similarly, the tokenizer can't encode curly braces (`{` or `}`) or `\n` or `\t`, making it useless for code. Is the tokenizer included with the model the right one?
### Expected behavior
The tokenizer should be able to encode Asian languages (including Chinese) as well as code. The model was trained on both according to the paper. Did you port the proper tokenizer from the T5x repo?
I would appreciate your help. | 02-28-2023 03:11:50 | 02-28-2023 03:11:50 | Hey! Thanks for posting. The original tokenizer does not support chinese (it only supports 4 language I think) either.
Here is a minimal reproducing script using the vocabulary path provided in the `t5_1_1_base.gin` that is used for all of the Flan T5 (according to github).
```python
>>> import seqio
>>> vocabulary = seqio.SentencePieceVocabulary("gs://t5-data/vocabs/cc_all.32000.100extra/sentencepiece.model")
>>> vocabulary.tokenizer.encode("δ½ ε₯½δ½ ε₯½ε")
[3, 2]
>>> vocabulary.tokenizer.decode(vocabulary.tokenizer.encode("δ½ ε₯½δ½ ε₯½ε"))
' β '
```
We probably made a mistake in the `tags` of the model that should not include these. The paper does not mention anything else, and I tested with the mT5 tokenizer without avail.
Will try too look a bit more into this. <|||||>They probably did not release the multilingual finetune checkpoints : we only have token vocabulary of 32 000 instead of 250 000 used for mT5 their multilingual tokenizers.<|||||>Thanks for the clarification @ArthurZucker. But the paper did mention the model being fine-tuned on code -- but I don't see how that is possible if the model can't support newlines, brackets, or tabs as tokens.<|||||>Yes! The paper seems to mention multilingual and code, but I dug and could not reproduce anything... Again a minimal reproduction script:
```python
>>> import seqio
>>> vocabulary = seqio.SentencePieceVocabulary("gs://t5-data/vocabs/cc_all.32000.100extra/sentencepiece.model")
>>> vocabulary.tokenizer.encode("if True:\n\tprint('Wow')")
[3, 99, 10998, 10, 2281, 599, 31, 518, 2381, 31, 61]
>>> vocabulary.tokenizer.decode(vocabulary.tokenizer.encode("if True:\n\tprint('Wow')"))
"if True: print('Wow')"
```
I might not be using some special arguments but I am not familiar with the black box seqio π
<|||||>Right -- this is exactly what I'm talking about. Is there any way to reach out to the authors and figure this out? Seems pretty important that core functionality isn't working. And in the meantime, perhaps the model card should remove mentions to multilingual capabilities that it can't actually support.<|||||>We merged a lot of PRs on the hub to fix this. Marking as resolved! Thanks for reporting<|||||>@ArthurZucker
Hi, I have the same problem using Flan-T5 to segment Chinese.
Is there something I should do to resolve this, like upgrading a pip package,
or other things?<|||||>You should just read this issue π nothing we can do about it unfortunately <|||||>@ArthurZucker ok, I read that commit,and I get it, thank you |
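For later readers, the same coverage check can be done with the Hugging Face tokenizer itself (flan-t5-small shares the 32k SentencePiece vocab of the larger checkpoints); anything outside the vocabulary simply round-trips to `<unk>`:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
for text in ["δ½ ε₯½δ½ ε₯½ε", "if True:\n\tprint('Wow')", "hello {braces}"]:
    ids = tok(text).input_ids
    print(repr(text), "->", tok.convert_ids_to_tokens(ids))
```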
transformers | 21,835 | closed | Troubleshooting AttributeError: 'Seq2SeqTimeSeriesPredictionOutput' object has no attribute 'sequences' | Hi, I am reporting this issue as recommended
Context: I am trying to setup/test transformers library, I managed to get the installation and followed the steps in this specific time series transformer tutorial: https://huggingface.co/docs/transformers/model_doc/time_series_transformer
I received an error when I tried to invoke:
`mean_prediction = outputs.sequences.mean(dim=1)`
The error: AttributeError: 'Seq2SeqTimeSeriesPredictionOutput' object has no attribute 'sequences'
by:
`print(outputs.keys())`
I got:
`odict_keys(['loss', 'params', 'encoder_last_hidden_state', 'scale', 'static_features'])`
I searched the documentations but can not find any match.
`import transformers as tfs
print(tfs.__version__)`
4.26.1
I would really appreciate some feedback in regards to this issue.
Thank you
| 02-28-2023 01:59:20 | 02-28-2023 01:59:20 | Hey. I don't think you are using the correct version of transformers. I can't reproduce this and our documentation tests ensure that this works. Also see here for the `sequence` [output](https://github.com/ArthurZucker/transformers/blob/main/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L535) . As you can see, the `generate` method of `TimeSeriesTransformerForPrediction` return this class.<|||||>Hi Arthur, thank you for the response. I revisited the tutorial an =d it turned out I missed on a step. It all good now and this issue can be closed. |
transformers | 21,834 | closed | Fix tf random token masking probability in data collator | # What does this PR do?
Fixes #21803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> @sgugger | 02-28-2023 01:28:56 | 02-28-2023 01:28:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker Do I need to merge it now? |
transformers | 21,833 | closed | Fixed gradient_checkpointing/use_cache bug in blenderbot | # What does this PR do?
Fixes #21737 for blenderbot
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada | 02-27-2023 23:03:07 | 02-27-2023 23:03:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada Thanks for your help! It should be all good |
transformers | 21,832 | closed | Illegal memory access when using Trainer API on GPU with PyTorch 2.0's Inductor backend | ### System Info
### System Information
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 2.0.0a0+git8693604 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (A10G/A100)
- Using distributed or parallel set-up in script?: No
### More information
1. Training is successful in eager mode with batch size of 128
2. Training is successful in dynamo + eager mode with batch size of 128
3. Training is only able to succeed with dynamo + inductor with batch size of 2
4. [Dynamo benchmarks](https://github.com/pytorch/pytorch/blob/master/benchmarks/dynamo/huggingface.py) which use the same HF models without Trainer API are able to succeed.
### Error Signature
| 1677525140847 | Traceback (most recent call last): File "./run_mlm.py", line 694, in <module> |
| 1677525140847 | main() File "./run_mlm.py", line 635, in main |
| 1677525140847 | train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1631, in train |
| 1677525140847 | return inner_training_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1898, in _inner_training_loop |
| 1677525140848 | tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2640, in training_step |
| 1677525140848 | loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2672, in compute_loss |
| 1677525140848 | outputs = model(**inputs) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in __call__ return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn |
| 1677525140848 | return fn(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl |
| 1677525140848 | return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1324, in forward |
| 1677525140848 | @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn |
| 1677525140848 | return fn(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2816, in forward |
| 1677525140848 | return compiled_fn(full_args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1222, in g |
| 1677525140848 | return f(*args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2383, in debug_compiled_function |
| 1677525140848 | return compiled_function(*args) File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1895, in runtime_wrapper |
| 1677525140848 | all_outs = call_func_with_args( File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1256, in call_func_with_args |
| 1677525140848 | out = normalize_as_list(f(*args)) File "/opt/conda/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply |
| 1677525140848 | return super().apply(*args, **kwargs) # type: ignore[misc] File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 2148, in forward |
| 1677525140848 | fw_outs = call_func_with_args( File "/opt/conda/lib/python3.8/site-packages/torch/_functorch/aot_autograd.py", line 1247, in call_func_with_args |
| 1677525140848 | out = normalize_as_list(f(args)) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/compile_fx.py", line 248, in run |
| 1677525140848 | return model(new_inputs) File "/tmp/torchinductor_root/rj/crjch2m3bp6tuhd3s6n2apgbibxay4o6o5jlrfbwsfiokrv2rkep.py", line 4483, in call |
| 1677525140848 | triton__49.run(primals_208, buf513, buf514, 128, 128, grid=grid(128), stream=stream0) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 184, in run |
| 1677525140848 | self.autotune_to_one_config(*args, grid=grid) File "/opt/conda/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper |
| 1677525140848 | r = func(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 171, in autotune_to_one_config timings = { File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 172, in <dictcomp> |
| 1677525140848 | launcher: self.bench(launcher, *cloned_args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/_inductor/triton_ops/autotune.py", line 153, in bench return do_bench(kernel_call, rep=40, fast_flush=True) File "/opt/conda/lib/python3.8/site-packages/triton/testing.py", line 144, in do_bench |
| 1677525140848 | torch.cuda.synchronize() File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 711, in synchronize |
| 1677525140848 | return torch._C._cuda_synchronize() |
| 1677525140848 | RuntimeError: CUDA error: an illegal memory access was encountered
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Training Command: ['python', './run_mlm.py', '--model_name_or_path', 'bert-base-uncased', '--output_dir', '/opt/ml/model', '--fp16', '--dataloader_drop_last', '--dataset_config_name', 'wikitext-2-raw-v1', '--dataset_name', 'wikitext', '--do_train', '--evaluation_strategy', 'no', '--logging_strategy', 'epoch', '--max_seq_length', '128', '--num_train_epochs', '50', '--overwrite_output_dir', '--per_device_train_batch_size', '128', '--save_strategy', 'no', '--torch_compile_backend', 'inductor']
### Expected behavior
No exceptions when using Inductor backend with the trainer API.
| 02-27-2023 22:29:43 | 02-27-2023 22:29:43 | Pretty much the same as #21826 , inductor backend is not yet supported.<|||||>@ArthurZucker Can you elaborate on this ?
Do you mean the Trainer class is using a piece of code that is unsupported by the inductor backend?
Is this something that you will wait on PyTorch to fix, or are you amenable to workarounds in the Trainer class for the inductor backend?
Is there already a root cause that boils down to a few pieces of code?
<|||||>Hi @Lokiiiiii We haven't investigated the issue yet, to make sure if it comes from a bug in PyTorch or in Transformers. Stay tuned!<|||||>I can reproduce this issue and have something smaller:
```py
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling
def main():
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
texts = ["This is a text for the example."] * 16
tokenized_texts = tokenizer(texts, padding="max_length", truncation=True, max_length=128, return_tensors="pt")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)
batch = tokenized_texts
batch["input_ids"], batch["labels"] = data_collator.torch_mask_tokens(batch["input_ids"])
model = torch.compile(model, backend="inductor")
model.to("cuda")
batch = {k: v.to("cuda") for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
loss.backward()
if __name__ == "__main__":
main()
```
Reaching out to the PyTorch folks as it is raised in the model forward, so not something in Transformers at first glance.<|||||>See https://github.com/pytorch/pytorch/issues/95794<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
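Since the report above notes that dynamo + eager trains fine at full batch size, one hedged interim workaround (assuming a transformers version that exposes `torch_compile_backend`, as the CLI command above does) is to keep compilation but switch the backend:
```python
from transformers import TrainingArguments

# Illustrative only: same training setup, but compiled with the eager backend
# until the upstream inductor bug is fixed.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=128,
    torch_compile_backend="eager",
)
print(args.torch_compile_backend)
```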
transformers | 21,831 | closed | HF pipeline throws error | ### System Info
The HF pipeline is actually trying to generate the outputs on CPU despite including `device_map="auto"` in the configuration for the GPT-NeoX 20B model.
A workaround is to use the `model.generate` method and manually move the `input_ids` to GPU.
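A hedged sketch of that workaround (it assumes `accelerate` is installed and enough GPU memory for the checkpoint — substitute a smaller model to experiment):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

# Put the inputs on the same device as the first model shard so sampling ops
# like top-k never run on fp16 CPU tensors.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```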
```
[INFO ] PyProcess - prediction = self.hf_pipeline(data, **parameters)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/text_generation.py", line 187, in __call__
[INFO ] PyProcess - return super().__call__(text_inputs, **kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1063, in __call__
[INFO ] PyProcess - outputs = [output for output in final_iterator]
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1063, in <listcomp>
[INFO ] PyProcess - outputs = [output for output in final_iterator]
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
[INFO ] PyProcess - item = next(self.iterator)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 112, in __next__
[INFO ] PyProcess - processed = self.infer(item, **self.params)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 990, in forward
[INFO ] PyProcess - model_outputs = self._forward(model_inputs, **forward_params)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/text_generation.py", line 229, in _forward
[INFO ] PyProcess - generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
[INFO ] PyProcess - return func(*args, **kwargs)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 1422, in generate
[INFO ] PyProcess - return self.sample(
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py", line 2049, in sample
[INFO ] PyProcess - next_token_scores = logits_warper(input_ids, next_token_scores)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_logits_process.py", line 92, in __call__
[INFO ] PyProcess - scores = processor(input_ids, scores)
[INFO ] PyProcess - File "/usr/local/lib/python3.8/dist-packages/transformers/generation_logits_process.py", line 233, in __call__
[INFO ] PyProcess - indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
[INFO ] PyProcess - RuntimeError: "topk_cpu" not implemented for 'Half'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/deepjavalibrary/djl-serving/blob/master/engines/python/setup/djl_python/huggingface.py#L129 - This is the code we used to test.
### Expected behavior
This error is happening for only GPT Neox 20B https://huggingface.co/EleutherAI/gpt-neox-20b. It worked for Bloom 7B amd gptj models. | 02-27-2023 21:54:45 | 02-27-2023 21:54:45 | Hey! Would you mind providing a minimal reproducing script? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,830 | closed | Temporarily fix ONNX model exporting error | …issues/143
# What does this PR do?
Fix the following error while trying to export an ONNX model:
**TypeError: '<=' not supported between instances of 'tuple' and 'Tensor'**
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-27-2023 20:27:55 | 02-27-2023 20:27:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @amyeroberts <|||||>Thanks for opening this PR @SatyaJandhyalaAtMS !
Could you share a link to the issue this resolves? I'm getting a 404 error for the link in the commit / PR title: https://github.com/microsoft/onnx-converters-private/issues/143<|||||>The python code to reproduce the error is:
```
import onnxruntime as ort
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
import numpy as np
import torch.onnx
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
options = ort.SessionOptions()
# options.log_severity_level = 0
torch.onnx.export(model, inputs['pixel_values'],"swinv2.onnx", export_params=True, opset_version=11, do_constant_folding=True, input_names=["input"], output_names=["output"])
ort_sess = ort.InferenceSession("swinv2.onnx", providers=["CUDAExecutionProvider"], sess_options=options)
ort_outputs=ort_sess.run(None, {"input":inputs['pixel_values'].numpy()})
ort_prediction=int(np.argmax(np.array(ort_outputs[0]).squeeze(), axis=0))
if ort_prediction == predicted_class_idx:
print("Test passed")
else:
print("Test failed")
```
The error is as follows:
Traceback (most recent call last):
File "test_with_ort.py", line 19, in <module>
torch.onnx.export(model, inputs['pixel_values'],"swinv2.onnx", export_params=True, opset_version=11, do_constant_folding=True, input_names=["input"], output_names=["output"])
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 1274, in forward
outputs = self.swinv2(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 1078, in forward
encoder_outputs = self.encoder(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 907, in forward
layer_outputs = layer_module(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 821, in forward
layer_outputs = layer_module(
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 723, in forward
self.set_shift_and_window_size(input_dimensions)
File "/home/sajandhy/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/swinv2/modeling_swinv2.py", line 670, in set_shift_and_window_size
if input_resolution
**TypeError: '<=' not supported between instances of 'tuple' and 'Tensor'**<|||||>My Python environment is
onnxruntime 1.14.1
torch 1.13.1+cu116
torchaudio 0.13.1+cu116
torchvision 0.14.1+cu116
transformers 4.26.1
My platform is
Ubuntu 22.04<|||||>Thanks for updating @SatyaJandhyalaAtMS !
The tests are currently failing on the code quality checks. As the lines of code that have been modified are in a class with a `# Copied from` header, the original code source the comment points to will need to be updated i.e. the equivalent line in `transformers.models.swin.modeling_swin.SwinOutput`.
Then run `make fix-copies` to propagate the change across the repo.
To get the other styling tests to pass, run `make style` to have the code formatted in the expected style.
|
transformers | 21,829 | closed | Add: task guide for zero shot object detection | This PR adds a task guide for zero-shot object detection. Unlike other task guides, there is no fine-tuning and complex preprocessing of custom data. However, the task illustrates different ways of inferencing with OWL-ViT, such as using the pipeline, manual inference with text queries, manual inference for a batch of examples, and image-guided object detection.
The task guide is based on [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) with some additions (e.g. pipeline example) and some modifications. | 02-27-2023 19:36:26 | 02-27-2023 19:36:26 | Images are in this PR https://huggingface.co/datasets/huggingface/documentation-images/discussions/49<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Nice work, thanks for adding this! I especially like the brief intro to the OWL-ViT model. Maybe we can embed one of the OWL-ViT demos (like this [one](https://huggingface.co/spaces/adirik/OWL-ViT)) directly on the page so users can play with it?
Thanks for the suggestion! I embedded demo :) |
transformers | 21,828 | closed | Fix quality with `ruff==0.0.253` | # What does this PR do?
Fix quality with `ruff==0.0.253`.
Merged to avoid CI failing due to the new version of `ruff==0.0.253`.
- The change with this new version is valid, so I decided to go with it instead of pinning an older version.
- The change also works with previous `ruff` versions (`0.0.252` and `0.0.243`), so contributors don't need to upgrade their ruff.
cc @sgugger for comments if any. | 02-27-2023 18:32:33 | 02-27-2023 18:32:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21828). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,827 | closed | Add: task guide for zero shot object detection | This PR adds a task guide for zero-shot object detection. Unlike other task guides, there is no fine-tuning and complex preprocessing of custom data. However, the task illustrates different ways of inferencing with OWL-ViT, such as using the pipeline, manual inference with text queries, manual inference for a batch of examples, and image-guided object detection.
The task guide is based on [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) with some additions (e.g. pipeline example) and some modifications. | 02-27-2023 18:32:26 | 02-27-2023 18:32:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,826 | closed | Faketensor issue when using torch inductor as backend with Trainer API | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-1030-aws-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 2.0.0a0+git45d775c (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the error:
```
pip3 install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
cd examples/pytorch/image-classification
pip install -r requirements.txt
python run_image_classification.py \
--dataset_name food101 --output_dir ./food101_outputs/ \
--remove_unused_columns False --do_train --learning_rate 2e-5 \
--num_train_epochs 1 --report_to none --per_device_train_batch_size 1 \
--logging_strategy steps --logging_steps 10 --save_strategy epoch \
--overwrite_output_dir --torch_compile_backend inductor
```
### Expected behavior
In the forward pass we saw
```
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in __call__
return self.dynamo_ctx(self._orig_mod.__call__)(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 215, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 343, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 530, in transform_code_object
transformations(instructions, code_options)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 311, in transform
tracer.run()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1862, in run
super().run()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 619, in run
and self.step()
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 583, in step
getattr(self, inst.opname)(inst)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1119, in STORE_ATTR
self.output.compile_subgraph(
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 579, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 626, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/_dynamo/output_graph.py", line 713, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.convolution.default(*(FakeTensor(FakeTensor(..., device='meta', size=(1, 3, 224, 224)), cuda:0), tensor([[[[ 1.5585e-02, 5.1153e-02, 5.5507e-02, ..., 8.7095e-02,
1.0724e-01, 1.1972e-01],
[ 1.2528e-02, 1.3805e-02, 1.9514e-02, ..., 8.3470e-02,
3.8538e-02, 8.5673e-02],
[ 1.6835e-02, 1.1395e-02, -4.2227e-03, ..., 7.6267e-02,
3.2913e-02, 3.5811e-02],
...,
[-1.3910e-02, -5.2311e-03, -5.4728e-02, ..., -5.9774e-02,
-6.8372e-02, 1.9745e-02],
[ 3.9740e-02, -4.4819e-02, -2.2511e-02, ..., 1.0378e-02,
-4.0710e-02, 8.1610e-02],
[ 6.9672e-02, 8.1102e-02, 3.7697e-02, ..., 9.5118e-02,
1.1510e-01, 1.5316e-01]],
[[-6.2993e-02, -3.8614e-02, -4.2429e-02, ..., -5.9603e-02,
-2.5599e-02, -2.1458e-02],
[-8.3695e-03, -1.0892e-02, -8.3388e-03, ..., -3.2603e-02,
-4.5211e-02, -3.2238e-04],
[ 4.6061e-02, 3.2819e-02, 1.3283e-02, ..., -3.5023e-02,
-3.9311e-02, -3.1661e-02],
...,
[ 3.6763e-02, 2.0482e-02, -5.9279e-02, ..., -2.3277e-02,
-3.5168e-02, 4.1823e-02],
[ 5.9616e-02, -4.6340e-02, -5.6928e-02, ..., 2.1146e-02,
-2.8068e-02, 8.1620e-02],
[ 3.2009e-02, 1.2772e-02, -5.2345e-02, ..., 4.5677e-02,
6.9799e-02, 9.9707e-02]],
[[-1.1234e-01, -3.7040e-02, 2.3741e-03, ..., -2.5491e-02,
-2.3041e-02, -4.9780e-02],
[-8.1879e-02, -1.8625e-02, 2.3212e-02, ..., 2.8488e-02,
-8.9298e-03, -8.8660e-03],
[-5.3666e-02, -8.0642e-03, 3.1249e-04, ..., 2.4810e-02,
1.9670e-03, -2.3820e-02],
...,
[-2.9322e-02, 2.7237e-02, 7.6372e-03, ..., -5.7884e-03,
-4.6314e-02, -2.3849e-02],
[-1.5772e-02, -6.9045e-02, -3.1001e-02, ..., -1.5394e-02,
-9.2838e-02, -1.6544e-02],
[-6.0532e-02, -5.9463e-02, -1.1458e-01, ..., -8.0818e-02,
-5.7851e-02, -3.6628e-02]]],
[[[-2.9805e-02, 2.8396e-02, 2.7912e-02, ..., 4.0566e-03,
-3.9771e-02, -3.0932e-02],
[-3.8447e-02, 7.3079e-02, 8.8570e-02, ..., 5.9645e-02,
7.1729e-02, -2.3781e-02],
[-6.7763e-02, -9.5788e-03, 2.2610e-02, ..., 6.9609e-02,
6.6766e-02, 2.1204e-02],
...,
[-9.7874e-02, 3.7861e-02, 1.4769e-01, ..., 4.2415e-02,
2.4433e-02, -1.9694e-02],
[-1.0486e-01, 9.1625e-02, 6.1902e-02, ..., 5.4587e-02,
6.1803e-02, -7.3962e-02],
[-8.3595e-02, -3.1282e-02, 4.5430e-03, ..., 1.0579e-01,
9.3442e-03, -3.1741e-02]],
[[-1.3991e-02, 3.7737e-02, 2.1331e-02, ..., -5.3170e-04,
-4.6416e-02, -3.0628e-02],
[-2.4599e-02, 7.4020e-02, 8.8880e-02, ..., 6.2086e-03,
2.5795e-02, -5.7271e-02],
[-3.8102e-02, 1.2179e-02, 4.9777e-02, ..., -1.9636e-03,
-7.1444e-03, -4.5585e-02],
...,
[-1.0808e-01, 1.5686e-02, 1.1100e-01, ..., 5.2311e-02,
4.0869e-02, 2.2863e-02],
[-9.2463e-02, 1.0301e-01, 7.3106e-02, ..., 5.4267e-02,
7.1780e-02, -5.0014e-02],
[-5.0767e-02, 1.1884e-02, 5.5425e-02, ..., 1.0279e-01,
1.1630e-02, -2.9744e-02]],
[[-1.2795e-02, 2.0818e-02, -1.5076e-02, ..., 5.3417e-02,
3.3919e-02, 8.3990e-02],
[-3.6925e-02, 1.5391e-02, -2.9710e-03, ..., 1.6262e-02,
4.1795e-02, 2.0504e-02],
[-6.3379e-02, -4.8387e-02, -3.3549e-02, ..., 1.4104e-02,
4.7213e-03, 1.5434e-02],
...,
[-5.8642e-02, 1.2647e-02, 7.3773e-02, ..., -3.3242e-02,
2.6407e-03, 4.4312e-02],
[-5.1496e-02, 1.1767e-01, 5.7953e-02, ..., -2.6282e-02,
3.3894e-02, -3.7081e-02],
[-3.9964e-02, 3.5065e-02, 7.8275e-02, ..., 2.7511e-02,
-3.0828e-02, -4.1590e-02]]],
[[[-1.5722e-02, -4.7355e-04, -1.0641e-02, ..., 1.7975e-03,
8.2088e-03, 2.2358e-03],
[ 1.3006e-02, 2.2377e-02, 4.6318e-03, ..., -8.6258e-03,
-9.6003e-03, -1.8025e-02],
[ 1.1088e-02, 2.8006e-02, 1.0182e-02, ..., -1.2203e-02,
-1.4415e-02, -2.4993e-02],
...,
[ 1.8237e-02, 1.0154e-02, 4.7651e-03, ..., -3.4567e-03,
2.9223e-03, 1.2099e-02],
[ 2.3695e-02, 5.8175e-03, 6.3596e-03, ..., -5.2218e-03,
-2.5360e-04, 2.2794e-02],
[ 6.2578e-03, -4.3371e-03, -1.8502e-02, ..., -1.2459e-02,
5.3634e-03, 2.5850e-02]],
[[-1.6234e-02, 8.7551e-03, 6.5956e-03, ..., 3.4186e-02,
4.4762e-02, 4.3195e-02],
[-8.9584e-04, 2.0318e-02, 8.5710e-03, ..., 1.4135e-02,
1.1658e-02, 8.4339e-03],
[-9.6351e-03, 2.2278e-02, 8.8168e-03, ..., 3.5157e-03,
3.5562e-03, -5.8602e-03],
...,
[ 2.7858e-03, 2.6561e-03, 1.9480e-03, ..., -2.0423e-03,
1.7712e-03, 9.1982e-03],
[-3.3212e-03, -5.4636e-03, -2.3775e-03, ..., -4.5449e-03,
-1.1336e-03, 1.2337e-02],
[-2.9789e-02, -2.6257e-02, -2.9101e-02, ..., -1.4381e-02,
-8.4865e-04, 9.4145e-03]],
[[-3.9820e-02, -2.2362e-03, -7.1783e-03, ..., -6.4696e-03,
-9.6364e-04, -8.4148e-03],
[-1.7378e-02, 1.6933e-02, 7.9727e-03, ..., -1.1679e-03,
-6.7231e-03, -1.6826e-02],
[-2.1956e-02, 1.7214e-02, 1.0109e-02, ..., 5.0059e-03,
-2.2617e-03, -1.9757e-02],
...,
[ 1.0005e-02, 1.8804e-02, 1.7285e-02, ..., 1.2554e-02,
8.4794e-03, -1.2349e-03],
[ 3.7699e-03, 1.2804e-02, 2.3074e-02, ..., 8.5308e-03,
4.7621e-03, 3.3861e-03],
[-1.5654e-02, 1.4163e-03, 1.6649e-03, ..., 1.0598e-03,
5.9754e-03, 1.9494e-03]]],
...,
[[[-4.0359e-02, 1.7736e-02, 6.0642e-02, ..., -3.0918e-02,
-4.0575e-02, 8.9583e-03],
[-4.5315e-02, 2.3188e-02, 7.3064e-02, ..., -4.0640e-02,
-3.9918e-02, -1.0623e-02],
[-2.8732e-02, 3.8204e-03, 6.9796e-02, ..., -3.4637e-02,
-4.3759e-02, -2.8479e-02],
...,
[ 3.1028e-03, -1.0449e-02, 1.7222e-02, ..., 1.2165e-01,
4.6210e-02, -4.2192e-02],
[ 7.6139e-03, 2.3000e-02, 9.2082e-03, ..., 4.5240e-02,
2.7699e-02, -5.6189e-02],
[ 2.3990e-02, -1.4030e-02, -1.0267e-02, ..., -5.2204e-02,
-6.0534e-02, -7.1400e-02]],
[[-1.1183e-01, -4.2477e-02, 6.3755e-03, ..., -1.0254e-01,
-1.1569e-01, -6.7280e-02],
[-9.4214e-02, -8.5278e-03, 5.2520e-02, ..., -6.8358e-02,
-7.9058e-02, -5.9130e-02],
[-7.1334e-02, -1.6083e-02, 7.3228e-02, ..., -2.8043e-02,
-5.2757e-02, -4.9763e-02],
...,
[ 2.0692e-02, 7.6735e-03, 3.0739e-02, ..., 1.7656e-01,
8.8030e-02, -1.7911e-02],
[ 2.6301e-02, 4.7867e-02, 2.6775e-02, ..., 8.4734e-02,
6.1141e-02, -3.5701e-02],
[ 2.9517e-02, 3.1440e-04, 2.3788e-03, ..., -2.9185e-02,
-4.0404e-02, -5.2867e-02]],
[[-6.7530e-02, -2.4295e-02, -5.6700e-03, ..., -6.6664e-02,
-5.8257e-02, -1.5316e-02],
[-4.9842e-02, -1.2484e-03, 2.4207e-02, ..., -5.5344e-02,
-4.6049e-02, -1.7707e-02],
[-3.7976e-02, -1.2565e-02, 3.6258e-02, ..., -3.3223e-02,
-4.0654e-02, -1.9860e-02],
...,
[ 2.3920e-04, -1.2323e-02, -2.5662e-03, ..., 6.3977e-02,
2.5765e-02, -2.7954e-02],
[ 5.6197e-03, 2.9101e-02, -2.5730e-03, ..., 1.9924e-02,
2.6986e-02, -2.9991e-02],
[ 9.6282e-03, -3.3303e-03, 4.9659e-04, ..., -2.7544e-02,
-3.6161e-02, -4.3468e-02]]],
[[[ 6.2803e-02, 7.9755e-02, 8.1322e-02, ..., -3.7440e-03,
4.9733e-03, -3.9191e-02],
[-4.2880e-02, -4.4536e-02, -4.7230e-02, ..., -7.2959e-02,
-7.5629e-02, -6.4150e-02],
[-6.8180e-02, -5.6262e-02, -6.1507e-02, ..., -4.4115e-02,
-3.9107e-02, -6.6999e-02],
...,
[ 1.7710e-02, 8.7420e-02, 3.3490e-02, ..., 1.2845e-02,
4.8843e-02, 2.8497e-02],
[ 3.1498e-02, 2.7061e-02, 7.8619e-03, ..., 6.8658e-02,
5.4993e-02, 6.3310e-02],
[-9.4789e-02, -6.8985e-02, -1.4324e-01, ..., -6.7484e-03,
4.5337e-02, 2.0077e-02]],
[[ 8.4717e-02, 5.7770e-02, 3.7002e-02, ..., -1.3588e-02,
-7.0228e-03, -6.0760e-02],
[-1.5979e-02, -6.4630e-02, -9.1545e-02, ..., -9.8412e-02,
-9.4532e-02, -9.2499e-02],
[-2.8392e-02, -6.5403e-02, -1.0047e-01, ..., -8.3756e-02,
-6.3615e-02, -9.5850e-02],
...,
[ 5.9997e-02, 1.0705e-01, 4.9321e-02, ..., 3.5627e-02,
5.4496e-02, -1.2566e-02],
[ 6.3830e-02, 2.6649e-02, 7.0561e-03, ..., 9.2897e-02,
5.9108e-02, 2.4363e-02],
[-4.8049e-02, -5.9966e-02, -1.3561e-01, ..., 2.1068e-02,
5.2889e-02, 1.5655e-03]],
[[ 6.2001e-02, 2.0778e-02, 1.1975e-02, ..., -2.0197e-03,
6.9852e-03, -4.6863e-02],
[ 1.3178e-02, -3.5991e-02, -5.1943e-02, ..., -2.9544e-02,
-2.5600e-02, -3.3762e-02],
[ 2.7143e-02, -2.0521e-02, -6.2057e-02, ..., -1.6369e-02,
1.0089e-02, -2.7409e-02],
...,
[ 5.1592e-02, 8.0635e-02, 4.0372e-02, ..., -1.1472e-02,
-1.3918e-02, -9.3905e-02],
[ 8.2424e-02, 2.7950e-02, 3.8630e-02, ..., 2.2609e-02,
-1.9679e-02, -5.3972e-02],
[ 5.0708e-02, 1.6264e-02, -4.1324e-02, ..., -2.9146e-02,
-2.8712e-03, -4.6670e-02]]],
[[[-1.3285e-02, -1.1488e-03, 3.0550e-03, ..., -9.4483e-03,
-8.8926e-03, -9.0441e-04],
[ 2.3043e-03, -4.1523e-03, -5.2203e-03, ..., 9.1216e-05,
-9.0951e-03, 2.5220e-03],
[-6.0988e-03, -1.1074e-02, -5.9025e-03, ..., 1.6161e-03,
-8.8638e-03, -1.3972e-03],
...,
[-5.4505e-03, -4.5738e-03, -1.0316e-03, ..., 3.4947e-04,
-6.1689e-03, -4.7928e-03],
[ 2.6408e-03, -2.8769e-03, -5.0605e-03, ..., 1.3172e-04,
-1.3570e-03, -2.5045e-03],
[-5.3083e-04, -9.0542e-03, -7.9351e-03, ..., 4.1531e-03,
-2.9358e-03, -7.6401e-03]],
[[ 7.8418e-03, 1.8411e-02, 2.0441e-02, ..., 5.7535e-03,
8.6325e-03, 1.8935e-02],
[ 1.7873e-02, 8.4761e-03, 7.8354e-03, ..., 8.8990e-03,
5.7851e-03, 1.6686e-02],
[ 6.2208e-03, -3.4983e-03, 1.7564e-03, ..., 4.4534e-03,
-3.9801e-03, 4.1031e-03],
...,
[ 2.9908e-03, 3.0588e-03, 1.2445e-03, ..., 4.4925e-04,
1.0658e-03, 7.1672e-03],
[ 1.6074e-02, 8.3498e-03, 5.7661e-03, ..., 9.4217e-03,
1.0780e-02, 1.6278e-02],
[ 4.4408e-03, 1.5656e-03, -2.5045e-04, ..., 9.6282e-03,
6.9736e-03, 6.3294e-03]],
[[-7.2360e-03, -1.6965e-03, 3.0952e-03, ..., 1.5861e-03,
-6.5436e-03, 4.5501e-03],
[ 2.8352e-03, -9.9491e-03, -6.2758e-03, ..., -2.5671e-03,
-1.2278e-02, -1.3398e-03],
[ 1.9139e-04, -8.5651e-03, -1.9316e-03, ..., 2.4267e-04,
-1.0509e-02, -5.0672e-03],
...,
[-5.4899e-03, -6.1185e-03, 1.0800e-03, ..., 3.4343e-03,
-4.8832e-03, 7.7482e-04],
[-6.7859e-03, -1.3454e-02, -8.1208e-03, ..., 5.4349e-04,
-7.4328e-03, 3.7456e-04],
[-1.5332e-02, -1.7777e-02, -1.3543e-02, ..., -1.2068e-03,
-1.5574e-02, -8.6474e-03]]]], device='cuda:0',
grad_fn=<BroadcastBackward>), tensor([-1.6090e-02, 1.2174e-02, 2.2797e-01, 4.6908e-02, -1.5499e-01,
8.7547e-02, 1.3502e-01, 1.2800e-02, 8.2932e-02, -4.2087e-01,
3.8552e-03, -2.6860e-02, 1.4235e-02, -5.7877e-03, -3.6805e-02,
-3.5540e-02, 5.5290e-03, 9.2909e-02, 1.2771e-02, -3.0965e-02,
-6.3441e-02, -8.4934e-04, -5.3447e-03, -2.5515e-02, -4.1445e-03,
2.5515e-02, 1.8479e-02, -2.5615e-02, 4.0568e-02, -2.0309e-02,
-2.8299e-03, -5.2244e-03, -2.2531e-02, -1.0226e-03, -1.7576e-02,
1.1028e-03, -2.5013e-02, -3.2375e-02, 1.4297e-02, 1.2362e-02,
-3.5994e-02, 4.0352e-02, -3.7467e-02, -5.9556e-03, -4.2830e-01,
3.5867e-01, 1.1702e-02, -1.1946e-02, 1.0084e-01, -1.4606e-02,
-1.0271e-02, 4.5181e-01, 5.3849e-03, -1.2856e-02, 1.4235e-03,
-1.7222e-03, -2.2668e-02, -2.2627e-03, 3.5256e-02, -1.5487e-01,
-3.0596e-02, 1.7638e-02, -1.6498e-02, -2.2896e-03, 1.1674e-01,
1.4380e-02, 7.1714e-02, 6.4757e-03, 1.4729e-02, 1.0424e-01,
2.1318e-02, 7.3172e-01, -2.7957e-03, 1.0743e-02, -5.3283e-01,
3.7074e-03, -8.4154e-03, -1.1694e-02, -1.3869e-02, -1.8652e-02,
2.4801e-02, 3.8952e-03, 3.9827e-02, 1.4166e-02, -7.6014e-01,
4.1432e-01, -1.9246e-01, 1.1102e-01, -1.4642e-02, 5.7801e-03,
2.8745e-03, -1.7657e-02, -7.7463e-02, -6.4326e-02, 1.2057e-02,
-1.1613e-02, -4.0119e-02, -1.8979e-02, -2.7809e-02, -2.3785e-02,
2.2651e-02, -2.4268e-02, 5.2309e-03, 1.9109e-02, -3.7954e-03,
-1.9735e-02, -2.8117e-02, -5.8240e-02, -1.0201e-01, 6.7421e-04,
4.4703e-02, -3.0308e-04, -3.6796e-02, 2.3805e-04, 3.2304e-01,
-3.5212e-01, -4.4832e-02, 5.7270e-03, 5.8176e-03, -1.7689e-02,
5.5221e-03, -6.2864e-03, 2.8066e-02, 3.8595e-02, -1.3975e-02,
-2.1706e-02, 1.7213e-02, -1.1424e-02, 1.0731e-02, -5.9880e-04,
2.5505e-02, -7.9028e-03, -3.2949e-02, 3.8501e-03, -8.8245e-03,
-3.4819e-03, -1.4605e-02, -9.3169e-03, -6.8412e-02, 9.3665e-03,
-3.4788e-03, 6.8371e-03, 6.4590e-03, -6.6017e-02, 8.7025e-01,
-4.8855e-02, 2.0873e-02, 3.9021e-04, 9.8548e-03, 5.1253e-03,
1.0060e-02, -5.7132e-02, -2.5164e-03, 7.5240e-01, -6.7990e-03,
-4.7859e-01, -6.1399e-03, -1.1962e-03, 1.1866e-03, 3.5237e-03,
4.2073e-02, 5.0811e-03, 3.5187e-02, 7.6161e-03, -1.8199e-02,
-2.5168e-02, 1.0928e-02, -1.8055e-02, -8.7963e-03, 1.2136e-02,
2.9134e-02, -4.7957e-02, 1.9883e-03, -2.1588e-02, -1.2183e-03,
-5.0964e-02, -1.0120e-02, -5.1278e-03, 9.2718e-03, -1.0521e-02,
-9.1500e-01, -6.9778e-03, -1.1852e-05, -2.7351e-02, -6.3665e-03,
2.6623e-02, -1.0697e-02, 1.3013e+00, -4.2087e-03, 3.5010e-03,
-9.0631e-03, -1.2495e-02, -1.1143e-02, -6.4557e-02, -7.4548e-03,
2.5077e-02, 6.0618e-03, 3.8729e-02, 1.9512e-03, -1.5695e-02,
4.1066e-02, 7.1876e-02, 1.1941e-02, 1.1479e-02, -2.1025e-02,
-4.4586e-02, -1.2142e-01, 3.6297e-01, -1.1479e-02, -7.0550e-03,
-3.0732e-03, -1.0475e-01, -3.1307e-02, 2.6413e-02, 1.8010e-04,
-2.9713e-03, 6.5173e-03, -9.2936e-03, -2.7098e-02, 3.4719e-02,
-5.4723e-02, -4.5547e-02, -2.4742e-02, -3.0855e-03, 8.1147e-03,
1.3071e-02, 6.4248e-02, 8.1228e-04, 2.4757e-02, 1.4534e-02,
2.5024e-02, 8.5607e-02, 6.8466e-03, 5.1933e-03, 8.6611e-02,
-4.6842e-01, -5.6616e-02, 1.6048e-01, 6.1547e-03, 2.4924e-02,
-3.3897e-02, 7.5299e-01, -2.8823e-02, 5.1894e-03, 1.5493e-02,
-2.8037e-02, -3.5234e-02, -6.3110e-01, 2.6496e-02, -2.0996e-03,
8.1190e-01, 5.2454e-01, -9.2737e-03, 1.5295e-02, 7.9412e-03,
-4.2548e-02, -9.7852e-03, -2.3154e-02, -9.9202e-03, 8.1742e-03,
2.4342e-05, -8.2740e-03, -5.5718e-03, 6.1778e-01, -2.5267e-02,
1.5646e-02, -4.1497e-02, -1.9631e-02, 8.7076e-02, 8.0120e-03,
-2.8781e-03, -5.4196e-02, -5.0515e-01, -3.8567e-03, -5.6713e-03,
-8.2977e-03, 5.7573e-03, -3.0785e-02, -8.8448e-03, 5.0190e-01,
2.7427e-02, -2.2315e-02, 3.7897e-03, 5.3910e-03, -1.7446e-02,
-6.7904e-02, -6.4136e-02, 3.0637e-02, 1.7055e-02, -4.0486e-03,
4.0048e-04, 8.4181e-03, -2.9910e-03, -1.7335e-02, 2.8932e-02,
3.4725e-02, -3.8263e-02, -6.6158e-03, 2.3055e-02, 3.3611e-03,
-4.1740e-02, 5.7093e-01, 1.0084e-02, -1.0451e-02, 5.0783e-02,
-1.5320e-02, -1.3032e-02, 3.7760e-02, 1.7139e-02, 5.1414e-02,
-1.8133e-02, 4.9087e-03, -1.3765e-01, 1.8895e-02, 1.6932e-02,
2.7277e-03, 1.8141e-02, -6.8402e-03, -1.1778e-01, 1.0169e-02,
1.6853e-01, -1.3144e-04, 4.2427e-02, -2.6230e-02, -2.1131e-02,
4.9175e-02, -1.0456e-02, -2.5849e-02, 4.4606e-02, -1.7229e-05,
-1.1238e-02, 1.5116e-03, 1.2077e-02, 2.6054e-03, 6.9948e-02,
1.5055e-02, -1.9745e-02, 1.9165e-03, -4.6319e-02, -3.2234e-02,
-1.7079e-01, -3.5179e-02, -7.9092e-02, 1.5927e-02, -7.5581e-03,
3.9424e-01, 4.5398e-02, 1.0123e-03, -3.0889e-02, 8.9127e-03,
1.3117e-04, -1.2365e-02, 1.4814e-01, -2.2632e-02, -1.2881e-02,
-3.4497e-02, 8.9060e-03, -3.0747e-01, 1.1530e-02, -5.1192e-03,
2.6323e-02, 3.6447e-03, -9.0179e-02, -1.0916e-03, 1.1193e-02,
-1.7883e-02, -1.8083e-02, 4.2735e-03, -8.7878e-03, 7.8851e-01,
-8.4941e-03, -3.0741e-02, 2.3667e-02, 1.3072e-02, 4.4558e-02,
-5.2301e-03, 8.5091e-05, -2.0978e-02, 5.6533e-03, -4.4536e-02,
-2.2528e-02, 2.7267e-03, 2.7645e-01, -3.3284e-02, -1.3758e-01,
-3.8739e-02, -2.1205e-02, 1.4522e-02, 3.3279e-02, -4.9861e-01,
-2.6866e-03, 4.7418e-02, -1.0412e-02, -5.0259e-03, 5.9364e-02,
1.5291e-02, 1.2832e-02, 9.9840e-04, -4.6879e-01, 6.3109e-03,
-6.6550e-03, -4.7005e-01, 1.1081e-02, 6.3798e-03, -2.3701e-02,
-6.1321e-04, 9.4518e-03, -1.8441e-02, 1.9444e-02, -3.2495e-02,
-3.9736e-02, 1.2732e-02, -1.5738e-03, 1.7463e-02, -1.2696e-01,
3.1286e-01, -5.8906e-02, 1.7822e-02, -5.9794e-02, -1.9892e-02,
6.0129e-03, -2.2110e-02, 8.3146e-02, 1.1826e-03, -1.5764e-01,
-6.6654e-03, -2.4652e-02, 2.9888e-02, 1.3662e-02, 2.2539e-02,
-4.5877e-02, 1.7355e-02, 3.1344e-03, 3.8609e-03, 4.8026e-03,
1.1118e-02, 1.0949e-01, -1.5221e-02, 1.9277e-02, -6.6302e-03,
-7.1109e-03, 7.4188e-04, -6.3697e-03, -3.1777e-02, 1.2641e-03,
5.6136e-03, -1.3858e-02, 5.9672e-01, -2.9905e-02, -7.8330e-01,
8.2061e-04, 1.9875e-03, 4.8229e-02, -4.0310e-03, -7.4141e-01,
-8.2589e-02, 4.6838e-03, -1.1728e-03, -6.8659e-02, -2.1787e-02,
-2.3413e-02, 3.8247e-03, -3.5208e-02, 8.0469e-03, -6.0692e-03,
-1.8382e-02, -6.8114e-04, 8.2483e-01, 4.2232e-02, 2.6186e-02,
-3.5071e-02, 2.9950e-01, -3.9087e-02, 2.8266e-02, -3.4465e-02,
4.0329e-04, -2.1029e-02, 1.3218e-02, 5.7879e-01, -1.1618e-02,
-1.7881e-02, -5.4950e-02, -1.3292e-03, 2.9873e-02, -3.3105e-03,
-1.5792e-01, -1.4564e-02, 3.8663e-02, -8.6169e-02, -1.5880e-01,
-4.9658e-02, -1.6112e-03, 1.9248e-02, -4.6184e-03, -4.1773e-02,
-2.6493e-03, 2.0980e-02, 1.2356e-02, 1.5418e-02, 8.0534e-01,
1.7136e-02, -1.9514e-02, 3.1425e-03, 4.3635e-03, 9.3242e-03,
5.9385e-02, -1.6063e-02, 9.7715e-01, -1.0699e-01, 3.2006e-02,
-1.7259e-02, -2.2930e-02, -8.4327e-01, -1.0589e-02, 4.8237e-02,
9.1879e-03, 8.5088e-01, -3.9914e-03, -6.3607e-02, 2.3347e-02,
-2.6078e-02, -1.2018e-02, -3.9092e-01, 1.0435e-01, -4.1202e-02,
1.4983e-02, 1.4928e-02, 1.6260e-02, 2.1786e-02, 6.1203e-01,
-4.3203e-03, 2.1811e-02, 2.8324e-02, -1.6943e-02, 1.0289e-03,
-4.2235e-03, -3.9832e-02, -3.0189e-02, -3.2481e-03, 1.6847e-02,
-4.3612e-02, 5.2142e-01, -2.0684e-02, 3.4631e-02, 1.9373e-03,
1.2069e-01, -1.9904e-02, 9.1863e-03, 4.1725e-03, -8.4121e-02,
-2.2211e-03, 5.6895e-03, -1.3401e-02, -1.5374e-02, -2.5099e-02,
1.9727e-02, -1.7755e-01, 2.9352e-03, -2.2476e-01, 2.6460e-03,
1.8885e-02, 1.3120e-02, -3.8352e-02, -5.8401e-03, -1.1220e-02,
-1.9525e-02, 1.6225e-02, 1.8265e-02, -1.9113e-02, 4.5447e-01,
8.7089e-03, -6.5228e-03, -9.9888e-03, 6.5381e-02, -1.6652e-02,
-3.9072e-03, -1.2806e-01, -6.4722e-03, 6.2611e-05, -4.1967e-02,
-2.6738e-02, 8.5272e-02, -6.5992e-03, 2.0409e-02, -1.6775e-02,
-8.2542e-02, -9.1505e-03, 2.3265e-01, -2.0393e-02, -2.5408e-02,
3.7184e-01, 1.1421e-02, -1.4207e-04, -1.0234e-01, 1.9930e-03,
6.7073e-02, -1.1965e-02, 6.2451e-04, -4.2905e-02, 1.9731e-03,
-2.7423e-02, -1.0493e-02, -5.5997e-02, 3.5540e-03, 5.0565e-03,
1.5316e-04, 1.9976e-02, -6.1887e-02, -1.9239e-02, -2.8151e-02,
1.4671e-02, 3.8599e-03, -1.0113e-01, 3.7388e-02, 1.2063e-02,
9.1354e-03, 1.3958e-02, 1.5136e-03, -1.1466e-02, -1.6999e-02,
-7.6677e-02, -4.8566e-03, -2.4719e-01, 1.0974e-02, 8.4056e-03,
5.8064e-03, -2.4864e-01, -3.7818e-01, 5.0208e-02, -3.0094e-02,
2.8905e-04, 4.9162e-03, 1.2000e-02, -1.7560e-02, -1.5956e-01,
-9.3537e-03, -3.1023e-02, 7.7834e-01, -8.3942e-01, -9.3088e-03,
-1.3086e-02, 5.0306e-01, -1.4037e-03, -2.2345e-01, 1.4390e-03,
3.5904e-01, -1.2583e-02, 2.1788e-02, 3.1457e-03, -1.3125e-02,
-3.9513e-02, -1.5309e-02, -7.3315e-04, -2.7821e-02, 6.5916e-04,
9.3677e-03, -1.9807e-02, -6.9219e-03, 2.0447e-01, -1.7606e-02,
-2.0071e-02, -1.5081e-02, 3.1485e-01, 8.7187e-03, -2.3410e-02,
-3.4283e-03, -5.5052e-03, -5.2397e-02, 4.0907e-02, 1.0465e-03,
-4.0619e-01, 2.4019e-02, 5.6711e-01, -8.7051e-02, -5.8143e-03,
-6.2758e-03, 6.5046e-03, 5.2147e-01, 2.7435e-01, -1.7838e-03,
1.8702e-02, 9.8643e-03, 2.2815e-02, -1.3442e-02, -1.9221e-03,
-2.5178e-02, 2.8828e-01, -9.8848e-03, -8.6106e-03, -1.6368e-01,
8.5245e-03, 2.3074e-02, 1.0927e-02, 2.3362e-03, -5.0312e-03,
-1.4515e-03, 6.0329e-03, 1.7541e-03, 2.2712e-02, -6.2920e-02,
1.9621e-02, -5.9920e-03, 9.3095e-03, -1.2057e-02, -3.4766e-02,
7.0962e-02, 3.8835e-01, -6.9094e-03, 1.4411e-02, 2.9177e-02,
5.3793e-03, -2.1615e-02, 2.6376e-02, -1.1585e-03, 4.0472e-02,
1.1851e-02, 1.5705e-02, 3.8531e-02, 6.8253e-03, 1.0445e-02,
-9.5043e-03, -1.3340e-02, 6.6893e-03, 7.8108e-03, 8.2606e-03,
7.3462e-02, 4.5651e-01, -1.9310e-02, -5.4200e-02, -3.6836e-03,
8.5394e-03, 5.6307e-03, -2.4418e-02, -7.0981e-03, -2.5901e-01,
-2.2678e-02, 6.2244e-02, 2.3416e-02, 1.8405e-03, -1.6070e-02,
-2.2669e-03, -3.3542e-02, 5.1731e-01, -2.7881e-02, -7.9796e-02,
4.5613e-03, -4.0021e-03, 4.8400e-03, -3.7174e-03, 2.0248e-02,
2.0703e-02, 6.0141e-03, 3.3654e-02, -1.0999e-02, -4.6471e-01,
-2.7094e-01, -3.0672e-02, 6.7943e-03, 4.8118e-02, -1.1301e+00,
2.7343e-03, -3.5806e-02, -1.0150e-01, -5.5286e-03, -5.1283e-03,
-1.7647e-03, -1.4113e-02, -4.1544e-01], device='cuda:0',
grad_fn=<BroadcastBackward>), [16, 16], [0, 0], [1, 1], False, [0, 0], 1), **{})
While executing %self_vit_embeddings_patch_embeddings_projection : [#users=1] = call_module[target=self_vit_embeddings_patch_embeddings_projection](args = (%pixel_values,), kwargs = {})
Original traceback:
File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 175, in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 117, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 573, in forward
embedding_output = self.embeddings(
| File "/home/ubuntu/anaconda3/envs/py38_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
| File "/home/ubuntu/transformers/src/transformers/models/vit/modeling_vit.py", line 787, in forward
outputs = self.vit(
``` | 02-27-2023 18:23:45 | 02-27-2023 18:23:45 | Diving into this issue, the forward pass of a ViT model (or also ResNet) segfaults on my side (I don't have your error message, but you are using multiple GPUs with DataParallel if I read the traceback correctly, which is probably not supported). I'll reach out to the PyTorch team.<|||||>Ok, the segmentation fault actually came from a mix of torchvision stable and torch nightlies. With
```
pip3 install --pre torch torchvision --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117
```
I don't get the segfaults when running the forward passes and I can run the example on one GPU with torch inductor.<|||||>Thanks for checking in. I just tried to restrict the use to a single GPU, the original FakeTensor issue is gone and training runs normally.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
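For reference, the single-GPU workaround mentioned in the resolution above can be reproduced from Python before anything CUDA-related is initialised (a minimal sketch; the device index `0` is an assumption):

```python
import os

# Pin the run to one GPU so the Trainer does not wrap the model in DataParallel.
# This must run before torch/transformers initialise CUDA, e.g. at the very top of the script;
# the shell equivalent would be `CUDA_VISIBLE_DEVICES=0 python run_image_classification.py ...`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```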
transformers | 21,825 | closed | Rename `MobileViTModelTest` to `TFMobileViTModelTest` | # What does this PR do?
@sayakpaul Let's give TF a bit more love ❤️.
(joking aside, having consistent and proper prefixes makes things easier) | 02-27-2023 17:51:10 | 02-27-2023 17:51:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>No worry :-) |
transformers | 21,824 | closed | TTS fine-tuning for SpeechT5 | # What does this PR do?
Adds fine-tuning support for SpeechT5, in particular the TTS model.
The loss function is a combination of L1 loss for the mel-spectrograms, BCE for the stop token prediction, and (optionally) guided attention loss to persuade the cross-attentions to be diagonal.
The STFT feature extraction has been sped up, which also means it currently assumes the frame size is a power of two and throws an error otherwise.
The feature extractor no longer outputs a `stop_labels` target. Padded areas in the spectrogram target are assumed to have the value -100 during training; from this the stop labels are computed automatically.
Various other small fixes to the tokenizer, processor, etc. to support fine-tuning.
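For illustration, the combined objective described above can be sketched roughly as follows. This is a simplified sketch and not the exact code added in this PR; the tensor shapes (batch, frames, n_mels), the masking details and the placement of the optional guided attention term are assumptions.

```python
import torch
import torch.nn.functional as F

def speecht5_tts_loss(pred_spectrogram, pred_stop_logits, target_spectrogram, stop_weight=1.0):
    # Padded frames in the target are marked with -100; derive the padding mask from that convention.
    padding_mask = (target_spectrogram == -100.0).all(dim=-1)   # (batch, frames)
    valid = ~padding_mask

    # L1 loss on the mel-spectrogram, computed only over non-padded frames.
    l1 = F.l1_loss(pred_spectrogram[valid], target_spectrogram[valid])

    # Stop-token targets are derived automatically: 1.0 on padded frames, 0.0 elsewhere.
    stop_targets = padding_mask.float()
    bce = F.binary_cross_entropy_with_logits(pred_stop_logits, stop_targets)

    # An optional guided attention term would be added to the sum here.
    return l1 + stop_weight * bce
```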
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-27-2023 16:06:00 | 02-27-2023 16:06:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Requesting review from @ArthurZucker for the custom STFT / log-Mel feature extraction components (`feature_extraction_speecht5.py` is the file of interest)<|||||>Gently pinging @ArthurZucker :)<|||||>Will review in 1h! Sorry for the delay <|||||>> * Have the slow integration tests for the SpeechT5 models been run to check outputs are the same with the processing updates?
The outputs are not the same because the processing of the labels changed. But that's OK since the labels weren't used up to this point anyway.
> * Am I right in understanding `stop_labels` were never used (and so removal doesn't affect things?)
Correct.
> * With `reduction_factor` being moved to `shift_spectrograms_right`, does this effectively mean the `input_values` output from the processor has changed for the same config?
It didn't affect the `input_values`, only the labels. So nothing changed there for the normal operation of the model.<|||||>@amyeroberts If you're OK with the changes, I think this can be merged now. The failing tests seem unrelated to SpeechT5.<|||||>I'm pretty sure no one was using any of these properties before, since we only released SpeechT5 very recently and no one would have used it for training yet. Adding deprecation warnings seems excessive to me in this case.<|||||>OK, put frame_signal_scale and reduction_factor back and added a deprecation warning.<|||||>If you're all happy with it, feel free to merge (I don't have rights for that). π <|||||>@hollance - sorry, my bad, I thought you did! |
transformers | 21,823 | closed | Make Slack CI reporting stronger | # What does this PR do?
Make Slack CI reporting stronger.
The most important change in this PR is to **use a token when we need to grab some GitHub workflow/jobs information** using api call, like
```python
https://api.github.com/repos/huggingface/transformers/actions/runs
```
to get all job links.
**This could avoid reaching the rate limit in CI runs and keep the CI reporting working.**
Such error occurred once on 2023/02/24, see [this run](https://github.com/huggingface/transformers/actions/runs/4258755021/jobs/7424107028). The log shows `Unknown error, could not fetch links. 'jobs'`, but the underlying reason (I strongly believe) is the rate limit is reached, and the api call returns 2 keys `message` and `documentation` without the key `jobs`.
The other changes are just to make things better too.
| 02-27-2023 15:56:49 | 02-27-2023 15:56:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,822 | closed | Extend Callback API for remote execution of ClearML Experiments | ### Feature request
Hi!
Not that long ago, we added support for the ClearML experiment manager as a training callback, and the general feedback has been good so far! However, ClearML is more than an experiment manager alone.
In order to allow users to quickly experiment, ClearML can clone an existing experiment from the UI and then override the originally captured hyperparameters. It does this by injecting the new parameter values into the code at runtime. A user can then schedule and run this edited experiment clone on a remote machine.
But in order for this functionality to work properly, ClearML has to be able to initialize, access and overwrite the training parameters even before they are first used by the Trainer. The current callback implementation does not allow this.
Do you think this is something worth considering to add? I suspect it's rather easy to add a new, very early callback route, but it sounds harder to me to allow a callback to override the training arguments. What do you think?
### Motivation
We received user feedback on our own support slack channel of users trying to run Transformers remotely, but the parameters not being overridden. More advanced ClearML functionality like pipelines and HPO depend on this functionality to properly work.
### Your contribution
We'd be very willing to make a PR, this issue is meant to discuss if you agree that it can be properly added and if so, how you would like to see it practically :) Thank you for the consideration! | 02-27-2023 14:31:33 | 02-27-2023 14:31:33 | Hi there. The `Trainer` is not allowed to modify its own `TrainingArguments`, a design choice we made so that reproducibility and resuming from a checkpoint work properly. This is why the callbacks are also not allowed to change the training arguments. It's probably best for this use case if you simply subclass the `Trainer` API and either add a new method or override what you need.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
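To illustrate the subclassing route suggested in the reply above, a minimal sketch might look like the following. The class name and the `overrides` hook are assumptions for illustration, not an official API.

```python
import dataclasses
from transformers import Trainer

class RemoteOverridableTrainer(Trainer):
    """Hypothetical sketch: apply externally supplied hyperparameter overrides before the
    Trainer ever uses its TrainingArguments, instead of mutating them from a callback
    (which is intentionally not allowed)."""

    def __init__(self, *args, overrides=None, **kwargs):
        training_args = kwargs.get("args")
        if overrides and training_args is not None:
            # TrainingArguments is a dataclass, so build a fresh instance with the
            # overridden fields rather than mutating the original in place.
            kwargs["args"] = dataclasses.replace(training_args, **overrides)
        super().__init__(*args, **kwargs)

# Usage sketch:
# trainer = RemoteOverridableTrainer(model=model, args=args, overrides={"learning_rate": 1e-4})
```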
transformers | 21,821 | closed | Two tfevent files are being generated for each run of trainer | ### System Info
Each run of the trainer generates two tfevent files; it looks like this:
/runs
--Feb27_09-46-42_...
----events.out.tfevents....0
----/1677491207.0429652
------events.out.tfevents....1
When reading these files with TensorBoard I don't get any output from the .1 file. How can I get rid of it (because these clutter my TensorBoard), or get actual data from it?
Thanks in advance
@sgugger
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply starting any training.
### Expected behavior
One tfevents file or a valid output from the .1 file. | 02-27-2023 11:16:48 | 02-27-2023 11:16:48 | I have no idea. Let us know if you find the reason/how to fix it!<|||||>also face this issue by adding the tensorboard callback in the examples/language_modeling/run_mlm.py<|||||>Face the same issue:
- transformers: 4.26.1
- tensorboard: 2.12.0<|||||>Also facing this issue
As a workaround, I use this command to delete the duplicated directory if somebody is really annoyed by this.
```bash
find . -type d -name "*[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].[0-9][0-9][0-9][0-9][0-9][0-9]" -exec rm -rv {} \;
```
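A rough Python equivalent of the same cleanup (a sketch only; the timestamp-directory pattern is approximated):

```python
import re
import shutil
from pathlib import Path

# Remove the timestamp-named subdirectories (e.g. "1677491207.0429652")
# that hold the extra event file.
pattern = re.compile(r"^\d{10}\.\d+$")
for path in list(Path(".").rglob("*")):
    if path.exists() and path.is_dir() and pattern.match(path.name):
        shutil.rmtree(path)
```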
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,820 | closed | rag-end2end-retriever Training Time Results | ### Feature request
Hi,
I would suggest including the training runtime for the new version of [RAG](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag-end2end-retriever) that is end-to-end. The hyperparameters for the results at the end of the README.md (under the "Comparison of end2end RAG (including DPR finetuning) VS original-RAG" section) are listed below, but I have no idea of the training time for these experiments on SQUAD. Currently my runtime on 2 GPUs is looking like months, so I want to know if there's something I've missed that's causing training to be so slow, or if it really takes this long to run RAG end-to-end.
--gpus 4
--train_batch_size 4
--eval_batch_size
--max_source_length 128
--max_target_length 25
--val_max_target_length 25
--test_max_target_length 25
--label_smoothing 0.1
--dropout 0.1
--attention_dropout 0.1
--weight_decay 0.001
--adam_epsilon 1e-08
--max_grad_norm 0.1
--lr_scheduler polynomial
--learning_rate 3e-05
--num_train_epochs 10
--warmup_steps 500
--gradient_accumulation_steps 4
--distributed_retriever ray
--num_retrieval_workers 4
Thanks, James
### Motivation
Saves a lot of time when researchers are deciding whether they have enough resources to build on this work. Also, suggestions on how to run training with fewer resources would be useful.
### Your contribution
Not sure it requires much help, apart from the original authors (or at least whoever ran the experiments corresponding to the results in the readme) including training time numbers in the readme results section. | 02-27-2023 11:12:45 | 02-27-2023 11:12:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,819 | closed | Add Seaformer model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21668
Seaformer is a two-branch architecture with Squeeze enhanced Axial Transformer for semantic segmentation on mobile devices.
<br>
Supersedes #21774
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #21668
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@alaradirik thanks for offering help with this PR, please let me know about any changes required.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-27-2023 11:01:29 | 02-27-2023 11:01:29 | Hi @inderpreetsingh01, thank you! You can ping me once the PR is ready is to be reviewed.
You can follow the [official guidelines](https://huggingface.co/docs/transformers/add_new_model) to learn how to prepare the configuration, image processor and modeling files to replicate the original work such that forward propagating an image through the HF and original implementation yields the same results.<|||||>> # What does this PR do?
> Fixes #21668 Seaformer is a two-branch architecture with Squeeze enhanced Axial Transformer for semantic segmentation on mobile devices. Supersedes #21774
>
> ## Before submitting
> * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
> * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
> Pull Request section?
> * [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [Add SeaFormer model #21668](https://github.com/huggingface/transformers/issues/21668)
> * [x] Did you make sure to update the documentation with your changes? Here are the
> [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
> [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
> * [ ] Did you write any new necessary tests?
>
> ## Who can review?
> @alaradirik thanks for offering help with this PR, please let me know about any changes required.
The PR is just initialized using SegFormer, I can do a review once the SeaFormer model is implemented.<|||||>Hi @alaradirik, I have added seaformer implementation in modeling file and updated the conversion and configuration scripts, I have ran a forward pass in notebook and output is same as the original seaformer model. Can you please review it and let me know of any changes required? I am yet to do the testing part. <|||||>Hi @alaradirik thanks for the detailed review :) I have uploaded the converted model to the hub here Inderpreet01/seaformer-semantic-segmentation-large, will work on your comments and update the pr.
Thanks <|||||>> Hi @alaradirik thanks for the detailed review :) I have uploaded the converted model to the hub here Inderpreet01/seaformer-semantic-segmentation-large, will work on your comments and update the pr. Thanks
Thank you! Feel free to ping me when you'd like me to do the final review<|||||>Hi @alaradirik I have worked on the changes you mentioned, two tests are failing in test_modeling_seaformer.py
SeaformerModelTest::test_initialization - AssertionError: -6.169999778649071e-06 not found in [0.0, 1.0]
I have normally initialized the parameters so negative values are expected.
SeaformerModelTest::test_config - ValueError: The following keys were not properly set in the config:
label2id and id2label are having 150 items but it is expecting 1 item in test_configuration_common.py [config_common_kwargs](https://github.com/huggingface/transformers/blob/c612628045822f909020f7eb6784c79700813eda/tests/test_configuration_common.py#L78-L79) dictionary is having id2label and label2id key dictionary with one item as value.
Can you please help me with them thanks.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21819). All of your documentation changes will be reflected on that endpoint.<|||||>Also I have worked on the checks and most of them are successful, will need your help with the remaining three checks. thanks.<|||||>> Also I have worked on the checks and most of them are successful, will need your help with the remaining three checks. thanks.
Hi @inderpreetsingh01, I'll be taking a look shortly!<|||||>Hi @inderpreetsingh01, I took a look at the code and failed tests and saw that some of the failures are due to unrelated models. Could you rebase to main by clicking on the _Synch fork_ button on your [branch](https://github.com/inderpreetsingh01/transformers/tree/add_seaformer_model)?
The modeling test failure stemming from the label mapping is probably just due to setting a `num_labels` attribute within `SeaformerConfig`. All config classes inherit from the `PretrainedConfig` class, which computes the `num_labels` based on the `id2label` and `label2id` attributes, which are initialized to have 2 labels by default. You should remove the `num_labels` attribute and overwrite the default `id2label` and `label2id` attributes within the conversion script. You can take a look at the configuration, conversion and test scripts of MaskFormer and Mask2Former to see how that's done.
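For illustration, the label mapping described above is typically set up along these lines in a conversion script. This is a sketch: the label-files dataset and the ADE20K-style 150-class file are common choices in existing conversion scripts but are assumptions here, and `PretrainedConfig` is used only as a stand-in for the model-specific config being built.

```python
import json
from huggingface_hub import hf_hub_download
from transformers import PretrainedConfig

config = PretrainedConfig()  # stand-in for the model-specific config in the conversion script

# Load a 150-class id2label mapping and attach it to the config.
repo_id = "huggingface/label-files"          # dataset repo holding id2label json files
filename = "ade20k-id2label.json"            # assumed label file for this example
id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r"))
id2label = {int(k): v for k, v in id2label.items()}

config.id2label = id2label
config.label2id = {v: k for k, v in id2label.items()}
# `config.num_labels` is then derived automatically by PretrainedConfig (here: 150).
```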
Hope this helps!
<|||||>Hi @alaradirik, thanks for your response, removing `num_labels` from config has resolved that testcase, can you please help with this test case as well
`SeaformerModelTest::test_initialization - AssertionError: -6.169999778649071e-06 not found in [0.0, 1.0]`
I have normally initialized the parameters so negative values are expected.
I have looked at maskformer and segformer but not able to figure this out.<|||||>actually this test is getting skipped in segformer model which also initializes weights normally.<|||||>> actually this test is getting skipped in segformer model which also initializes weights normally.
Hi @inderpreetsingh01, sorry for my late reply, I was off due to moving. You can overwrite the test by creating a test with the same name - `test_initialization` - as the weight initialization is inline with the original model. You can take a look at common test functions defined over [here](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_common.py#L510) to see what this test does.<|||||>Hi @alaradirik thanks for reply, where should i create this test with the same name?<|||||>Hi @alaradirik can you please do the final review? thanks<|||||>@inderpreetsingh01 Thanks for adding this model! Ping me when the PR is ready for review (once all of @alaradirik's comments have been addressed and tests are passing). <|||||>@alaradirik thanks for the review, @amyeroberts sure will ping you once model is ready<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,818 | closed | Fix gradient checkpointing bug in git | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
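For context, a minimal sketch of the guard pattern these gradient-checkpointing fixes typically add inside the model's forward pass (an illustration, not the literal diff in this PR):

```python
import logging

logger = logging.getLogger(__name__)

def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    """Illustrative guard: caching and gradient checkpointing cannot be combined."""
    if gradient_checkpointing and training and use_cache:
        logger.warning(
            "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
        )
        use_cache = False
    return use_cache
```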
cc @younesbelkada or @gante | 02-27-2023 10:53:14 | 02-27-2023 10:53:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,817 | closed | [`Blip2`] Add `Blip2Model` | # What does this PR do?
This PR adds a new class `Blip2Model`, so that the model can be mapped in the `AutoModel` mapping and also used by users who want to conveniently extract text, image, and the so-called q-former features from the model.
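A quick sketch of that feature-extraction usage (the image path is a placeholder; the checkpoint is the one used elsewhere in the PR):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Blip2Model

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b")

inputs = processor(images=Image.open("some_image.jpg"), return_tensors="pt")
with torch.no_grad():
    image_features = model.get_image_features(**inputs)
    qformer_features = model.get_qformer_features(**inputs)
    text_features = model.get_text_features(**processor.tokenizer(["a photo of a cat"], return_tensors="pt"))
```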
This PR also addresses this comment: https://github.com/huggingface/transformers/pull/21708#pullrequestreview-1308704909
I decided to still keep `AutoModelForCausalLM` & `AutoModelForSeq2SeqLM` for `self.language_model` as using `AutoModel` there leads to keys that are not properly loaded from the Hub. Let me know if you think that this is a mistake and should be addressed differently
cc @sgugger @MKhalusova @stevhliu
| 02-27-2023 10:39:48 | 02-27-2023 10:39:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! Once it's merged, I'll create a small PR to update the troubleshooting section. |
transformers | 21,816 | closed | Fix gradient checkpointing imagegpt | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 02-27-2023 10:04:58 | 02-27-2023 10:04:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,815 | closed | Fix gradient checkpointing bug in gptneox | # What does this PR do?
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 02-27-2023 09:34:41 | 02-27-2023 09:34:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Apologies, hand't pushed the latest commit. All done now!<|||||>Awesome, thank you for the contribution <3 |
transformers | 21,814 | closed | [DETR and friends] Remove is_timm_available | # What does this PR do?
This PR:
- removes the `is_timm_available` dependency check for DETR and friends, uses `is_torch_available` instead and uses `requires_backends["timm"]` in case `config.use_timm_backbone=True`.
- adapts DETR's conversion script to make DETR work with our `AutoBackbone` class, rather than the timm backbone. This way one can use DETR by only installing `Transformers` (see the usage sketch right after this list).
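A minimal usage sketch of the non-timm path described above (the default backbone behavior noted in the comment is an assumption):

```python
from transformers import DetrConfig, DetrForObjectDetection

config = DetrConfig(use_timm_backbone=False)  # assumed to fall back to a native ResNet backbone_config
model = DetrForObjectDetection(config)        # no timm install required in this mode
```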
To do:
- [x] upload checkpoint to the hub
- [x] add integration test | 02-27-2023 09:34:14 | 02-27-2023 09:34:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There is a pretty big conflict in the test modeling DETR file, can you fix it Niels? |
transformers | 21,813 | closed | Error when using BART for Prefix Tuning. Replace `view` with `reshape` in `BartAttention`? | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using Prefix Tuning with BART in PEFT, an error occurs for some edge cases, see [#129](https://github.com/huggingface/peft/issues/129#issue-1598538584), with a suggestion to replace `view` with `reshape` in `BartAttention`.
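For intuition, a small self-contained illustration of why the suggested change matters (not the PEFT repro itself): `view` requires contiguous memory, which prefix/past-key-value concatenation can break, while `reshape` copies when needed.

```python
import torch

x = torch.randn(2, 8, 16).transpose(0, 1)  # non-contiguous tensor
try:
    x.view(16, 16)
except RuntimeError as err:
    print("view fails on non-contiguous tensors:", err)
print(x.reshape(16, 16).shape)  # reshape copies when needed and succeeds
```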
### Expected behavior
I would expect the example code provided in [#129](https://github.com/huggingface/peft/issues/129#issue-1598538584) to work regardless of the length of the input. | 02-27-2023 09:17:29 | 02-27-2023 09:17:29 | Hello! Thanks a lot for reporting. Will open a PR to fix this π
<|||||>@ArthurZucker It seems in GPTJ there also have the same problem when using prefix-tuning trained model to generate text
```
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/generation/utils.py", line 1391, in generate
return self.greedy_search(
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/generation/utils.py", line 2179, in greedy_search
outputs = self(
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 813, in forward
transformer_outputs = self.transformer(
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 575, in forward
position_ids = position_ids.view(-1, input_shape[-1])
RuntimeError: shape '[-1, 108]' is invalid for input of size 128
```
___________________________________________________________________________
update
when not passing 'attention_mask', the error changed to:
```
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 302, in forward
attn_outputs = self.attn(
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 251, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/root/miniconda3/envs/gpt_fine_tune/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 176, in _attn
attn_weights = attn_weights + attention_mask
RuntimeError: The size of tensor a (128) must match the size of tensor b (108) at non-singleton dimension 3
```
and I use num_virtual_tokens=20, which seems is a problem of `PEFT`?<|||||>Hey, not really sure this is the same, the error does not involve having to replace `view` with `reshape`. You seem to have a problem with the positional ids. They are deprecated see #21869. <|||||>> Hey, not really sure this is the same, the error does not involve having to replace `view` with `reshape`. You seem to have a problem with the positional ids. They are deprecated see #21869.
yeah it seems not the same root cause, I will turn to PEFT to find resolution, thanks for your reply!
-------------------------------------------------------------------------------------------------------------------
the problem solved withou any code changing but just install transformers' main branch from source |
transformers | 21,812 | closed | update FSDP and add XLA-FSDP documentation | # What does this PR do?
1. update FSDP and add XLA-FSDP documentation | 02-27-2023 06:46:34 | 02-27-2023 06:46:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @pacman100 and @sgugger ! This is exciting! |
transformers | 21,811 | closed | Fix the issue of blip model returning loss even when the label is not provided. | # What does this PR do?
Fixes #21510
@NielsRogge | 02-27-2023 05:45:38 | 02-27-2023 05:45:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada and @amyeroberts <|||||>>
@younesbelkada
One of the tests (test_inference_image_captioning_fp16) is failing; I'm not sure if it is related to my changes.
<|||||>Hi @raghavanone
Hum I just went through the daily CI report and this test seems to be not failing on our end, can you share with us the traceback of the error?<|||||>> Hi @raghavanone Hum I just went through the daily CI report and this test seems to be not failing on our end, can you share with us the traceback of the error?
```
tests/models/blip/test_modeling_blip.py:1113 (BlipModelIntegrationTest.test_inference_image_captioning_fp16)
self = <tests.models.blip.test_modeling_blip.BlipModelIntegrationTest testMethod=test_inference_image_captioning_fp16>
def test_inference_image_captioning_fp16(self):
model = BlipForConditionalGeneration.from_pretrained(
"Salesforce/blip-image-captioning-base", torch_dtype=torch.float16
).to(torch_device)
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
image = prepare_img()
# image only
inputs = processor(images=image, return_tensors="pt").to(torch_device, torch.float16)
> predictions = model.generate(**inputs)
tests/models/blip/test_modeling_blip.py:1124:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27: in decorate_context
return func(*args, **kwargs)
src/transformers/models/blip/modeling_blip.py:1068: in generate
vision_outputs = self.vision_model(
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/blip/modeling_blip.py:694: in forward
hidden_states = self.embeddings(pixel_values)
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/blip/modeling_blip.py:241: in forward
patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/conv.py:463: in forward
return self._conv_forward(input, self.weight, self.bias)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
input = tensor([[[[ 0.8647, 0.9229, 0.9375, ..., 1.7549, 1.7549, 1.7549],
[ 0.9082, 0.9375, 0.9521, ..., 1....2856, -0.3569],
[-0.3142, -0.3425, -0.3569, ..., -0.3000, -0.3569, -0.3994]]]],
dtype=torch.float16)
weight = Parameter containing:
tensor([[[[ 3.3875e-03, 1.4102e-04, 7.0906e-04, ..., -4.6539e-03,
1.2560e-03, -5....3.7727e-03, ..., -1.3084e-03,
4.8304e-04, 7.3357e-03]]]], dtype=torch.float16,
requires_grad=True)
bias = Parameter containing:
tensor([ 7.6477e-02, 7.6233e-02, 2.6343e-01, 2.6718e-02, 3.8727e-02,
1.2962e-02, -2....6932e-02, -9.2529e-02,
7.5012e-02, 6.4812e-03, -1.7303e-02], dtype=torch.float16,
requires_grad=True)
def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
if self.padding_mode != 'zeros':
return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
weight, bias, self.stride,
_pair(0), self.dilation, self.groups)
> return F.conv2d(input, weight, bias, self.stride,
self.padding, self.dilation, self.groups)
E RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
/opt/homebrew/Caskroom/miniforge/base/envs/hf_dev/lib/python3.8/site-packages/torch/nn/modules/conv.py:459: RuntimeError
```<|||||>I see, you are not running the tests on GPU, if you don't have access to any GPU I can run the slow test for you
(Also we might need to add `require_torch_gpu` decorator on this test, if you could also add it in this PR it would be great π )<|||||>> test_inference_image_captioning_fp16
Oh, I just realised that, I am adding the tag. <|||||>@younesbelkada Did that test fail in your setup ? Is there something I have to fix ? |
transformers | 21,810 | closed | fsmt Tokenizer.save_vocabulary Bug | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using FSMT and trying to reproduce the behavior of the basic transformer on the WMT16 translation task with [run_translation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). It turns out that during training, both the validation loss and the training loss go down normally, and the validation BLEU scores increase gradually at the beginning, but then suddenly drop a lot and collapse to ~0. After some investigation, I found that the fsmt_tokenizer keeps complaining each time I try to save the training state.
You should be able to reproduce the complaint as easily as by running the short script below:
```
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
tokenizer = AutoTokenizer.from_pretrained("allenai/wmt16-en-de-12-1")
tokenizer.save_vocabulary("./tmp")
```
### Expected behavior
```
Saving vocabulary to ./tmp/merges.txt: BPE merge indices are not consecutive. Please check that the tokenizer is not corrupted!
```
And the training curves valid BLEU looks like [this](https://api.wandb.ai/links/alanlee/2zddajjr) and valid loss looks like [this](https://api.wandb.ai/links/alanlee/h77vb3bf), which show that there should be no gradient explosion but still broken. | 02-27-2023 03:44:25 | 02-27-2023 03:44:25 | Hey, apparently, it is a known issue that the vocabulary has some holes. The problem with your training is probably not related as you mention that it `suddendly` drops and corrupts. Meaning up until some point everything works well no? π <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,809 | closed | How to set language in Whisper pipeline for audio transcription? | ### Problem
Hello,
I followed this notebook for Whisper pipelines. https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Ca4YYdtATxzo
Here, I want to use speech transcription with the openai/whisper-large-v2 model using the pipeline. By using WhisperProcessor, we can set the language, but this has a disadvantage for audio files longer than 30 seconds. I used the code below and I can set the language there.
```
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
inputs = processor.feature_extractor(speech_data, return_tensors="pt", sampling_rate=16_000).input_features.to(device)
generate_ids = model.generate(inputs, max_length=480_000, language="<|tr|>", task="transcribe", return_timestamps=True)
results = processor.tokenizer.decode(generate_ids[0], decode_with_timestamps=True, output_offsets=True)
```
Long audio files can be processed in the pipeline by setting chunk_length as below. But in the pipeline, I couldn't set the language. Therefore, I have gotten English results in my Turkish speech data.
```
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cpu')
pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32)
```
Is there a way to set the language?
### System Info
docker image:
- pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime
Transformers Version:
`transformers==v4.27dev`
### Who can help?
@sanchit-gandhi @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cpu')
pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32)
```
### Expected behavior
```
Label: "Bazı Türkçe kelimeler."
Prediction: "Some Turkish words."
``` | 02-26-2023 17:41:14 | 02-26-2023 17:41:14 | @ArthurZucker <|||||>You can add `generate_kwargs = {"language":"<|tr|>","task": "transcribe"},` to your pipeline initialization and it should work. <|||||>Updated the notebook with the following new line :
> `pipe(speech_file, generate_kwargs = {"task":"transcribe", "language":"<|fr|>"} )`<|||||>Voila! I am able to set the language by using `generate_kwargs = {"language":"<|tr|>","task": "transcribe"}` in pipeline initialization. Thanks.<|||||>Hello, I got same problem. But `generate_kwargs = {"language":"<|tr|>","task": "transcribe"}` is not work for me.
```python
ValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)
```
Here is the code:
```python
from transformers import WhisperProcessor,WhisperForConditionalGeneration
import whisper
from transformers import pipeline
model = WhisperForConditionalGeneration.from_pretrained("./whisper_tiny_pytorch_model.bin",config="./config.json").to("cuda:0")
processor = WhisperProcessor.from_pretrained("./")
audio = whisper.load_audio("./a.flac")
i = processor(audio,return_tensors="pt").input_features.to("cuda:0")
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
device="cuda:0",
)
r = pipe(av, generate_kwargs = {"task":"transcribe", "language":"japanese"})
```
Could you help me?
Env:
pytorch==2.1.0.dev20230302+cu117
transformer==4.26.1
the whisper model was downloaded from Hugging Face.<|||||>Hey @AnestLarry, the language tag that you are using is wrong!
As you can see in the `generation_config.json`, the `lang_to_id` defines the mapping from language token to the actual input ids.
What you should be using (and there is an example of this in the notebook [here ](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=Ca4YYdtATxzo)) is the following:
```python
...
pipe(av, generate_kwargs={"language": "<|ja|>"})
```<|||||>Hey @ArthurZucker ,
```python
r = pipe(audio, generate_kwargs = {"language":"<|ja|>"})
ValueError: The following `model_kwargs` are not used by the model: ['language'] (note: typos in the generate arguments will also show up in this list)
```
I still got the same error. When I used `{"language": "<|ja|>"}` with `get_decoder_prompt_ids` (calling model generate directly), I got an error telling me to change my argument.
```python
processor.get_decoder_prompt_ids(language="<|ja|>",task="transcribe")
ValueError: Unsupported language: <|ja|>. Language should be one of: ['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese', 'turkish', 'polish', 'catalan', 'dutch', 'arabic', 'swedish', 'italian', 'indonesian', 'hindi', 'finnish', 'vietnamese', 'hebrew', 'ukrainian', 'greek', 'malay', 'czech', 'romanian', 'danish', 'hungarian', 'tamil', 'norwegian', 'thai', 'urdu', 'croatian', 'bulgarian', 'lithuanian', 'latin', 'maori', 'malayalam', 'welsh', 'slovak', 'telugu', 'persian', 'latvian', 'bengali', 'serbian', 'azerbaijani', 'slovenian', 'kannada', 'estonian', 'macedonian', 'breton', 'basque', 'icelandic', 'armenian', 'nepali', 'mongolian', 'bosnian', 'kazakh', 'albanian', 'swahili', 'galician', 'marathi', 'punjabi', 'sinhala', 'khmer', 'shona', 'yoruba', 'somali', 'afrikaans', 'occitan', 'georgian', 'belarusian', 'tajik', 'sindhi', 'gujarati', 'amharic', 'yiddish', 'lao', 'uzbek', 'faroese', 'haitian creole', 'pashto', 'turkmen', 'nynorsk', 'maltese', 'sanskrit', 'luxembourgish', 'myanmar', 'tibetan', 'tagalog', 'malagasy', 'assamese', 'tatar', 'hawaiian', 'lingala', 'hausa', 'bashkir', 'javanese', 'sundanese', 'burmese', 'valencian', 'flemish', 'haitian', 'letzeburgesch', 'pushto', 'panjabi', 'moldavian', 'moldovan', 'sinhalese', 'castilian'].
```
And I can get a valid result with model generate.
```python
forced_decoder_ids = processor.get_decoder_prompt_ids(language="japanese",task="transcribe")
r = model.generate(i,forced_decoder_ids = forced_decoder_ids)
out: ['<|startoftranscript|><|ja|><|transcribe|><|notimestamps|>ε€γιγεΊγ...<|endoftext|>']
```<|||||>Sorry I guess I should have been clearer:
`pipe(av, generate_kwargs={"language": "<|ja|>", "task": "transcribe"})`
(I was just sharing how to fix the language)
Moreover, this is not on the latest release, as the notebook mentions you have to use the `main` branch<|||||>Thank you for pointing out the version problem I had overlooked. It ran successfully (without an error message) after installing the `main` branch. But fixing the language still does not work.
```python
model = WhisperForConditionalGeneration.from_pretrained("./whisper_tiny_pytorch_model.bin",config="./config.json").to("cuda:0")
processor = WhisperProcessor.from_pretrained("./")
audio = whisper.load_audio("./a.mp3")
i = processor(audio,return_tensors="pt").input_features.to("cuda:0")
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
chunk_length_s=30,
device="cuda:0",
)
r = pipe(audio, generate_kwargs = {"language":"<|ja|>","task":"transcribe"})
{'text': " I'm not going bit ...}
```
I fixed the language to `ja` but got an English result. (`audio` is a Japanese song.)
Is the code wrong though?<|||||>Try using the notebook I provided, your custom model might not be working and I can't debug it for you.
Could you try using the `openai/whisper-small` model as shown in the notebook? Then you can compare the configuration file and generation config.
<|||||>Thank you very much. My model was downloaded from Hugging Face without any changes on my part. I just used `openai/whisper` to successfully complete the task. And I found that the model file name seems to affect the result.
Changing the model file name from `whisper_tiny_pytorch_model.bin` to `pytorch_model.bin` fixed it; no problem now.<|||||>Great that you no longer have an issue! Thanks for bearing with me! <|||||>When I install the newest Transformers, I now get the following error when setting the language in the pipeline:
```
File "/Users/me/miniconda3/envs/torch-gpu/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1570, in generate
if generation_config.language in generation_config.lang_to_id.keys():
AttributeError: 'GenerationConfig' object has no attribute 'lang_to_id'
```<|||||>I had this same issue with our finetuned [whisper-large-rixvox](https://huggingface.co/KBLab/whisper-large-rixvox/tree/main) @peregilk .
I think what happens is that finetuned Whisper models typically are already configured to predict a specific language during finetuning. When the people who train these models save a checkpoint, there is no "GenerationConfig" generated, as the model is still hardcoded to predict a specific language.
E.g. see [generation_config.json from OpenAI/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2/blob/main/generation_config.json) and compare against a finetuned version of whisper where [generation_config.json is missing](https://huggingface.co/KBLab/whisper-large-rixvox/tree/main).
If the person who trains a finetuned whisper follows [Huggingface's finetuning instructions](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/fine-tune-whisper-non-streaming.ipynb), there will be no GenerationConfig for the model.
Perhaps there should be a better error message for this @ArthurZucker .
The solution is simply to not specify `generate_kwargs` at all for any finetuned model where `generation_config.json` is missing. The finetuned model will predict in the language it was finetuned on without the `generate_kwargs`.<|||||>Thanks for reporting @peregilk and @Lauler! This is probably quite a good fix right @ArthurZucker? We don't use any of the `generation_config` logic unless `generation_config.json` is present on the Hub?<|||||>I believe the current workaround is to update the generation config according to this comment: https://github.com/huggingface/transformers/issues/21878#issuecomment-1451902363
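For illustration, a sketch of what that workaround amounts to (the fine-tuned checkpoint id is a placeholder; the base checkpoint is only an assumption about which variant matches your model):

```python
from transformers import GenerationConfig, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("your-username/your-finetuned-whisper")
# Borrow the multilingual generation config (lang_to_id, task_to_id, ...) from the matching base checkpoint
model.generation_config = GenerationConfig.from_pretrained("openai/whisper-large-v2")
```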
This should fix both issues described above. It's cumbersome though and ideally we'd have a way of handling it in transformers! |
transformers | 21,808 | closed | Using Bloom with int8 generate unreadable outputs | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
When I use the int8 type of bloom to generate outputs on 8*Tesla V100(32GB), I find all of the tokens generated by the model are "unk". Are there any ideas to help me solve this problem?
This phenomenon doesn't appear in the bloom-7b1 model.
### Who can help?
@sgugger @muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My code is here.
`from transformers import AutoModelForCausalLM, AutoTokenizer`
`checkpoint = "model_path"`
`max_memory_mapping = {0: "25GB", 1: "25GB", 2: "25GB", 3: "25GB", 4: "25GB", 5: "25GB", 6: "25GB", 7: "25GB"}`
`tokenizer = AutoTokenizer.from_pretrained(checkpoint)`
`model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", max_memory=max_memory_mapping, load_in_8bit=True)`
`inputs = tokenizer.encode('''Hello ''', return_tensors="pt").to("cuda")`
`outputs = model.generate(inputs, max_new_tokens=10)`
`print(tokenizer.decode(outputs[0]))`
And the output is "Hello unk unk unk unk unk unk unk unk unk unk "
### Expected behavior
I expect the model outputs some meaningful results, such as "Hello, I am a young woman of 28 years old who has just arrived in New Braunfels for" from the API in the [https://huggingface.co/bigscience/bloom?text=Hello](url) or "Hello I am a newbie in python and I am" -- use the "bloom-7b1' model (int8) inference on a single Tesla V100 | 02-26-2023 16:28:36 | 02-26-2023 16:28:36 | cc @younesbelkada <|||||>the V100 series were not supported by `bitsandbytes` but now they should be compatible since the `0.37.0` relase. What is your `bitsandbytes` version? Can you try to update `bitsandbytes` ? `pip install --upgrade bitsandbytes`<|||||>> the V100 series were not supported by `bitsandbytes` but now they should be compatible since the `0.37.0` relase. What is your `bitsandbytes` version? Can you try to update `bitsandbytes` ? `pip install --upgrade bitsandbytes`
I have used their latest version 0.37.0, and the int8 type of "bloom-7b1" seems to work well on a single Tesla V100, albeit it has repetitions at the end of the outputs.<|||||>@SAI990323
Are you still facing the issue? Can you try an approach that is similar to: https://github.com/huggingface/transformers/issues/21987#issuecomment-1458231709 and let us know if this works?
Also make sure to use `bitsandbytes==0.37.1`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,807 | closed | Conversion of OWL-ViT model fails | ### System Info
I've trained OWL-ViT model on my data using [training code from original repo](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#fine-tuning) and trying to use it in HuggingFace pytorch OWL-ViT implementation.
As far as I understand, I need to convert it using [convert_owlvit_original_flax_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py), first.
But, when I invoke:
`python3 convert_owlvit_original_flax_to_hf.py --owlvit_version clip_b32 --owlvit_checkpoint ~/scenic/training/checkpoint_16000 --hf_config vit_b32 --pytorch_dump_folder_path .`
it fails with:
```
Traceback (most recent call last):
File "~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py", line 406, in <module>
variables = checkpoints.restore_checkpoint(args.owlvit_checkpoint, target=None)["optimizer"]["target"]
KeyError: 'optimizer'
```
How to fix that?
P.S. The dict returned by checkpoints.restore_checkpoint() has the following keys: ['opt_state', 'params', 'global_step', 'model_state', 'rng', 'metadata']
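For reference, a hypothetical local edit to the conversion script for a checkpoint with those keys (the import is assumed to match what the script already uses; this is a sketch, not a confirmed fix):

```python
from flax.training import checkpoints

ckpt = checkpoints.restore_checkpoint("~/scenic/training/checkpoint_16000", target=None)
variables = ckpt["params"]  # instead of ckpt["optimizer"]["target"]
```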
### Who can help?
@amyeroberts @alaradirik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`python3 convert_owlvit_original_flax_to_hf.py --owlvit_version clip_b32 --owlvit_checkpoint ~/scenic/training/checkpoint_16000 --hf_config vit_b32 --pytorch_dump_folder_path .`
### Expected behavior
successful script running with converted model as output | 02-26-2023 13:35:48 | 02-26-2023 13:35:48 | P.S. if that could help, when I'm changing ['optimizer']['target'] to ['params'], the script is failing with:
```
Traceback (most recent call last):
File "~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py", line 411, in <module>
pt_backbone_params, clip_pt, attn_params = convert_clip_backbone(flax_params, torch_config)
File "~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py", line 281, in convert_clip_backbone
flax_clip_params = flatten_nested_dict(flax_params["backbone"]["clip"])
File "~/transformers/src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py", line 85, in flatten_nested_dict
if isinstance(v, collections.MutableMapping):
AttributeError: module 'collections' has no attribute 'MutableMapping'
```<|||||>You almost certainly will have to fork the Transformers library and adapt the conversion script a bit to make it work for your use case.
In this case, it seems that you're not properly reading the parameters of the model into a dictionary. I'd check which keys are in `checkpoints.restore_checkpoint(args.owlvit_checkpoint)` => you apparently already found that it should be 'params'. Next, you can check what's exactly in `flax_clip_params`, this should be in dictionary with key-value pairs of parameter names and their corresponding values.<|||||>Hi @alexey-chaykin, I can convert the official checkpoints but the training script was not available when I added OWL-ViT to transformers and their original script is probably a little different than the released one. You would need to find out what key the parameters are stored under and edit the conversion script.
As for the second error you're getting, I think it's just a version issue as `collections.MutableMapping` has been moved to `collections.abc.MutableMapping` in newer versions.<|||||>Thanks, Niels, Alara. Will do that way.
Do you have any plans to implement pytorch training for HuggingFace OWL-ViT?<|||||>No problem @alexey-chaykin, we are planning to implement PyTorch training as the training code is released. We will probably be releasing a tutorial / blog post on it in the next few weeks :)<|||||>Thanks! Looking forward to that. |
transformers | 21,806 | closed | Tokenizer call function gives an error when using the "target_text "argument without using "text" argument. | ### System Info
Name: transformers
Version: 4.21.2
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache
Location: /databricks/python3/lib/python3.9/site-packages
Requires: packaging, pyyaml, filelock, numpy, regex, tokenizers, tqdm, huggingface-hub, requests
@ArthurZucker
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. tokenizer_label = AutoTokenizer.from_pretrained(base_model)
2. labels = tokenizer_label(text_target=targets, padding=False, truncation=True)
### Expected behavior
TypeError: __call__() missing 1 required positional argument: 'text'
| 02-26-2023 05:32:52 | 02-26-2023 05:32:52 | You version of Transformers is too low, you need to upgrade it as `target_text` is a somewhat recent feature.<|||||>Should I install tranformers from source ? Because I also tried PIP and
didnβt work.
On Mon, 27 Feb 2023 at 9:02 PM, Sylvain Gugger ***@***.***>
wrote:
> You version of Transformers is too low, you need to upgrade it as
> target_text is a somewhat recent feature.
>
> β
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/21806#issuecomment-1445870119>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGUREYYFIHEIQHKINCDWZRNSHANCNFSM6AAAAAAVIIOMAI>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>This change was introduced in `transformers==v4.22.0`. Try `pip install --upgrade transformers` as `pip install transformers` will do nothing if you already have the library. <|||||>it worked. Thanks |
transformers | 21,805 | closed | libssl.so.10: cannot open shared object file: No such file or directory | ### System Info
I am setting up a brand new machine with Ubuntu 22.04, pytorch 1.13.1/pytorch-cuda 11.7 and transformers 4.24.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I installed transformers using the following command, as suggested by huggingface docs:
`conda install -c huggingface transformers --y`
I'm running the following command: `from transformers import pipeline`
I'm getting the following exception:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1075 try:
-> 1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
~/anaconda3/lib/python3.9/importlib/__init__.py in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _gcd_import(name, package, level)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _find_and_load(name, import_)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _load_unlocked(spec)
~/anaconda3/lib/python3.9/importlib/_bootstrap_external.py in exec_module(self, module)
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/__init__.py in <module>
32 from ..feature_extraction_utils import PreTrainedFeatureExtractor
---> 33 from ..models.auto.configuration_auto import AutoConfig
34 from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
~/anaconda3/lib/python3.9/site-packages/transformers/models/__init__.py in <module>
18
---> 19 from . import (
20 albert,
~/anaconda3/lib/python3.9/site-packages/transformers/models/mt5/__init__.py in <module>
39 if is_tokenizers_available():
---> 40 from ..t5.tokenization_t5_fast import T5TokenizerFast
41 else:
~/anaconda3/lib/python3.9/site-packages/transformers/models/t5/tokenization_t5_fast.py in <module>
22
---> 23 from ...tokenization_utils_fast import PreTrainedTokenizerFast
24 from ...utils import is_sentencepiece_available, logging
~/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in <module>
24
---> 25 import tokenizers.pre_tokenizers as pre_tokenizers_fast
26 from tokenizers import Encoding as EncodingFast
~/anaconda3/lib/python3.9/site-packages/tokenizers/__init__.py in <module>
78
---> 79 from .tokenizers import (
80 Tokenizer,
ImportError: libssl.so.10: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_121111/4287807559.py in <module>
----> 1 from transformers import pipeline
~/anaconda3/lib/python3.9/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in __getattr__(self, name)
1064 value = self._get_module(name)
1065 elif name in self._class_to_module.keys():
-> 1066 module = self._get_module(self._class_to_module[name])
1067 value = getattr(module, name)
1068 else:
~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
-> 1078 raise RuntimeError(
1079 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1080 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
libssl.so.10: cannot open shared object file: No such file or directory
β
```
### Expected behavior
Please note that I'm running the official install instructions on a brand new machine!
There are two other tickets with the same issue:
https://github.com/huggingface/transformers/issues/18549
https://github.com/huggingface/transformers/issues/19844
Both are closed because the user simply switched to using pip. But the problem remains with conda installs.
This error also resolves for me if I use `pip install transformers --force-reinstall`. | 02-26-2023 03:17:16 | 02-26-2023 03:17:16 | This is not a library used by Transformers per se but Python. There is something wrong with your Python install via Conda, Python installed like this does not find the libssl.so.10 library.<|||||>I meet the exact same issue here while `pip` install cannot solve the problem.<|||||>tl;dr; `conda update tokenizers` solved the problem for me.
---
I think I had the same problem and this is how I solved it.
I noticed that the error was related to the `Tokenizers` package:
```
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libssl.so.10: version `libssl.so.10' not found (required by /home/silas/miniconda3/envs/llama/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)
```
So I decided to check who was providing this library and if I was using the latest version. [PyPi](https://pypi.org/project/tokenizers/) shows that the latest version is 0.13.02 and the library is by Hugging Face (so we are in the right place LOL).
After running `conda list`, I saw that I was using version 0.13.0.dev0. So I checked [Conda-Forge](https://anaconda.org/conda-forge/tokenizers) and found that they had the new version. Then I ran `conda update tokenizers` and that solved the problem for me.
I hope that solves the problem for you. =)<|||||>I have the exact same issues after I used conda to install transformers. Pip is working fine, however.<|||||>
>
My tokenizer version is 0.13.0.dev0, but conda update tokenizers doesn't work for me. I also tried conda install -c conda-forge tokenizers on [Conda-Forge](https://anaconda.org/conda-forge/tokenizers), it doesn't work either. How can I update the tokenizers version?
<|||||>@lilyq I had the same issue. I uninstalled transformers/tokenizers first and then pip reinstalled from source using `pip install git+https://github.com/huggingface/transformers` (all within my conda env). This installed the right version of tokenizers as a dependency and now it works. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>>
conda update tokenizers worked great for me, thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,804 | closed | introduce `logger.warning_once` and use it for grad checkpointing code | This PR:
1. introduces a new `warning_once` logger method - to prevent repeating the same warning more than once
2. use it for gradient_checkpointing functionality - where one would get thousands of these warnings at the moment should they have `use_cache==True` - the other solution is to assert
(I did this for m4, so thought to sync here as well)
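For reference, a sketch of the intended call pattern for the new method:

```python
from transformers.utils import logging

logger = logging.get_logger(__name__)
# Emitted a single time per process, no matter how many layers/steps hit this branch:
logger.warning_once("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...")
```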
The rename was done automatically with:
```
perl -0777 -pi -e 's|(logger.warning)(\(\W+\S\Suse_cache=True)|logger.warning_once$2|msg' src/transformers/models/*/mode*py
``` | 02-25-2023 23:13:18 | 02-25-2023 23:13:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,803 | closed | Masking ratio incorrect for DataCollatorForLanguageModeling | ### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It seems that there is an error in the implementation of the [torch_mask_tokens()](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L750) method in the [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L609) class.
According to the documentation, 80% of tokens are masked, 10% being replaced with the original tokens and 10% with random tokens. <s>However, the implementation for replacing tokens with random ones sets the probability at 50% instead of the intended 10%.</s>
``` python
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
```
EDIT: After a second look, it seems the logic here is that 80% tokens are masked, out of the remaining 20% tokens, half of them need to be replaced by random tokens. So the probability 0.5 is correct. However, for [tf_mask_tokens()](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L661), the probability is 0.1, which seems incorrect. Let me know if I understand it correctly!
``` python
indices_random = self.tf_bernoulli(input_shape, 0.1) & masked_indices & ~indices_replaced
random_words = tf.random.uniform(input_shape, maxval=vocab_size, dtype=tf.int64)
inputs = tf.where(indices_random, random_words, inputs)
```
See the [link](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L776) here.
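For intuition, a quick numerical check of the reasoning above (illustrative only): with probability 0.5 applied to the positions that were not replaced by `[MASK]`, you get the intended ~10% random tokens overall, whereas 0.1 would only give ~2%.

```python
import torch

torch.manual_seed(0)
n = 100_000
masked_indices = torch.ones(n, dtype=torch.bool)  # pretend every position was selected for MLM
indices_replaced = torch.bernoulli(torch.full((n,), 0.8)).bool() & masked_indices
indices_random = torch.bernoulli(torch.full((n,), 0.5)).bool() & masked_indices & ~indices_replaced
print(indices_replaced.float().mean().item())  # ~0.80 -> [MASK]
print(indices_random.float().mean().item())    # ~0.10 -> random token (0.5 of the remaining 0.2)
```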
Furthermore, I believe that the ratios for masked/original/random tokens should be configurable parameters that are accessible to users. At present, I have to inherit the `DataCollatorForLanguageModeling` class and override the `torch_mask_tokens` function in order to modify the ratio.
I can submit a pull request to address the bug and update the ratio parameter. Thank you very much! | 02-25-2023 19:54:19 | 02-25-2023 19:54:19 | Note that the Transformers library is primarily a library of models, not data collators. This particular data collator should never have been added to the library proper but only in the example that uses it (it's also quite buggy and only works for BERT models). We welcome a PR with bug fixes (for the TF one apparently) but won't add more functionality to it.<|||||>@sgugger Thanks for the clarification. I will submit a PR for fixing the bug. <|||||>@sgugger I have created a PR #21834 to fix the typo! Thank you! |
transformers | 21,802 | closed | Add BLIP and BLIP-2 to image-to-text pipeline | # What does this PR do?
This PR adds BLIP and BLIP-2 to the image-to-text pipeline.
Usage is as follows:
```
from transformers import pipeline
from transformers import AutoProcessor, BlipForConditionalGeneration, Blip2ForConditionalGeneration
# BLIPv1
# processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
# model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
# BLIPv2
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
pipe = pipeline("image-to-text", model=model, image_processor=processor.image_processor, tokenizer=processor.tokenizer)
print(pipe("http://images.cocodataset.org/val2017/000000039769.jpg"))
``` | 02-25-2023 18:00:58 | 02-25-2023 18:00:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Are these tested automatically now ? Or should we add something to make sure they are tested too ?<|||||>Hey @Narsil !
My PR #21516 is not merged yet, and on `main` it is still using `metaclass`, so the pipeline tests (in theory) are generated on the fly (just as you did before).
But the problem is that we don't generate the tiny models offline yet for newly added models in `transformers`, and therefore those model classes are not tested in pipeline testing on current `main`.
I plan to re-run the tiny model generation ASAP. If it's urgent for this PR, I can do it!<|||||>@NielsRogge Do you mind adding a tiny model and a small test in the `tests/pipelines/` directly for this PR maybe ? (The PR looks good, I'd just like to make sure it's not breaking on small models within tests if possible).<|||||>FYI: I tried to create tiny models for `blip` and `blip-2` using the existing script, but they both failed to create.
- `blip-2`: there is no `Blip2ModelTest` class
- (there is `Blip2ForConditionalGenerationTest`, as well as `Blip2VisionModelTest`)
- but the creation script needs to know from `Blip2ModelTest`
- ~~(I am not really in favor to further complicating the creation script - it's super complex already)~~
- it's better if we can manage to have some naming convention in modeling test files
- (I agree that the current test names in `blip-2` make sense however)
- `blip`: processor fails to be created (due to `feature_extractor` attribute)
- I will try to fix this and create tiny model for `blip`
<|||||>I will try to work on enhancing the script. But if you somehow manage to create them manually (model/tokenizer/processor), go ahead.<|||||>Close this one as the task is completed in #21904
Thank you @NielsRogge for taking the initiative. |
transformers | 21,801 | closed | Adding additional terms to the Transformers glossary | ### Feature request
Adding definitions to the [Transformers glossary](https://huggingface.co/docs/transformers/glossary) for each of the following terms:
- **encoder, decoder and encoder/decoder:** You already have definitions for autoencoding and autoregressive models but it's not immediately clear from the glossary that those are synonymous to encoder and decoder. Could point to relevant sections of the [Summary of the models](https://huggingface.co/docs/transformers/model_summary) article.
- **finetuned model:** There already is a glossary term for pretrained model, can link to the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) article
- **inference**
- **pipeline:** Could link to the [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) article
- **preprocessing:** Can link to [the preprocess document](https://huggingface.co/docs/transformers/preprocessing)
- **supervised and unsupervised learning**
A few other definitions which might be worth defining (even if they don't show up as much in your documentation) is **representation learning**, **semi-supervised learning**, **feature extraction**, **Large Language Models (LLM)** (mainly because it is such a popular term now) and **transfer learning**.
Finally, it might be worth putting acronyms beside glossary terms like **natural language processing/understanding** and **recurrent neural networks** (i.e. NLP/U and RNN) both for brevity and because they are so commonly used.
### Motivation
The glossary page has been quite helpful for me in understanding certain overloaded terms in deep learning, and I feel that adding these terms would be beneficial to others. It also could help link people to useful articles as many of the above terms have been explained already in one of your articles which helps with keeping things organized.
### Your contribution
I would be happy to help with coming up with the definitions and submitting a PR with the added changes π | 02-25-2023 15:59:59 | 02-25-2023 15:59:59 | cc @MKhalusova and @stevhliu <|||||>Thanks for the great suggestions, and for adding links to other docs for some of the definitions. Feel free to open a PR for these new definitions/changes! π
> encoder, decoder and encoder/decoder
I think for encoder and decoder, we could combine those with the existing definitions for autoencoding/autoregressive, instead of having two separate definitions that basically explain the same thing. The summary of the models [guide](https://huggingface.co/docs/transformers/main/en/model_summary) has actually been recently updated such that you can't really link to it.
> Finally, it might be worth putting acronyms beside glossary terms
Great idea!<|||||>Excellent, I will get started on this over the weekend, thanks for the feedback! π
> I think for encoder and decoder, we could combine those with the existing definitions for autoencoding/autoregressive, instead of having two separate definitions that basically explain the same thing.
Just to be clear here, would this be renaming existing autoencoding/autoregressive entries as encoder/decoder & mentioning that autoencoding/autoregressive are synonyms?
<|||||>Awesome, looking forward to your contribution! π€
> Just to be clear here, would this be renaming existing autoencoding/autoregressive entries as encoder/decoder & mentioning that autoencoding/autoregressive are synonyms?
Yeah, I think having entries for encoder/decoder would be better than autoencoding/autoregressive.<|||||>Thanks for your help! π I've got a draft complete locally with the proposed changes and will make a PR in the next day or so. I'll close this issue for the time being though! |
transformers | 21,800 | closed | [deepspeed] check whether model is NLP one instead of counting on input type | # What does this PR do?
This PR intends to fix an issue where training of an NLP model fails if the input dtype isn't int64.
My dataset had dtype = int32. Everything was ok until I decided to add deepspeed.
It turned out that the trainer relies on the dtype and converts the input data into hf_deepspeed_config.dtype if it isn't int64.
I guess it should check whether the first layer is an Embedding instead.
I think this PR also needs tests, but I need advice on how we can cover this case.
@stas00 could you be so kind and review this PR and give an advice on whether tests are necessary and their implementation ? | 02-25-2023 11:02:29 | 02-25-2023 11:02:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@izapolsk, I think the fix is much simpler. Just check if it's any type of `int` - I didn't think anybody would use a different int type than int64 when I coded this, so as new use cases come in, we can just adapt for it.
```
if self.deepspeed and data.dtype not in [torch.int32, torch.int64]:
# NLP models inputs are int32 or int64 and those get adjusted to the right dtype of the
```
If it resonates and works you can go ahead and apply that fix, or I can do it as well. It'd be easier for you since you have an application you can already test with.
and if this will be the way, I don't think we need any additional tests.<|||||>Actually, I wonder why were we using int64 in the first place when vocabs are so small. int32 should work always and for smaller vocabs even int16 should be enough (max `32767`).
Probably since there is very little saving in using a more compact dtype as inputs, if they are tokenized on the fly are very short.<|||||>@stas00, done.
I added a more sophisticated check based on inspecting the first layer, because the vocab could be int16 - 64 and there could be other non-NLP models I'm not aware of that have int inputs.
Thank you for reviewing this PR.<|||||>Let me re-run the offline tests first<|||||>Nope, the tests were failing. The logic was incorrect. I pushed the fix.
@izapolsk, please check that with my fix it still works for you and then we can merge it.
Thank you.
p.s. Also since it seems that this is not the last of your deepspeed improvements, here is how you can test that your future PRs work:
```
RUN_SLOW=1 pytest tests/deepspeed
```
this is because CircleCI has no GPUs, so we only run those tests requiring gpus on a different CI nightly. <|||||>Oh good catch, missed that not. We'll need a quick rebase on main to get the quality job passing if possible (the failure seen here is fixed on main).<|||||>good catch, my bad, sorry. I'll do |
transformers | 21,799 | closed | Fix en documentation typos | # What does this PR do?
Fix a wrong URL as well as typos
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
| 02-25-2023 05:22:42 | 02-25-2023 05:22:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,798 | closed | Fix resume_from_checkpoint for deepspeed | something is borked with CircleCI when a contributor has a CircleCI account that isn't set up to some requirements - we don't know what - so I re-created the PR from the original https://github.com/huggingface/transformers/pull/21735
------------------
This PR overcomes a possible issue with using deepspeed resume when the non-deepspeed checkpoint file structure isn't there.
The original code comes from @mosheber and I had to apply a few more adjustments for tests to work after this change. The tests had to be run manually since they require gpus.
Credits to contributor's work have been correctly imported into this new PR. | 02-25-2023 03:28:02 | 02-25-2023 03:28:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>ok, looks like we figured out the original so closing this one. |
transformers | 21,797 | closed | How to prune a transformer? | ### System Info
Hi, I am trying to reduce memory and speed up my own fine-tuned transformer. I came across the [tutorial ](https://huggingface.co/docs/optimum/intel/optimization_inc) for pruning on the huggingface site. I am referring to the following snippet. The trainer.train() is missing, so I added it. It ran without error, however, there is no reduction in memory (I used model.get_memory_footprint() and before and after pruning it was Model memory footprint: 503695916 bytes). Same for inference speed. I also tried out different pruning configurations (global pruning, different pruning types or target sparsities) but it did not help. Can someone help me?
```
from optimum.intel.neural_compressor import INCTrainer
from neural_compressor import WeightPruningConfig
from transformers import TrainingArguments, Trainer
from transformers.data.data_collator import default_data_collator
pruning_config = WeightPruningConfig(
pruning_type="magnitude",
start_step=0,
end_step=15,
target_sparsity=0.2,
pruning_scope="local",
)
from transformers import AutoModelForSequenceClassification
save_dir="prunedModel"
trainer = INCTrainer(
model=model,
pruning_config=pruning_config,
args=TrainingArguments(save_dir, max_steps=500,num_train_epochs=1.0, do_train=True, do_eval=True,metric_for_best_model="f1",greater_is_better=True),
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics,
tokenizer=processor,
data_collator=default_data_collator,
)
train_result = trainer.train() # <-- Added by me
trainer.save_model(save_dir) # <-- Added by me
optimized_model = AutoModelForSequenceClassification.from_pretrained(save_dir)
memory_footprint = optimized_model.get_memory_footprint()
print(f"Model memory footprint: {memory_footprint} bytes")
```
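For reference, a quick way to inspect sparsity directly (a sketch only; as far as I understand, magnitude pruning zeroes weights in place, so tensor shapes and `get_memory_footprint()` would stay the same even when pruning worked):
```python
import torch

def report_sparsity(model):
    # Count zero-valued entries in weight matrices; this fraction should grow after pruning
    zeros, total = 0, 0
    for name, param in model.named_parameters():
        if param.dim() > 1:  # only weight matrices, not biases
            zeros += (param == 0).sum().item()
            total += param.numel()
    print(f"Global weight sparsity: {zeros / total:.2%}")

# report_sparsity(model)            # before pruning
# report_sparsity(optimized_model)  # after pruning
```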
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Expected behavior
As per the tutorial, the model should be pruned, so the original (unpruned) model and the pruned model should have different sizes, but they report the same memory footprint. | 02-24-2023 23:51:00 | 02-24-2023 23:51:00 | Please do not tag so many people, especially for an issue which is linked to the optimum repo (where you found this tutorial) and not the Transformers one.<|||||>Okay, thank you so much for the suggestions, I will remove them from the task. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,796 | closed | LLaMA | ### Model description
New model series from Facebook (7B, 33B, 66B) that is broadly competitive with Flan-PALM-540B.
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 02-24-2023 22:00:33 | 02-24-2023 22:00:33 | Hello, @michaelroyzen , I want to work on this issue, can you please clarify this:-
1. The objective of this issue is to add the Llama model to the π€ models section right ?
The inference code for the Llama models is open sourced and weights and tokenizers are available as you mentioned.
I can try to work on this issue, Please let me know if this issue is open for working and should I proceed or not. <|||||>Hello @sayantan1410. At this moment the code for inference is available, but to get the weights you need to fill out the request form from their github. It'd be great for you to work on this, but it would require doing so with a hypothethical set of weights, given that they have not started actually releasing weights to people who asked for it just yet.<|||||>Hello @Eric-Wallace-WebHost , I have actually filled up the form for the weights and the tokenizers but since I don't have any related publications so probably, I will not get that. But for now, I will try to work with some hypothetical weights until the weights are released !<|||||>Also will there be a Jax implementation? It would be super helpful. I can help contribute to it as well<|||||>I can contribute as well for the Jax implementation! Also I'm not sure if we can just use their pytorch code, since it is released under GPLv3 instead of the Apache License of transformers.<|||||>I have the weights. Haven't checked out the rules and I'm gonna assume I can't share it, but if you guys have an implementation I would love to help by testing it out.<|||||>At this stage we don't know if there is going to be an implementation in Transformers due to:
- inaccessibility of weights (no one who got them is allowed to share them on the Hub)
- different license of the code
We are looking if the Meta folks would be happy to release the weights in a gated repo on the Hub and if the code will be in Transformers or just put as code on the Hub because of the license. @thomasw21 is working on a PyTorch port that our research team will use in any case.
So stay tuned!<|||||>> At this stage we don't know if there is going to be an implementation in Transformers due to:
> * inaccessibility of weights (no one who got them is allowed to share them on the Hub)
Even if there is no permission to have the weights on the hub, usually transformers models are released with the conversion scripts done for the conversion. Even an implementation combined with the needed conversion script can be useful, because then researchers can convert the model to HF if needed and still use it within their HF based projects without having to reinvent the wheel.<|||||>+1 to henk717. Would be super useful even if there was just a way to plug in your own weights and use the existing transformers library!<|||||>It looks like the weights are right here.
https://huggingface.co/nyanko7/LLaMA-7B
https://huggingface.co/ricecake/LLaMA/tree/main
https://huggingface.co/datasets/nyanko7/LLaMA-65B
License is here:
https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform<|||||>Working on this today!<|||||>Are weights actually copyrightable? Technically, they are just a list of numbers generated by a machine and hence don't fall under US copyright laws.
I say, just upload the weights and call Meta's bluff.<|||||>> Are weights actually copyrightable? Technically, they are just a list of numbers generated by a machine and hence don't fall under US copyright laws.
>
> I say, just upload the weights and call Meta's bluff.
lots of people are way ahead of you on this.<|||||>Can someone make an ONNX version? I tried to convert it but I ran out of RAM.
I would quite like to try it with Onnxruntime. Even though I think this uses far more VRAM than using torch. Also onnxruntime has a memory leak with external weight files. But still...<|||||>I'm interested in fine-tuning LLaMa for creating text embeddings, anyone have any tips for how to do it with the LLaMa architecture? Can I just add a pooling layer at the end?
https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama
Here's code for RLHF training btw<|||||>I have a working Jax implementation [here](https://github.com/Sea-Snell/JAX_llama) |
transformers | 21,795 | closed | Fix page counting in Slack CI report script. | # What does this PR do?
We started having an issue getting the Slack report for our CI. The error shown at the end indicates there is a problem getting all the job links for a workflow run. I took a look and found that a change is necessary.
I am not sure why it was that way, especially the `i+2` part, but I remember it had to be like that to avoid duplicated pages. Maybe GitHub Actions changed their API and that causes the issue.
```bash
"url": f"{github_actions_job_links['Extract warnings in CI artifacts']}",
KeyError: 'Extract warnings in CI artifacts'
``` | 02-24-2023 19:09:09 | 02-24-2023 19:09:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21795). All of your documentation changes will be reflected on that endpoint.<|||||>My self-doubt and intuition led me to look at the issue again, and it turned out the issue was that the GitHub API rate limit was being reached because we didn't use a token when making these calls. PR #21823 was opened and merged.
The current way of page counting was good: the first page could be `0`, `1` or have no page number, and the next one would be `2`. My math capability was reduced a lot, especially when I tried to quickly fix things on Friday. |
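For reference, the page-counting logic (now with a token so the API calls aren't rate-limited) boils down to roughly this sketch rather than the exact script:
```python
import math
import requests

def get_job_links(workflow_run_id, token=None):
    """Rough sketch: fetch all job name -> URL pairs for a GitHub Actions workflow run."""
    headers = {"Accept": "application/vnd.github+json"}
    if token is not None:
        headers["Authorization"] = f"Bearer {token}"
    url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
    result = requests.get(url, headers=headers).json()
    job_links = {job["name"]: job["html_url"] for job in result["jobs"]}
    # the first response covers 100 jobs; any remaining jobs live on pages 2, 3, ...
    remaining_pages = math.ceil(max(result["total_count"] - 100, 0) / 100)
    for i in range(remaining_pages):
        page = requests.get(f"{url}&page={i + 2}", headers=headers).json()
        job_links.update({job["name"]: job["html_url"] for job in page["jobs"]})
    return job_links
```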
transformers | 21,794 | closed | [GPTJ] Fix gradient checkpointing bug | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue #21737
cc @sgugger @amyeroberts | 02-24-2023 18:25:03 | 02-24-2023 18:25:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @amyeroberts As mentioned I did the following:-
> Thanks for your PR! You need to remove the other one below (line 660).
However won't that cause an issue with function declaration and it's corresponding else blocks? I saw other implementations of this fix and they don't remove the block below just add it again above. What do you think?<|||||>Hi @krypticmouse - thanks for your question and applying the update @sgugger requested.
Looking at the diff again, L654 shouldn't be removed. I believe this should resolve the function declaration issue you mentioned. <|||||>You will also need to resolved the conflict as `logger.warning` has been renamed to `logger.warning_once` since you opened your PR.<|||||>Is this ok to be merged now? |
transformers | 21,793 | closed | check for None forced tokens | # What does this PR do?
Fixes #21791
@sanchit-gandhi
| 02-24-2023 18:18:50 | 02-24-2023 18:18:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,792 | closed | Improve TF weight loading, especially PT crossloading | Draft PR for now, this will probably break a bunch of stuff until I get it all working!
- [X] Support `from_pt` and `load_weight_prefix` at the same time
- [x] Replace hacky loading code in models that was written to get around this issue
- [x] ~Test and possibly replace code paths that name submodules based on `cls.load_weight_prefix` - I think this is very risky~
- [x] Support `load_weight_prefix` in the `load_sharded` functions as well
- [x] ~Check the `cls._requires_load_weight_prefix` paths and see if there's a better solution~
- [x] Update any affected tests
- [x] Add test for `load_sharded` with `load_weight_prefix`
Classes using `load_weight_prefix` that may need updating:
- [x] BART
- [x] EncoderDecoder
- [x] VisionEncoderDecoder
- [x] RAG
- [x] Blenderbot
- [x] T5
- [x] LED
- [x] BART
- [x] mBART
- [x] Marian
- [x] OPT
- [x] Pegasus
- [x] The cookiecutter template | 02-24-2023 16:34:54 | 02-24-2023 16:34:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This is ready for review now! It's not that big, but it changes a lot of things:
- Loading a PT model in TF with `from_pt=True` now supports `load_weight_prefix`
- Loading a sharded TF checkpoint now supports `load_weight_prefix`
- Sharded TF checkpoints can now be loaded even when not all weights match (this allows model surgery that didn't work with sharded models before!)
- TF `from_pretrained` now supports a `tf_to_pt_weight_rename` kwarg. This should be a callable function which converts TF weight names to PT weight names for that model.
- Composite classes like `TFEncoderDecoder` and `TFVisionEncoderDecoder` have been refactored to use the `tf_to_pt_weight_rename` kwarg, which let me remove all the ingenious workarounds that @ydshieh needed when he added those classes.
- I found a few small issues in other classes when I was testing this PR and fixed them. This is mostly just stuff like removing unused args and disabling TF32 in tests so that the outputs match.
- Add tests for the new features to `test_modeling_tf_common`
cc:
@ArthurZucker because I touched your sharded weight loading code
@ydshieh because I touched your composite model code
@gante for TF review
@sgugger as repository overlord |
transformers | 21,791 | closed | Flax Whisper predicts erroneous exclamation mark | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.4 (gpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @andyehrenberg @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code snippet:
```python
from transformers import FlaxWhisperForConditionalGeneration, WhisperProcessor
from datasets import load_dataset
model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
librispeech = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = librispeech[0]["audio"]["array"]
input_features = processor(sample, return_tensors="np").input_features
pred_ids = model.generate(input_features)
pred_text = processor.batch_decode(pred_ids.sequences)
print(pred_text)
```
**Print Output:**
```
['<|startoftranscript|>!<|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
```
We see an extra `!` after the `<|startoftranscript|>` token that should't be there.
Do you fancy taking a look into this one @andyehrenberg? Otherwise can try and find time next week.
### Expected behavior
Should return:
```
['<|startoftranscript|><|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
``` | 02-24-2023 16:20:24 | 02-24-2023 16:20:24 | I think the expected behavior should be returning:
`['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mischekvilder is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']`
The generation config has [[1, None], ...] for forced decoder ids, so the language is predicted by the model, and for some reason it's sampling "!". Maybe we should have a processor for only sampling from the language tokens for that first step when it is supposed to predict the language.<|||||>Indeed - that's correct we should be seeing the language token predicted at the second index!
Just checked and `!` is the zero-th token in the tokenizer vocab -> maybe somethings going astray with the forced tokens logits processor when a `None` is passed as the forced token?<|||||>I'm seeing the problem - `force_token_array.at[index].set(token)` when `token` is `None` sets the value at `index` to 0. We should just make it so when `token` is None, we keep the value at that index at -1.<|||||>So should update FlaxForceTokensLogitsProcessor to:
```
def __init__(self, force_token_map):
force_token_map = dict(force_token_map)
# Converts the dictionary of format {index: token} containing the tokens to be forced to an array, where the
# index of the array corresponds to the index of the token to be forced, for XLA compatibility.
# Indexes without forced tokens will have a negative value.
force_token_array = jnp.ones((max(force_token_map.keys()) + 1), dtype=jnp.int32) * -1
for index, token in force_token_map.items():
if token is not None:
force_token_array = force_token_array.at[index].set(token)
self.force_token_array = jnp.int32(force_token_array)
```<|||||>Also, just keep in mind that `forced_decoder_ids` has to be a static argument for jitted functions. The workaround I use is having empty `forced_decoder_ids` and instead passing them into `decoder_input_ids` when I know the forced ids might change.<|||||>Nice one @andyehrenberg! That must indeed be the root cause of the problem β
. We'll have to pass the forced decoder ids as static argnums when we `pmap` the generate function in #21764<|||||>Hi
Could you please specify how to set forced_decoder_ids for the FlaxWhisperForConditionalGeneration object?
@sanchit-gandhi <|||||>You should either modify the ` model.generation_config.forced_decoder_ids` or when calling `generate`, set the `language`, `task` and `return_timestamps` arguments. You can also pass them as `decoder_input_ids` (which is also an argument of the `generate()` function or ` forced_decoder_ids`. <|||||>> You should either modify the ` model.generation_config.forced_decoder_ids` or when calling `generate`, set the `language`, `task` and `return_timestamps` arguments. You can also pass them as `decoder_input_ids` (which is also an argument of the `generate()` function or ` forced_decoder_ids`.
It does not respect the forced_decoder_ids when I pass it to model.generation_config.
This is my code:
```
model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)
jit_generate = jax.jit(model.generate, static_argnames=["max_length"])
model.generation_config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="en", task="translate")
pred_ids = jit_generate(input_features, max_length=128)
```
But the result is not translated to English (same as when forced_decoder_ids is set to None)
<|||||>Also, when I set `language` to `generate()` function, it raises error:
```
pred_ids = jit_generate(input_features, max_length=128, language="<|en|>")
File /opt/conda/lib/python3.8/site-packages/jax/_src/api_util.py:568, in _str_abstractify(x)
567 def _str_abstractify(x):
--> 568 raise TypeError(f"Argument '{x}' of type {type(x)} is not a valid JAX type")
TypeError: Argument '<|en|>' of type <class 'str'> is not a valid JAX type
```
I've also tested `language='en'` and `language='english'` and the result is the same (following error)<|||||>That is because you are not using the latest version of transformers. All of this was adressed in #21965<|||||>Your error regarding strings not being a valid JAX type can be fixed by setting the language prior to compiling and keeping it static. It also looks like youβre changing the modelβs generation config after wrapping its generate method in jit, which could be causing problems. My guidance is the set your generation parameters how you want, and then get a `partial(model.generate, arg1=val2, β¦)` and then compile that function (or just use static argnames).<|||||>It is resolved by passing the `language` parameter in the static_argnames:
```
jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language"])
input_features = jnp.array(input_features, dtype=jnp.float16)
pred_ids = jit_generate(input_features, max_length=128, language='<|en|>')
```
Thanks @andyehrenberg and @ArthurZucker
|
transformers | 21,790 | closed | Fix resume_from_checkpoint for deepspeed [by mosheber] | # What does this PR do?
From #21735 without any change, but to launch CI. | 02-24-2023 15:45:41 | 02-24-2023 15:45:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,789 | closed | Fix nn.init.trunc_normal_ call on torch.float16 data | Following https://github.com/huggingface/transformers/pull/20803 that gave the idea, but was still buggy for some cases, for example:
```python
from transformers import ViTForMaskedImageModeling
import torch
model = ViTForMaskedImageModeling.from_pretrained('hf-internal-testing/tiny-random-vit', torch_dtype=torch.float16).to("cuda")
```
still raising `RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'`.
Let me know if you would like me to add tests. | 02-24-2023 14:57:22 | 02-24-2023 14:57:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @younesbelkada Feel free to merge, it appears I have no merge rights even after approval on transformers. |
transformers | 21,788 | closed | [SpeechT5] Fix HiFiGAN tests | # What does this PR do?
SpeechT5HiFiGAN tests added in the PR #21702 failed in the CI daily run: https://github.com/huggingface/transformers/actions/runs/4248850253/jobs/7388500325
This PR fixes the torch devices β
| 02-24-2023 14:55:51 | 02-24-2023 14:55:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,787 | closed | Fix PyTorch Perceiver `PerceiverFourierPositionEncoding` with fp16 | Passing `torch_dtype=torch.float16` with perceiver is currently broken on main. The error comes from a parameter generated on the fly always with `torch.float32` dtype, hence raising issues later on. Reproduction:
```python
from transformers import AutoImageProcessor, PerceiverForImageClassificationConvProcessing
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv", torch_dtype=torch.float16).to("cuda")
inputs = image_processor(images=image, return_tensors="pt").pixel_values.to("cuda")
inputs = inputs.to(torch.float16)
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Raising `RuntimeError: expected scalar type Float but found Half`
This PR fixes the issue. Let me know if you would like me to add tests for this. I'm actually surprised this was not caught in an existing test.
## Who can review?
@amyeroberts @sgugger
| 02-24-2023 14:18:13 | 02-24-2023 14:18:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @amyeroberts Feel free to merge, it appears I have no merge rights even after approval on transformers. |
transformers | 21,786 | closed | BioGPT Token Classification | ### Feature request
It would be nice to have this available.
### Motivation
I am working on biomedical token classification datasets and I would like to try BioGPT with them.
### Your contribution
I could send a PR if you want. I guess it shouldn't be too hard. | 02-24-2023 13:37:24 | 02-24-2023 13:37:24 | Yes, feel free to contribute :)<|||||>sure, Let me work on it<|||||>> sure, Let me work on it
Are you working on it? Else I will take it up.<|||||>@kurchi1205 I am working on it. The problem is that BioGPT doesn't have a fast tokenizer. Currently I am in a testing phase. |
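For anyone picking this up, the head itself is the easy part; roughly something like this sketch on top of `BioGptModel` (class and attribute names just mirror other `*ForTokenClassification` heads and are not a finished implementation):
```python
import torch.nn as nn
from transformers import BioGptModel, BioGptPreTrainedModel

class BioGptForTokenClassificationSketch(BioGptPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.biogpt = BioGptModel(config)
        # classifier_dropout is assumed configurable; fall back to 0.1 if absent
        self.dropout = nn.Dropout(getattr(config, "classifier_dropout", None) or 0.1)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.biogpt(input_ids, attention_mask=attention_mask, **kwargs)
        sequence_output = self.dropout(outputs[0])  # (batch, seq_len, hidden_size)
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return (loss, logits) if loss is not None else (logits,)
```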
transformers | 21,785 | open | Add Pop2Piano | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Pop2Piano model to HuggingFace.
Fixes #20126
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a [link](https://github.com/huggingface/transformers/issues/20126)
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-24-2023 13:15:23 | 02-24-2023 13:15:23 | Hi @ArthurZucker the implementation is almost ready(also tests) but I feel that the way I implemented this model is not a descent level, thats why I want you to take a look at the `Pop2PianoModel` structure.
Just to be clear with the feature extractor, the `Pop2PianoFeatureExtractor` takes raw_audio as input and generates variable length output(`10, 50000`, `15, 62200`), even if I pad the raw_audio at start, it will still produce different results for different audio files, so I used lists to stack them and then wrapped them through `BatchFeature`.
Please don't mind about docs I will change them afterwards
**EDIT : Please ignore this** <|||||>*(Here is the author of pop2piano)*
Thank you for doing this PR. It seems that this was implemented by understanding the original code better than me! Please feel free to ask me if there is anything I can check or do.<|||||>@sweetcocoa Thanks for you comments, HF team has helped me a lot in this integration.<|||||>For solving the import issues, you have to create a `require_xxx` with the name of the package. Look for example at the [`require_accelerate`](https://github.com/ArthurZucker/transformers/blob/c3a10a5dace55657a639789ad41fb4ded80e96fe/src/transformers/testing_utils.py#L259) in the `testing_utils.py`! π
<|||||>Hi @ArthurZucker thanks for you comment!
But I have already created `require_xxx` in `testing_utils.py` regarding `essentia` and `pretty_midi` and also I have used them in `transformers/src/transformers/models/pop2piano/__init__.py`. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21785). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @ArthurZucker sorry for the delay but the tests are green now! please review it.<|||||>Hi @ArthurZucker , sorry for the huge delay, I have made most of the changes that you asked. Also there are some changes that I didn't do these are below -
1. I managed to remove dependency on soundfile and torchaudio but not librosa, since raw_audio is used 2 times first time in `extract_rhythm` which takes audio with original sampling_rate and the second time in `single_preprocess` which first upscales/downscales raw_audio to sampling_rate of 22050 and then uses it. And preloading raw_audio with sampling_rate of 22050(not with native sampling_rate) was giving very bad results! I tried to use scipy.resample but since it uses fft it is relatively slow and less accurate.
2. As you suggested to pad the feature_extractor outputs with silence, I tried to do that but I found that different audio files with same length have input_features of different shapes! For example one was having shape of [7, 38, 512] and another one of [6, 42, 512], both were 10s audios. I could pad them and use them in a batch but then I need to keep track of their shapes which would introduce another variable, what do you suggest? Should I leave them or try to pad and keep track of shapes?
The [licensing page](https://essentia.upf.edu/licensing_information.html) of `essentia` says - "Essentia is available under an open license, [Affero GPLv3](http://www.gnu.org/licenses/agpl.html), for non-commercial applications, thus it is possible to test the library before deciding to licence it under a comercial licence." I don't know much about licensing so I will leave it upto you to decide what to change in the headings.
Also please forgive me if I missed something, I will change them in the future commit.<|||||>Hey! Will try to have a look asap<|||||>Pinging @sanchit-gandhi for a review too! <|||||>Hi @ArthurZucker In have made the changes you requested except the resample one (I have mentioned the reason [here](https://github.com/huggingface/transformers/pull/21785#discussion_r1180702087)), let me know if more changes are needed or not.<|||||>Hi @sanchit-gandhi thanks for your comments! and sorry for the late reply, the batching is not working for these reasons -
1. **feature_extractor** - The output of the feature extractor varies from music to music! For example - music1.mp3(10 seconds long) can have a feature extractor output of shape - [7, 38, 512] and music2.mp3(also 10 seconds long) can have a feature extractor output of shape - [6, 42, 512] . So if a user tries to process multiple music files in batch it will be very hard to batch them!
- `truncation` - Truncating both of them to say [5, 25, 512] gives pretty bad results!
- `padding` - One way we can overcome this is, we can take the maximum dimensions on each axis and pad them, but then we must unpad them or get their original shapes back, otherwise the the tokenizer won't work! So we can make a variable just to record the shapes of the tensors before padding. We can do this approach if you want.
I tried other approaches such as `torch.nested.nested_tensor` but there the user wont be able to use `.to("cuda") or .to("cpu")` on feature_extractor outputs because they are not supported!
2. **model** - The model.generate can take inputs of (dim1, dim2, dim3) but as we have (dim1, dim2, dim3) shapes for each input we may need to use a for loop if we are to support batching!
Also should I make a new PR regarding the assert for T5?<|||||>Hi @sanchit-gandhi, I pushed the modifications, tests which are failing are mostly due to internal errors(Connection to HF hub, TF installation etc).
Please ignore the all checkpoints as those are temporary. I will change all of them just before the merge.
Also please tell me if any more modifications are needed or not and also if I have missed any or not.
And in meantime I will make a PR regarding the change of T5 assert to except, I think maybe we should wait until that gets merged and then we will change the blocks to except here too?
**EDIT** : pushed the change with the T5 modification.<|||||>pushed the change with the `T5` modification.<|||||>Also requesting review from @hollance!<|||||>Hi @hollance, I have made those changes you requested. And Hi @sanchit-gandhi, please review the batching part(except the checkpoint part as we discussed in slack if want to move them to a separate org or not), let me know if more code changes are required or not.
btw I was automatically removed from slack channel as it says `[email protected]βs Workspaceβs free trial of Slack Pro has ended. Any channels with people outside your company β such as #pop2piano-integration β have been disconnected.`
If anymore work is needed such as transferring the files to organization checkpoint, updating the HF Space for Pop2Piano ... please let me know I would be happy to do that! <|||||>@susnato
> btw I was automatically removed from slack channel as it says `[email protected]βs Workspaceβs free trial of Slack Pro has ended. Any channels with people outside your company β such as #pop2piano-integration β have been disconnected.`
I added you back as a guest to the pop2piano-integration channel. You should be able to use this using regular Slack (not Pro). Let me know if that doesn't work.
<|||||>Hi @sanchit-gandhi, I have pushed the new changes.<|||||>Alright nice! And you've verified that the outputs are the same with/without padding? Requesting review from @ArthurZucker and @hollance to kick-off the last round of reviews :)<|||||>yes I did check for 3 types - 1. single audio + no_attention_mask, 2.single audio + attention_mask, 3. 2 audios + attention_mask and the outputs were same. Since you just said, I will still add a test for that in the next commit(after last round of reviews). <|||||>reviewing right now<|||||>Hi @ArthurZucker I have pushed the comments. Let me know if any more changes are needed or not. <|||||>I'll @sanchit-gandhi review now, before pinging a core maintainer <|||||>I have transferred the necessary files to `sweetcocoa/pop2piano` and updated the checkpoints. @sanchit-gandhi <|||||>Thanks for updating again, @sanchit-gandhi has his hands full, I'll review this weekend! Sorry for the delay and great work! π₯ <|||||>@ArthurZucker thanks for the quick reply, please don't worry about the delay, and thanks to the HF team for launching very intuitive audio course! :hugs: <|||||>Sorry, I missed the review request the first time! Rest assured it's on my list - I haven't forgotten @susnato! Aiming to have you a review either on Sunday or Monday afternoon latest π<|||||>Hi @sanchit-gandhi, I have pushed the changes that you requested. Also answered the questions in the threads.<|||||>Cool thanks for the explanations @susnato, all good with me. Could you click "resolve" on all the threads that have been completed?<|||||>Hi @sanchit-gandhi, just resolved all other threads.<|||||>Sorry @susnato - could you resolve **all** the threads for this PR if they're completed? There are bunch still open from previous reviews. This will help the last person to review pin-point which bits are still pending (if any). Thanks!<|||||>Hi @sanchit-gandhi, sorry I misunderstood what you said at the first time, but now I have resolved all threads except the two you asked to leave. <|||||>Perfect - thanks @susnato! This PR is now ready for final review π€<|||||>Hi @amyeroberts thanks for your comments! and I am so sorry that I took so long to do the changes and for resolving the discussions without any message(I will try to not do this from the next time), but now I have reopened those discussions which you commented on and replied those answers. Also I have pushed the changes that you requested except [this one](https://github.com/huggingface/transformers/pull/21785#discussion_r1264735735) (I have mentioned the reason in the thread).
Please let me know if more changes are needed or not and if I have missed any? I will surely pushed those in the next commit. <|||||>Hi @amyeroberts, I added `Pop2PianoProcessor` along with its tests, addressed the comments that were due from the last review and pushed the changes that you requested,
To answer your question -
> At the moment, most of the logic in the feature extractor and tokenizer are written to take a batch which means they're all having to implement their own looping logic.
its because how `Pop2Piano` processes the audio files, just to give you an example - suppose we have a single audio file, pop2piano feature extractor will convert the file into array of multiple batches(eg. `(70, 8, 512)`) even though there is only 1 file. That is why even to process a single file for all functions that is applied after `self.extract_rhythm` we need to implement the batching logic for each of them. I tried to change the logic in past but couldn't since the `self.extract_rhythm` always outputs arrays of this shape. I hope this explanation helps.
Let me know if this need more changes or not.<|||||>Hi @amyeroberts , I have pushed the changes you requested :)
Let me know if it needs more changes or not.<|||||>All checks are green now.<|||||>Hi @amyeroberts I have made some major changes to the `tokenizer` -
- I have implemented the `__call__` method which has the same design(accepts standard arguments as most other tokenizers) as other tokenizers in the library and converts notes(which is equivalent to tokens here) to ids.
- Renamed the previous call method as `batch_decode`.
- Aligned the processors's call method to both tokenizers and feature extractors, which compiles with the library and created a seperate `processor.batch_decode` method which calls the tokenizers `batch_decode` method.
- Worked on the comments you gave in the previous review.
It was my fault that I confused the tokenizer's `batch_decode` as the `__call__` this whole time, I am so sorry for this mistake.
If I am not wrong, now this tokenizer has almost the same design as others ~~except the fact that it does not have a vocabulary, I am currently working on that(as you suggested [here](https://github.com/huggingface/transformers/pull/21785#discussion_r1273406611))~~, (this tokenizer has a vocab now).
Please review it and let me know if the changes make sense or not and what changes are needed now. :) <|||||>I have added a vocab(in fact there are 4 vocab files for 4 different token types "TIME", "NOTE", "VELOCITY" and "SPECIAL" and 1 with vocab params).
Please let me know if this complies with the library or should I squeeze them into one single nested vocab in the checkpoint and load a single file instead of five.<|||||>Hey @susnato huge effort, will review today / tomorrow, I need to dive a bit in all the review comments! π€ Almost there! <|||||>Hi @ArthurZucker I have pushed the changes you requested except [this one](https://github.com/huggingface/transformers/pull/21785#discussion_r1282874305)(I have asked for some help about that on the respective thread).
BTW the failed `circle-ci` test is unrelated to this PR. |
transformers | 21,784 | closed | Inheritance-based framework detection | # What does this PR do?
Related to #21761
Problem: In some functions, we detect the framework of the model class through its name (e.g. if it starts with `TF`). This is a quirk of our library, and users might run into issues due to this hidden behavior. For instance, in the issue linked above, a user created a child class of a TensorFlow model whose name did not start with `TF`, running into exceptions.
Solution: Inheritance-based framework detection :) | 02-24-2023 12:09:33 | 02-24-2023 12:09:33 | cc @Rocketknight1 (FYI :P)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Now with a test π <|||||>@Rocketknight1 good point! I have no clue if it holds for TF2.5 -- going to double check<|||||>@Rocketknight1 hah, the class is different but it still works! In TF2.4, we have `<class 'tensorflow.python.keras.engine.training.Model'>` in the inheritance tree, so `"keras.engine.training.Mode"` is still a match :D
No change is needed, but thanks for the shoutout π |
transformers | 21,783 | closed | When will transformers consider supporting LoRA? | null | 02-24-2023 11:40:52 | 02-24-2023 11:40:52 | Hey, Huggingface released https://github.com/huggingface/peft about two weeks ago, which enables you to use LoRA with transformers :)
<|||||>Thanks for jumping on this @AhmedIdr . Closing this! |
transformers | 21,782 | closed | Fix type in gpt2 config docstring | This PR corrects the type of the field `embd_pdrop` in the docstring of `configuration_gpt2.py`, the field should be a float not an int, like the default value `0.1` suggests.
## Who can review?
Documentation: @sgugger, @stevhliu and @MKhalusova
| 02-24-2023 11:18:58 | 02-24-2023 11:18:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,781 | closed | pipeline not loading the model | ### System Info
ubuntu 22.04
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
pipe = pipeline("document-question-answering", model=model,tokenizer=tokenizer)
File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 811, in pipeline
raise Exception(
Exception: Impossible to guess which feature extractor to use. Please provide a PreTrainedFeatureExtractor class or a path/identifier to a pretrained feature extractor.
```
### Expected behavior
Should be able to load the model | 02-24-2023 11:01:24 | 02-24-2023 11:01:24 | How? This pipeline requires a feature extractor to see the document, doesn't it? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,780 | closed | Replace `-m torch.distributed.run` by `torchrun` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR replaces occurrences of `-m torch.distributed.launch` (deprecated) and `-m torch.distributed.run` (equivalent) by `torchrun`. More information [here](https://pytorch.org/docs/stable/elastic/run.html).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 02-24-2023 09:59:20 | 02-24-2023 09:59:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21780). All of your documentation changes will be reflected on that endpoint.<|||||>Just for reference, is there a reason why the previous occurence is deprecated? (not familiar with it!)<|||||>`torchrun` is equivalent to `python -m torch.distributed.run` while `python -m torch.distributed.launch` is deprecated. I think the reason why it is deprecated is just that `torchrun` does the same but also provides more functionalities.
I improved the description of this PR accordingly.<|||||>However `torchrun` has only been available since the release of `torch` 1.10. I guess we want to keep compatibility with some previous versions of `torch` right? @sgugger @ArthurZucker <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,779 | closed | Add Flax Whisper for audio classification | ### Feature request
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of WhisperForAudioClassification. It would be great to add the Flax equivalent for cross-library equivalence β»οΈ
### Motivation
Whisper is an encoder-decoder model for speech recognition. However, we can repurpose the model for other speech tasks, such as audio classification.
Audio classification is the task of mapping from an input speech sequence to a single class prediction. For more details, refer to the task page on the Hub: https://huggingface.co/tasks/audio-classification
For audio classification, we only require a single model output. Thus, we do not need the auto-regressive generation capacities of the Whisper decoder (which is used to generate a sequence of text tokens during speech recognition). Instead, we can just use the Whisper encoder to get hidden states, and add a classification head on top to make class label predictions.
This is analogous to using a Wav2Vec2 model for audio classification: the Wav2Vec2 encoder is used to get hidden states, and a classification head added on top to make class label predictions.
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of WhisperForAudioClassification. It required adding a projection layer and classification layer on top of the WhisperEncoder. For more details, refer directly to the pull request.
It would be great to add the Flax equivalent of this model for cross-framework support.
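As a rough starting point, the extra head could look something like the sketch below (module names are illustrative; the exact structure and pooling order should mirror the PyTorch implementation in the PR above):
```python
import flax.linen as nn
import jax.numpy as jnp

class FlaxWhisperClassificationHeadSketch(nn.Module):
    classifier_proj_size: int
    num_labels: int
    dtype: jnp.dtype = jnp.float32

    @nn.compact
    def __call__(self, encoder_hidden_states):
        # project the encoder outputs, mean-pool over the time axis, then classify
        hidden_states = nn.Dense(self.classifier_proj_size, dtype=self.dtype)(encoder_hidden_states)
        pooled_output = jnp.mean(hidden_states, axis=1)
        return nn.Dense(self.num_labels, dtype=self.dtype)(pooled_output)
```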
The most difficult part of this PR will be getting the model tester to work. You can see from the PyTorch PR that we require a standalone tester for the audio classification model. This is because the original Whisper model is an encoder-decoder model, but the audio classification model is an encoder-only model. Thus, we require different testing logic.
### Your contribution
Opening this one up to the community! This will be quite a fun JAX/Flax PR! π
If you're interested in tackling this, free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration! | 02-24-2023 08:58:47 | 02-24-2023 08:58:47 | Hi. It's my first time contributing to open source. I want to tackle this issue. How can I get started?<|||||>I have contributed to a few good first issues on HF, would like to take this to learn JAX if available!<|||||>@Potato-Cracker , @yhl48 Are you guys currently working on it? I have a working branch locally with passing tests, but if you guys would like to make PR, that's totally cool too. <|||||>@Shubhamai please go ahead with the PR!<|||||>Uh, looks like a PR is already submitted :smile: , I will see if I can assist the linked PR. <|||||>Very cool that there's so much interest in adding Flax models! Great to see that the JAX/Flax community is so active π Would you guys be interested in finding other PyTorch models to port to JAX/Flax in `transformers`?<|||||>@sanchit-gandhi I will be happy to contribute to some of them, would be great if you have any suggestions on any particular models!<|||||>Very cool! You can take a look at the model integration table here: https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx#supported-frameworks
There are a bunch of popular models that are supported in PyTorch but not Flax, LLaMa being one of them! This could be a cool model addition if you're interested?<|||||>I would love to take up LLaMa if it's available.<|||||>Very cool! What I would suggest doing is starting from the Flax GPT-Neo model (since this is the Flax model most similar to LLaMa) and then adding the new bits in<|||||>@sanchit-gandhi Would love to take on https://huggingface.co/openai-gpt. I just hope inferencing on my mac works out<|||||>@sanchit-gandhi .hello, I would like to work on TAPAS. |
transformers | 21,778 | open | Add TensorFlow Wav2Vec2 for sequence classification | ### Feature request
Wav2Vec2 is one of the most popular speech recognition models, used over 2 million times monthly. In the PyTorch modelling code, we have Wav2Vec2 for speech recognition _and_ Wav2Vec2 for audio classification. However, in TensorFlow, we only have Wav2Vec2 for speech recognition. It would be great to add Wav2Vec2 for audio classification to the TensorFlow modelling code for cross-framework equivalence!
### Motivation
The audio classification class for PyTorch Wav2Vec2 lives under `Wav2Vec2ForSequenceClassification`:
https://github.com/huggingface/transformers/blob/13489248fa8f2cda7503628204f8f43b108797a2/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1745
For this feature request, we'll need to port this PyTorch code into TensorFlow to create an equivalent TensorFlow class, `TFWav2Vec2ForSequenceClassification`.
This means adding a projection layer and classification layer on top of the base `TFWav2Vec2Model`. See the PyTorch code for reference:
https://github.com/huggingface/transformers/blob/13489248fa8f2cda7503628204f8f43b108797a2/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1753-L1758
To check that our implementation is correct, we can do one forward pass of the PyTorch model and a forward pass of the TensorFlow model with the same inputs. If the output logits agree to within 1e-5, we know that our TensorFlow model is correct. We can then enable PT-TF cross tests in the modelling file so that these checks are performed by the CI.
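For concreteness, the check could look roughly like this once the TF class exists (the checkpoint name is only an example):
```python
import numpy as np
import torch
from transformers import TFWav2Vec2ForSequenceClassification, Wav2Vec2ForSequenceClassification

checkpoint = "superb/wav2vec2-base-superb-ks"  # example checkpoint; any Wav2Vec2 classification model works
pt_model = Wav2Vec2ForSequenceClassification.from_pretrained(checkpoint)
tf_model = TFWav2Vec2ForSequenceClassification.from_pretrained(checkpoint, from_pt=True)

inputs = np.random.randn(1, 16000).astype(np.float32)  # one second of dummy audio at 16 kHz
with torch.no_grad():
    pt_logits = pt_model(torch.from_numpy(inputs)).logits.numpy()
tf_logits = tf_model(inputs).logits.numpy()

print("max abs diff:", np.abs(pt_logits - tf_logits).max())  # should be below 1e-5
```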
### Your contribution
Opening this one up to the community! If you're interested in tackling this, free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration! | 02-24-2023 08:53:01 | 02-24-2023 08:53:01 | This feature request is closely related to #21777! Once we have the TF Wav2Vec2 model for sequence classification added, we can copy across the projection layers and classification layers to Whisper in order to add `TFWhisperForAudioClassifcation`. Two birds with one stone β‘οΈ<|||||>Hi @sanchit-gandhi I would love to take this up.<|||||>Very cool @nandwalritik! The first thing to do would be to add the equivalent TensorFlow code for the projection layer and classification layer on top of the base `TFWav2Vec2Model`. Do you want to have a go at adding this in a new PR? Happy to help with any questions / guidance! There's a bit of info as to where the PyTorch code lives in the original post ^<|||||>Hi @sanchit-gandhi I have added some initial changes in #22073 PR, but while initializing it with pytorch weights
```model_tf = TFWav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks",from_pt=True)``` like this it gives `Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:` can you guide me with this?
* I checked the shapes for `hidden_states` and `pooled_output` in pytorch and tf implementation they both are matching.<|||||>hi @sanchit-gandhi can you guide me for above error, so that I can make all the required changes and close the PR.<|||||>Hey,
Can you share the complete stack trace?
>Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:
The important part of the error is _Some_. Most likely the classification head is not being loaded correctly.
Questions:
1. Is it a warning? or is it an error?
2. Did you try running the model after this?
3. Tried using the same model for PyTorch and see if you get the same error.
cc: @nandwalritik <|||||>> Hey, Can you share the complete stack trace?
>
> > Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification:
>
> The important part of the error is _Some_. Most likely the classification head is not being loaded correctly.
>
> Questions:
>
> 1. Is it a warning? or is it an error?
> 2. Did you try running the model after this?
> 3. Tried using the same model for PyTorch and see if you get the same error.
>
> cc: @nandwalritik
<details>
<summary>Stacktrace</summary>
```
>>> tf_model = TFWav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks",from_pt=True)
/home/nandwalritik/nandwalritik/transformers/src/transformers/configuration_utils.py:379: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
warnings.warn(
TFWav2Vec2ForSequenceClassification has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU
TFWav2Vec2Model has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFWav2Vec2ForSequenceClassification: ['wav2vec2.encoder.layers.10.attention.q_proj.weight', 'wav2vec2.encoder.layers.1.attention.k_proj.bias', 'wav2vec2.encoder.layers.1.attention.q_proj.bias', 'wav2vec2.encoder.layers.0.attention.v_proj.bias', 'wav2vec2.encoder.layers.6.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.attention.v_proj.weight', 'wav2vec2.encoder.layers.1.attention.out_proj.bias', 'wav2vec2.encoder.layers.0.layer_norm.weight', 'wav2vec2.encoder.layers.3.layer_norm.weight', 'wav2vec2.encoder.layers.10.attention.out_proj.weight', 'wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.bias', 'wav2vec2.feature_extractor.conv_layers.4.conv.weight', 'wav2vec2.encoder.pos_conv_embed.conv.weight_v', 'wav2vec2.encoder.layers.8.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.layer_norm.weight', 'wav2vec2.encoder.layers.0.attention.k_proj.bias', 'wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.attention.v_proj.weight', 'wav2vec2.encoder.layers.5.attention.k_proj.weight', 'wav2vec2.encoder.layers.6.final_layer_norm.weight', 'wav2vec2.encoder.layers.9.feed_forward.output_dense.weight', 'wav2vec2.masked_spec_embed', 'wav2vec2.encoder.layers.6.attention.q_proj.weight', 'wav2vec2.encoder.layers.4.attention.v_proj.bias', 'wav2vec2.encoder.layers.11.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.6.attention.q_proj.bias', 'wav2vec2.encoder.layers.0.attention.q_proj.bias', 'wav2vec2.encoder.layers.4.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.attention.k_proj.bias', 'wav2vec2.encoder.layers.7.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.3.attention.k_proj.bias', 'wav2vec2.encoder.layers.8.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.attention.out_proj.weight', 'wav2vec2.encoder.layers.7.attention.out_proj.bias', 'wav2vec2.encoder.layers.8.attention.q_proj.bias', 'wav2vec2.feature_extractor.conv_layers.2.conv.weight', 'wav2vec2.encoder.layers.11.feed_forward.output_dense.weight', 'wav2vec2.encoder.pos_conv_embed.conv.bias', 'wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.feed_forward.output_dense.bias', 'wav2vec2.feature_projection.projection.weight', 'wav2vec2.encoder.layers.5.attention.v_proj.weight', 'wav2vec2.encoder.layers.10.attention.out_proj.bias', 'wav2vec2.encoder.layers.4.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.attention.k_proj.weight', 'wav2vec2.encoder.layers.7.layer_norm.bias', 'wav2vec2.encoder.layers.1.attention.q_proj.weight', 'wav2vec2.encoder.layers.7.layer_norm.weight', 'wav2vec2.feature_extractor.conv_layers.1.conv.weight', 'wav2vec2.encoder.layers.8.attention.v_proj.bias', 'projector.bias', 'wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.attention.q_proj.weight', 'wav2vec2.encoder.layers.8.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.10.attention.k_proj.bias', 'wav2vec2.encoder.layers.4.attention.out_proj.bias', 'wav2vec2.encoder.layers.6.final_layer_norm.bias', 'layer_weights', 'wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.attention.k_proj.bias', 'wav2vec2.encoder.layers.7.attention.v_proj.weight', 
'wav2vec2.encoder.layers.2.attention.out_proj.bias', 'wav2vec2.encoder.layers.4.attention.out_proj.weight', 'wav2vec2.encoder.layers.0.final_layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.q_proj.weight', 'wav2vec2.encoder.layers.3.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.feed_forward.output_dense.weight', 'wav2vec2.feature_projection.layer_norm.bias', 'wav2vec2.encoder.layers.6.attention.k_proj.weight', 'wav2vec2.encoder.layers.7.attention.v_proj.bias', 'wav2vec2.encoder.layers.4.attention.k_proj.bias', 'wav2vec2.encoder.layers.4.layer_norm.weight', 'wav2vec2.encoder.layers.9.attention.q_proj.bias', 'wav2vec2.encoder.layers.4.attention.q_proj.bias', 'wav2vec2.encoder.layers.8.layer_norm.weight', 'wav2vec2.encoder.layers.2.final_layer_norm.weight', 'wav2vec2.feature_projection.projection.bias', 'wav2vec2.encoder.layers.3.final_layer_norm.bias', 'wav2vec2.encoder.layers.8.layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.k_proj.bias', 'wav2vec2.encoder.layers.5.layer_norm.weight', 'wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.6.attention.v_proj.bias', 'wav2vec2.encoder.layers.8.attention.v_proj.weight', 'wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.1.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.5.attention.out_proj.bias', 'wav2vec2.encoder.layers.10.layer_norm.weight', 'wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.9.attention.q_proj.weight', 'wav2vec2.encoder.layers.5.attention.v_proj.bias', 'wav2vec2.encoder.layers.6.attention.out_proj.weight', 'wav2vec2.encoder.layers.3.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.attention.q_proj.bias', 'wav2vec2.feature_projection.layer_norm.weight', 'wav2vec2.encoder.layers.1.layer_norm.bias', 'wav2vec2.feature_extractor.conv_layers.6.conv.weight', 'wav2vec2.encoder.layers.7.attention.q_proj.bias', 'wav2vec2.encoder.layers.9.attention.k_proj.bias', 'wav2vec2.encoder.layers.3.attention.q_proj.weight', 'wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.3.final_layer_norm.weight', 'wav2vec2.encoder.layers.2.attention.v_proj.weight', 'wav2vec2.encoder.layers.0.attention.out_proj.bias', 'wav2vec2.encoder.layers.3.layer_norm.bias', 'wav2vec2.encoder.layers.6.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.0.attention.out_proj.weight', 'wav2vec2.encoder.layers.4.layer_norm.bias', 'wav2vec2.encoder.layers.5.attention.q_proj.bias', 'wav2vec2.encoder.layers.5.attention.q_proj.weight', 'wav2vec2.encoder.layers.9.final_layer_norm.bias', 'wav2vec2.encoder.layers.5.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.4.attention.q_proj.weight', 'wav2vec2.encoder.layers.2.attention.out_proj.weight', 'wav2vec2.feature_extractor.conv_layers.3.conv.weight', 'wav2vec2.encoder.layers.5.final_layer_norm.weight', 'wav2vec2.encoder.layers.2.attention.q_proj.bias', 'wav2vec2.encoder.layer_norm.weight', 'wav2vec2.encoder.layers.3.attention.v_proj.bias', 'wav2vec2.encoder.layers.7.final_layer_norm.weight', 'wav2vec2.encoder.layers.6.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.attention.k_proj.weight', 'wav2vec2.encoder.layer_norm.bias', 'wav2vec2.encoder.layers.7.attention.out_proj.weight', 'wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.weight', 'classifier.weight', 
'wav2vec2.encoder.layers.1.attention.v_proj.bias', 'wav2vec2.encoder.layers.1.attention.out_proj.weight', 'wav2vec2.encoder.layers.2.attention.q_proj.weight', 'wav2vec2.encoder.layers.11.attention.k_proj.weight', 'wav2vec2.encoder.layers.4.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.7.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.8.final_layer_norm.weight', 'wav2vec2.encoder.layers.11.attention.out_proj.weight', 'wav2vec2.encoder.pos_conv_embed.conv.weight_g', 'wav2vec2.encoder.layers.10.final_layer_norm.bias', 'projector.weight', 'wav2vec2.encoder.layers.0.attention.q_proj.weight', 'wav2vec2.encoder.layers.6.attention.v_proj.weight', 'wav2vec2.encoder.layers.11.attention.v_proj.bias', 'wav2vec2.feature_extractor.conv_layers.0.conv.weight', 'wav2vec2.encoder.layers.10.attention.k_proj.weight', 'wav2vec2.encoder.layers.10.feed_forward.output_dense.bias', 'wav2vec2.feature_extractor.conv_layers.0.layer_norm.bias', 'wav2vec2.encoder.layers.2.attention.v_proj.bias', 'wav2vec2.encoder.layers.1.layer_norm.weight', 'wav2vec2.encoder.layers.7.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.1.final_layer_norm.weight', 'wav2vec2.encoder.layers.3.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.4.attention.k_proj.weight', 'wav2vec2.encoder.layers.0.layer_norm.bias', 'wav2vec2.encoder.layers.11.final_layer_norm.bias', 'wav2vec2.encoder.layers.9.attention.out_proj.bias', 'wav2vec2.encoder.layers.8.final_layer_norm.bias', 'wav2vec2.encoder.layers.10.final_layer_norm.weight', 'wav2vec2.encoder.layers.1.final_layer_norm.bias', 'wav2vec2.encoder.layers.1.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.attention.v_proj.bias', 'wav2vec2.encoder.layers.3.attention.out_proj.weight', 'wav2vec2.encoder.layers.3.attention.out_proj.bias', 'wav2vec2.encoder.layers.9.attention.v_proj.bias', 'wav2vec2.encoder.layers.4.attention.v_proj.weight', 'wav2vec2.encoder.layers.1.attention.v_proj.weight', 'wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.11.attention.out_proj.bias', 'wav2vec2.encoder.layers.5.final_layer_norm.bias', 'wav2vec2.encoder.layers.5.attention.out_proj.weight', 'wav2vec2.encoder.layers.10.attention.q_proj.bias', 'wav2vec2.encoder.layers.6.layer_norm.bias', 'wav2vec2.encoder.layers.7.final_layer_norm.bias', 'classifier.bias', 'wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.6.attention.k_proj.bias', 'wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.2.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.2.attention.k_proj.weight', 'wav2vec2.encoder.layers.2.layer_norm.weight', 'wav2vec2.encoder.layers.3.attention.v_proj.weight', 'wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.0.feed_forward.output_dense.weight', 'wav2vec2.encoder.layers.10.layer_norm.bias', 'wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.9.attention.v_proj.weight', 'wav2vec2.encoder.layers.9.final_layer_norm.weight', 'wav2vec2.encoder.layers.11.layer_norm.weight', 'wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.bias', 'wav2vec2.encoder.layers.1.attention.k_proj.weight', 'wav2vec2.feature_extractor.conv_layers.5.conv.weight', 
'wav2vec2.encoder.layers.2.layer_norm.bias', 'wav2vec2.encoder.layers.2.final_layer_norm.bias', 'wav2vec2.encoder.layers.2.feed_forward.output_dense.bias', 'wav2vec2.encoder.layers.3.attention.q_proj.bias', 'wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.bias', 'wav2vec2.feature_extractor.conv_layers.0.layer_norm.weight', 'wav2vec2.encoder.layers.0.attention.v_proj.weight', 'wav2vec2.encoder.layers.2.attention.k_proj.bias', 'wav2vec2.encoder.layers.9.layer_norm.bias', 'wav2vec2.encoder.layers.8.attention.k_proj.bias', 'wav2vec2.encoder.layers.11.attention.q_proj.weight', 'wav2vec2.encoder.layers.4.final_layer_norm.bias', 'wav2vec2.encoder.layers.6.layer_norm.weight', 'wav2vec2.encoder.layers.8.attention.k_proj.weight', 'wav2vec2.encoder.layers.11.layer_norm.bias', 'wav2vec2.encoder.layers.9.attention.out_proj.weight', 'wav2vec2.encoder.layers.0.final_layer_norm.weight', 'wav2vec2.encoder.layers.5.layer_norm.bias', 'wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.weight', 'wav2vec2.encoder.layers.9.feed_forward.output_dense.bias']
- This IS expected if you are initializing TFWav2Vec2ForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFWav2Vec2ForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
Some weights or buffers of the TF 2.0 model TFWav2Vec2ForSequenceClassification were not initialized from the PyTorch model and are newly initialized: ['tf_wav2_vec2_model_1.wav2vec2.masked_spec_embed', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.0.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.1.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.2.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.3.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.4.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.5.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_extractor.conv_layers.6.conv.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.projection.weight', 'tf_wav2_vec2_model_1.wav2vec2.feature_projection.projection.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.weight_v', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.weight_g', 'tf_wav2_vec2_model_1.wav2vec2.encoder.pos_conv_embed.conv.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.0.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.weight', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.1.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.2.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.3.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.layer_norm.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.4.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.5.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.6.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.attention.out_proj.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.7.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.8.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.9.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.v_proj.bias', 
'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.10.final_layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.k_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.k_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.q_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.q_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.v_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.v_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.out_proj.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.attention.out_proj.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.layer_norm.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.intermediate_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.output_dense.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.feed_forward.output_dense.bias', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.final_layer_norm.weight', 'tf_wav2_vec2_model_1.wav2vec2.encoder.layers.11.final_layer_norm.bias', 'dense_2.weight', 'dense_2.bias', 'dense_3.weight', 'dense_3.bias', 'Variable']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
</details>
1. It's a warning.
2. I tried running on sample inputs, the same as [here](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification.forward.example)
```
>>> inputs_tf = feature_extractor(dataset[0]["audio"]["array"],sampling_rate=sampling_rate,return_tensors="tf")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
...
>>> logits = tf_model(**inputs_tf).logits
>>> inputs_tf = feature_extractor(dataset[0]["audio"]["array"],sampling_rate=sampling_rate,return_tensors="tf")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
...
>>> logits_tf = tf_model(**inputs_tf).logits
>>> logits
tensor([[-0.0732, -0.5845, -3.5185, -1.4014, -0.1823, -2.9616, -3.1919, -1.3804,
-1.1895, 0.4006, 6.4601, -6.2880]])
>>> logits_tf
<tf.Tensor: shape=(1, 12), dtype=float32, numpy=
array([[-1.310684 , 0.13441604, 0.6363504 , -0.5188892 , 0.46565807,
-0.25152174, -0.45716044, -0.14784068, 0.176272 , 1.4507922 ,
-1.9966551 , -0.5963241 ]], dtype=float32)>
>>> equal = torch.allclose(logits,torch.tensor(logits_tf.numpy()), rtol=1e-5)
>>> equal
False
```
3. The PyTorch model doesn't give any error/warning like that.
<|||||>Ok. You have enough to go on here.
The output is not equal because you're not using all the weights in the pretrained model.
1. The warning states that for some reason some layers were initialized with the pretrained weights and some weren't.
2. This usually happens if the model doesn't match perfectly.
3. If the model has N layers and only the first M match exactly then only the first M will be loaded from the pretrained model.
So, print the dimensions of all the layers of both models and verify layer by layer if everything matches perfectly.
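Something along these lines is usually enough to spot where the two stop lining up (a rough sketch; `pt_model` and `tf_model` are assumed to be the two loaded models):
```python
# Dump every weight name and shape from both models, then compare them side by side.
pt_shapes = {name: tuple(param.shape) for name, param in pt_model.named_parameters()}
tf_shapes = {weight.name: tuple(weight.shape) for weight in tf_model.weights}

for name, shape in sorted(pt_shapes.items()):
    print(f"PT {name}: {shape}")
for name, shape in sorted(tf_shapes.items()):
    print(f"TF {name}: {shape}")

print(f"PT parameters: {len(pt_shapes)}, TF variables: {len(tf_shapes)}")
```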
cc: @nandwalritik <|||||>Thanks for helping out here @vimarshc! Your tips were spot on!
@nandwalritik has the PR nearly finished and has verified equality with the PyTorch model. |
transformers | 21,777 | open | Add TensorFlow Whisper model for audio classification | ### Feature request
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of `WhisperForAudioClassification`. It would be great to add the TensorFlow equivalent.
### Motivation
Whisper is an encoder-decoder model for speech recognition. However, we can repurpose the model for other speech tasks, such as _audio classification_.
Audio classification is the task of mapping from an input speech sequence to a single class prediction. For more details, refer to the task page on the Hub: https://huggingface.co/tasks/audio-classification
For audio classification, we only require a _single_ model output. Thus, we do not need the auto-regressive generation capacities of the Whisper decoder (which is used to generate a _sequence_ of text tokens during speech recognition). Instead, we can just use the Whisper encoder to get hidden states, and add a classification head on top to make class label predictions.
This is analogous to using a Wav2Vec2 model for audio classification: the Wav2Vec2 encoder is used to get hidden states, and a classification head is added on top to make class label predictions.
The PR https://github.com/huggingface/transformers/pull/21754 adds the PyTorch version of `WhisperForAudioClassification`. It required adding a projection layer and classification layer on top of the `WhisperEncoder`. For more details, refer directly to the pull request.
It would be great to add the TensorFlow equivalent of this model for cross-framework support.
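To make the shape of the change concrete, here is a very rough sketch of what the TF head could look like (attribute names mirror the PyTorch PR, but treat this as pseudocode rather than the final implementation):

```python
import tensorflow as tf


class TFWhisperClassificationHead(tf.keras.layers.Layer):
    """Projection + classification head on top of the Whisper encoder (sketch only)."""

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.projector = tf.keras.layers.Dense(config.classifier_proj_size, name="projector")
        self.classifier = tf.keras.layers.Dense(config.num_labels, name="classifier")

    def call(self, encoder_hidden_states):
        # project the encoder hidden states, then mean-pool over time to get one vector per clip
        hidden_states = self.projector(encoder_hidden_states)
        pooled_output = tf.reduce_mean(hidden_states, axis=1)
        return self.classifier(pooled_output)
```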
The most difficult part of this PR will be getting the model tester to work. You can see from the PyTorch PR that we require a standalone tester for the audio classification model. This is because the original Whisper model is an encoder-decoder model, but the audio classification model is an encoder-only model. Thus, we require different testing logic.
### Your contribution
Opening this one up to the community! If you're interested in tackling this, feel free to drop a comment in this thread and open a PR when you're ready. More than happy to answer any questions / queries about this integration! | 02-24-2023 08:43:56 | 02-24-2023 08:43:56 | Hey @sanchit-gandhi, if we're just using the encoder do you think a CTC head could also work, i.e. `WhisperForCTC`?<|||||>Hey @OllieBroadhurst! I don't think an encoder-only Whisper model for speech recognition would be super practical, since we'd then need an _external_ language model to correct the phonetic errors made by the CTC model. IMO we're better off using the _internal_ language model provided by the decoder in the original encoder-decoder architecture. The encoder-decoder model is trained end-to-end and on all of the Whisper pre-training data, so it's likely going to be better than any combination of CTC + LM we train ourselves.<|||||>Hello @OllieBroadhurst, are you currently working on this? I would love to help out if I can/you need it. Otherwise, I would like to take a look at this issue.<|||||>Hi @adit299! I'm not, so you can take it away!<|||||>Great, will do! |
transformers | 21,776 | closed | Fix flaky test for log level | # What does this PR do?
This should fix the flakiness of the log level test. If I'm not wrong, the flakiness came from the fact that the log level of Transformers can be changed by other tests (for instance, lots of Trainer tests change it), and thus assuming it would be WARNING at the beginning of the test was wrong. Instead, we test depending on the actual log level observed, which should fix the issue. | 02-24-2023 08:30:15 | 02-24-2023 08:30:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the fix. There is one remaining test to fix, however, which appears after this PR.
I am able to reproduce the issue (deterministically) with
```bash
python -m pytest tests/trainer tests/utils
```
but not with
```bash
python -m pytest tests/trainer
```
or with
```bash
python -m pytest tests/trainer/test_trainer.py -k test_log_level
```
or with the reversed order
```
python -m pytest tests/utils tests/trainer
```
**With this PR, the `test_log_level` test passes with the 1st command mentioned above, but we get**
```bash
FAILED tests/utils/test_logging.py::HfArgumentParserTest::test_advisory_warnings - AssertionError: '' != 'Testing 1, 2, 3\n'
+ Testing 1, 2, 3
```
However, on this PR again, and with
```bash
python -m pytest tests/utils
```
or even with the reversed order
```bash
python -m pytest tests/utils tests/trainer
```
it passes.<|||||>The test in question didn't set any log level, so the log level was still at ERROR when running the tests in the sequence you gave. I fixed it by resetting the logger at the beginning of the test. |
transformers | 21,775 | closed | [FX tracer] Make `concrete_args` from outside available | # What does this PR do?
The current `HFTracer` implementation will replace `concrete_args` with their default values from the function signature. This behavior is different from the one in the description of the flag `complete_concrete_args_with_inputs_not_in_dummy_inputs`:
> If `True`, and `dummy_inputs` is specified, every argument that `root` can take that is not in `dummy_inputs` **AND NOT IN `concrete_args`** will be added to `concrete_args`, otherwise does nothing.
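In other words, the expected behaviour is roughly the following (an illustrative sketch, not the actual `HFTracer` code; the helper name is made up for illustration):

```python
import inspect


def complete_concrete_args(root, dummy_inputs, concrete_args):
    # Fill in defaults only for arguments that are neither in dummy_inputs nor already
    # provided by the caller in concrete_args; user-supplied values must be kept as-is.
    signature = inspect.signature(root.forward)
    for name, param in signature.parameters.items():
        if name not in dummy_inputs and name not in concrete_args:
            concrete_args[name] = param.default
    return concrete_args
```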
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 02-24-2023 06:16:52 | 02-24-2023 06:16:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@michaelbenayoun Could you merge this PR? I have no write access. Thanks |