repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 21,270 | closed | Adding resource section to GPT-J docs | # What does this PR do?
Adds a resources section to the GPT-J documentation.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20055 (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @stevhliu @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-23-2023 20:55:23 | 01-23-2023 20:55:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello,
I am currently working on finding resources for GPT-J; mainly I have been using the links mentioned in #20055 and searching for GPT-J in each of them. I found a few links, but I feel this is not the best way to find the resources. Can you share some tips on how you were able to find more resources? @stevhliu
What I have so far:
GPT-J Description:
- https://huggingface.co/EleutherAI/gpt-j-6B
Blog Posts:
- https://huggingface.co/blog/gptj-sagemaker
- https://www.philschmid.de/gptj-deepspeed-inference
NielsRogge's Transformers Tutorials:
- https://github.com/kingoflolz/mesh-transformer-jax<|||||>Thanks for your work, that's a great start and I think you have most of them! You can also add:
* This [GPT-J notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb) from Niels Transformers Tutorials for inference.
* A [chapter](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) in the Hugging Face Course for causal language modeling.
* The example scripts and notebooks for causal language modeling and text generation (see the last three bullet points under the Resource section [here](https://huggingface.co/docs/transformers/model_doc/gpt2#resources) for GPT-2).<|||||>It looks like the formatting for the docs is still not correct... the bullet points are all jumbled up. Looking into this...<|||||>I have marked the pull request as ready to review 👍 @stevhliu |
transformers | 21,269 | closed | [GenerationConfig] add additional kwargs handling | # What does this PR do?
This adds the same support that we have in the `PretrainedConfig`, where additional kwargs are automatically updated.
This will allow users to re-use the `GenerationConfig` class for most use cases, without having to add a model-specific class. I was trying to load [the following `generation_config`](https://huggingface.co/openai/whisper-small/discussions/10/files) and got half of my additional arguments deleted 😉 | 01-23-2023 20:50:35 | 01-23-2023 20:50:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also will have to add tests + this is apparently breaking a lot of things haha <|||||>Okay, after talking a bit with @gante and testing, this is not the best; this PR will focus on other missing functionalities, mostly the addition of the `dict_torch_dtype_to_str` function, as the `dtype` could be passed to the generation 😉
The problem is mostly that if we process all the additional kwargs, we are getting all of the arguments from the `configuration.json` which mixes things up.
The simplest solution is either to store them in `generate_kwargs` or to re-write the configuration for the model. I thought this was cumbersome, but it is actually the most logical and cleanest way to do it.
EDIT: I'm just going to add a condition: if the kwargs come from a config file, they are not added. <|||||>Now the only thing left is to add a pretty test with all the different edge cases I encountered. |
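A purely hypothetical sketch of the condition mentioned in the EDIT above; the names and structure are illustrative and not the merged code:

```python
# Illustrative only: ignore extra kwargs when they originate from a model's config.json
def _update_generation_config(generation_config, extra_kwargs, from_model_config=False):
    if from_model_config:
        # keys coming from config.json would pollute the generation config, so skip them
        return generation_config
    for key, value in extra_kwargs.items():
        setattr(generation_config, key, value)  # keep genuinely user-provided extras
    return generation_config
```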
transformers | 21,268 | closed | Supported pipeline tasks update | The docstring of the `transformers.pipeline` listed only 16 supported tasks while `SUPPORTED_TASKS` contains 24 tasks. This PR adds the missing tasks to the docstrings so that the generated reference docs accurately list all of the supported tasks here - https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline
| 01-23-2023 18:29:55 | 01-23-2023 18:29:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,267 | closed | Remove CLI spams with Whisper FeatureExtractor | # What does this PR do?
Whisper feature extractor representation includes the MEL filters, a list of lists that is rendered as ~16,000 lines. This needlessly spams the command line. I added a `__repr__` method that replaces this list with the string `<array of shape (80, 201)>`.
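A minimal sketch of the idea (not the exact diff in this PR; the attribute name `mel_filters` is taken from the description above):

```python
import copy

import numpy as np


class FeatureExtractorReprSketch:
    """Illustrative only: summarize huge array attributes instead of dumping them."""

    def __repr__(self):
        attrs = copy.deepcopy(self.__dict__)
        if "mel_filters" in attrs:
            shape = np.asarray(attrs["mel_filters"]).shape
            attrs["mel_filters"] = f"<array of shape {shape}>"  # avoid printing ~16k lines of floats
        return f"{self.__class__.__name__} {attrs}"
```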
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| 01-23-2023 18:09:01 | 01-23-2023 18:09:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, thanks for the contribution! I agree with you, we should not save the filters as they just depend on the parameters with which they were created, which is why I would be in favor of simply adding the following:
```python
def to_dict(self) -> Dict[str, Any]:
    """
    Serializes this instance to a Python dictionary.
    Returns:
        `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
    """
    output = copy.deepcopy(self.__dict__)
    output["feature_extractor_type"] = self.__class__.__name__
    if "mel_filters" in output:
        del output["mel_filters"]
    return output
```
Also cc @sanchit-gandhi, this seems very logical to me<|||||>Yes indeed, I think this solution is better<|||||>For the remaining failing test, I suggest you rebase on main 😉 <|||||>You can also modify the test to make the CI go green 😉 |
transformers | 21,266 | closed | Use return_tensors="np" instead of "tf" | This PR is doing exactly the same thing as the [notebooks PR here](https://github.com/huggingface/notebooks/pull/308).
In our TF examples, we use return_tensors="tf" for the data collators. However, `prepare_tf_dataset` and `to_tf_dataset` actually use a NumPy loader internally, which we wrap with a `tf.data.Dataset` at the end. As a result, return_tensors="np" works much better for them, and avoids some weird slowdown bugs we've experienced.
This PR replaces every instance in our examples with return_tensors="np". (cc @gante, @amyeroberts, @sayakpaul just so you're aware)
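For illustration, a hedged sketch of the pattern the examples now follow (the model name and `tokenized_dataset` are placeholders, not taken from a specific example script):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# "np" matches the NumPy loader used internally by prepare_tf_dataset / to_tf_dataset
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np")

tf_dataset = model.prepare_tf_dataset(
    tokenized_dataset,        # an already-tokenized 🤗 Dataset (placeholder)
    batch_size=16,
    shuffle=True,
    collate_fn=data_collator,
)
```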
| 01-23-2023 17:23:51 | 01-23-2023 17:23:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,265 | closed | Notebook examples grouping and update | This PR groups the notebook examples on [this page](https://huggingface.co/docs/transformers/main/en/notebooks) by modality for easier navigation. It also adds a few notebooks from the official repo that were not previously listed, e.g. fine-tuning models for image classification, semantic segmentation, video classification, image similarity, and time series.
| 01-23-2023 16:00:05 | 01-23-2023 16:00:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,264 | closed | Generate: save generation config with the models' `.save_pretrained()` | # What does this PR do?
As originally discussed in #20388, this PR makes `model.save_pretrained()` also call `model.generation_config.save_pretrained()` if it is a generation-capable model (on all 3 frameworks).
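Illustratively, after this change a (hedged) snippet like the following would also write a `generation_config.json` next to the model weights:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.save_pretrained("./my-gpt2")   # now also calls model.generation_config.save_pretrained(...)
# ./my-gpt2/ contains config.json, generation_config.json, and the weight files
```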
It also adds a bunch of tests, namely:
- tests whether the generation config can be pushed to the hub
- tests whether `model.save_pretrained()` actually saves `generation_config.json` if it is a model that can generate (on all 3 frameworks) | 01-23-2023 15:29:54 | 01-23-2023 15:29:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,263 | closed | [Whisper] Add rescaling function with `do_normalize` | # What does this PR do?
Fixes #19888 by allowing the user to `normalise` the input audio before computing the MEL spectrogram. | 01-23-2023 15:28:44 | 01-23-2023 15:28:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The test works locally, merging 😉 |
transformers | 21,262 | closed | [Whisper] ASR Pipeline with "return_timestamps=True" gives IndexError: index -1 is out of bounds for axis 0 with size 0 | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @Narsil @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the link to [Google Colab Notebook](https://colab.research.google.com/drive/1ZLQXzD1IW2D1fz0WZOSEghUpewd0N3Bn?usp=sharing)
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model='openai/whisper-small.en',
chunk_length_s=30,
stride_length_s=(5,5),
device=0,
return_timestamps=True,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
language='en', task='transcribe'
)
res = pipe('trial.wav')
print(res)
```
Here is the stack trace for the error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-8-3813911bc8bc>](https://localhost:8080/#) in <module>
1 # run with transformers installed from latest commit: 00ba7cadd812437708b380ab078a3cfe8cfaff31 at the moment.
2 # index out of bounds error with return_timestamps=True!!
----> 3 res = pipe('trial.wav')
4 print(res)
5
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in __call__(self, inputs, **kwargs)
370 `"".join(chunk["text"] for chunk in output["chunks"])`.
371 """
--> 372 return super().__call__(inputs, **kwargs)
373
374 def _sanitize_parameters(
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1074 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1075 elif self.framework == "pt" and isinstance(self, ChunkPipeline):
-> 1076 return next(
1077 iter(
1078 self.get_iterator(
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py](https://localhost:8080/#) in __next__(self)
123 # We're out of items within a batch
124 item = next(self.iterator)
--> 125 processed = self.infer(item, **self.params)
126 # We now have a batch of "inferred things".
127 if self.loader_batch_size is not None:
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in postprocess(self, model_outputs, decoder_kwargs, return_timestamps)
620 items = _find_longest_common_sequence(final_items, self.tokenizer)
621 elif stride and self.type == "seq2seq_whisper" and return_timestamps:
--> 622 items = _find_timestamp_sequence(
623 final_items, self.tokenizer, self.feature_extractor, self.model.config.max_source_positions
624 )
[/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in _find_timestamp_sequence(sequences, tokenizer, feature_extractor, max_source_positions)
103 timestamp_tokens = sequence >= timestamp_begin
104 consecutive = np.where(timestamp_tokens[:-1] & timestamp_tokens[1:])[0] + 1
--> 105 last_timestamp = np.where(timestamp_tokens)[0][-1]
106 consecutive = np.append(consecutive, last_timestamp) if last_timestamp not in consecutive else consecutive
107 if seq_idx != 0:
IndexError: index -1 is out of bounds for axis 0 with size 0
```
### Expected behavior
The problem occurs when using any Whisper model from the Hub with ```return_timestamps=True``` in the ASR Pipeline. The error does NOT occur if timestamps are not forced. | 01-23-2023 14:31:08 | 01-23-2023 14:31:08 | Thanks, this is normal and is currently being fixed 😉 see #21252 <|||||>I also have the same problem; how did you fix it?<|||||>@avishai119 Are you running on `transformers@main`? It should be fixed there.<|||||>What do you mean?
this is my code:
```python
from transformers import AutoProcessor, pipeline
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
import librosa
import torchaudio
from pydub.silence import split_on_silence
processor = AutoProcessor.from_pretrained('.\saved_processor',local_files_only=True)
model = ORTModelForSpeechSeq2Seq.from_pretrained('.\saved_model',local_files_only=True)
model.config.forced_decoder_ids = processor.tokenizer.get_decoder_prompt_ids(language="hebrew", task="transcribe")
speech_recognition_pipeline = pipeline(
"automatic-speech-recognition",
model=model,
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
)
speech_recognition_pipeline.model.config.forced_decoder_ids = speech_recognition_pipeline.tokenizer.get_decoder_prompt_ids(language="hebrew", task="transcribe")
#testing
audio,sr = librosa.load("C:\\Users\\avishai\\Desktop\\whisper-interface\\3.mp3",sr=16000)
result = speech_recognition_pipeline(audio,max_new_tokens=440)
print(result)
```<|||||>When I try to add this:
`speech_recognition_pipeline(audio,max_new_tokens=440,return_timestamps=True)`
it doesn't work :( <|||||>Do you mind sharing the output of `transformers-cli env`?
transformers | 21,261 | closed | installation.mdx. | null | 01-23-2023 13:49:16 | 01-23-2023 13:49:16 | |
transformers | 21,260 | closed | Update TF doc test template | # What does this PR do?
The PR #21106 introduced failures in some doctests:
* src/transformers/models/deit/modeling_tf_deit.py::transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacher.call
* src/transformers/models/resnet/modeling_tf_resnet.py::transformers.models.resnet.modeling_tf_resnet.TFResNetModel.call
* src/transformers/models/segformer/modeling_tf_segformer.py::transformers.models.segformer.modeling_tf_segformer.TFSegformerForImageClassification.call
* src/transformers/models/vit/modeling_tf_vit.py::transformers.models.vit.modeling_tf_vit.TFViTModel.call
This was due to `processor_class` no longer being passed to `add_code_sample_docstrings` e.g. the changes to [modeling_tf_deit.py](https://github.com/huggingface/transformers/pull/21106/files#diff-d8a9a4a182509f1903e7dbcd751d605285c8b62d0c8213a1a9ae1ba15e9fcc77).
Whilst `processor_class` could be removed for the PyTorch models doctests, `{processor_class}` hadn't been removed for the equivalent TensorFlow doctest templates. This updates the test templates to match the equivalent PyTorch ones and resolve failing tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 01-23-2023 13:44:13 | 01-23-2023 13:44:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closed as changes are a subset of changes introduced in #21225 |
transformers | 21,259 | closed | Add methods to PreTrainedModel to use PyTorch's BetterTransformer | As per title.
Should be merged only on the next Optimum release that will include https://github.com/huggingface/optimum/pull/676
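As a hedged sketch of the intended usage (method names as they were eventually exposed, and `optimum` must be installed; this is illustrative, not the PR diff):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
model = model.to_bettertransformer()        # swap supported layers for the PyTorch fast path
# ... run inference ...
model = model.reverse_bettertransformer()   # convert back before save_pretrained / push_to_hub
```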
## Before submitting
Tests are still to be done.
## Who can review?
@younesbelkada @sgugger
| 01-23-2023 13:41:06 | 01-23-2023 13:41:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>as a side note, since in the previous `optimum` versions the `save_pretrained` and `push_to_hub` methods [are not blocked](https://github.com/huggingface/optimum/blob/18e73f3ba4be33071f53650824fe625d3018af40/optimum/bettertransformer/transformation.py#L236), I propose to explicitly block them for transformed models in this PR and/or force users to use a certain version of `optimum`.<|||||>Yes we should probably force the next optimum version.<|||||>Should be ready @sgugger , the documentation has been extended in https://moon-ci-docs.huggingface.co/docs/transformers/pr_21259/en/perf_infer_gpu_one .
Let me know if I should add a test - in which case optimum should be added in the setup.py, I guess.<|||||>@fxmarty there should be no need to add `optimum` in `setup.py`, we can do something similar than `bitsandbytes` and add `optimum` in the Dockerfile of the Docker image that will run the slow tests: https://github.com/huggingface/transformers/blob/0db5d911fc94604f9568b4b212e005ec4600d157/docker/transformers-all-latest-gpu/Dockerfile#L52
I very much agree that we should add tests, especially to test `accelerate` compatibility, happy to help you on this, let me know if you need help<|||||>Thanks, will do!
> especially to test accelerate compatibility
Isn't this already tested on Optimum side?<|||||>> Isn't this already tested on Optimum side?
Yes but the tests [are run on GPU](https://github.com/huggingface/optimum/blob/40a01b3c883ca3c092a4493d3f5ca524ed3109ab/tests/bettertransformer/test_bettertransformer_encoder.py#L186 ): therefore not run on any of the runners on `optimum` on a daily basis (but not sue if there are tested somewhere else) - I just asked individually to each contributor to run the `accelerate` test locally on their GPU before merging (only in case I have serious doubts that the PR breaks anything related to `accelerate`).
Since in `transformers` tests are run on GPU on a daily basis, we can leverage that and set up a small `BetterTransformer` testing suite that runs all the tests + `accelerate` compatibility. Also, this enables us to flag anything we need to upstream to `accelerate` if something breaks the `BT` integration with `accelerate`<|||||>There are tests run daily on GPU in Optimum, for example https://github.com/huggingface/optimum/blob/main/.github/workflows/test_onnxruntime_train.yml and https://github.com/huggingface/optimum/blob/main/.github/workflows/test_onnxruntime_gpu.yml
In my opinion, thorough tests should be added in Optimum, not Transformers. The test I was thinking of in Transformers was only an integration one to check that there's no error.<|||||>There is an issue with `accelerate` loaded models and `transform` from BT, let's wait until this gets fixed before merging this PR<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>not stale<|||||>If you want this PR included in the next release, you should finish the work and have it merged sooner rather than later :-)
The last I saw was Younes telling we should wait for a fix, was that fix added? Then this needs a rebase on main since it has been a while.<|||||>Thanks for the headsup!
Indeed we are working on fixing some bugs on `optimum` side that was introduced by one of my PRs (the revert-transform PR) before adding the `invert_transform` method
We can maybe merge this PR by keeping only `transform` method and blocking the `save_pretrained` & `push_to_hub` methods after transforming the model<|||||>> you should finish the work and have it merged sooner rather than later :-)
There is substantial work left in Optimum before this should be merged. Marking as draft for now!<|||||>OK, so this won't be in the next release of Transformers (probably this week in preparation for PyTorch 2.0).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @fxmarty and @younesbelkada, are there standing PRs in `optimum` that need to be merged for this to proceed/anything we can help with to have this move forward? Thanks :)<|||||>Hey @LysandreJik @sgugger
@fxmarty recently managed to fix all issues related to decoder-based models integration in `optimum`! I believe that this PR could be re-opened, in my understanding we just need to add few tests and we should be good to go<|||||>@sgugger @LysandreJik this is now ready for review! |
transformers | 21,258 | closed | Add missing checkpoint for doctest | # What does this PR do?
The checkpoint for mobilenetv2 was accidentally removed in #21106 (see file change [here](https://github.com/huggingface/transformers/pull/21106/files#diff-f224d96e46d68f58f9632184f915210b3217a61253c42ff6354763e1b0f34050)), resulting in failing doctests. This adds it back.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 01-23-2023 13:40:27 | 01-23-2023 13:40:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,257 | closed | [ci-daily] Fix pipeline tests | # What does this PR do?
Should fix the `automatic_speech_recognition_pipeline` tests.
Also using a `streaming` dataset to speed up the tests. I think it is a good idea since we only need a single sample. | 01-23-2023 13:36:28 | 01-23-2023 13:36:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,256 | closed | Fix MaskFormerImageProcessor.post_process_instance_segmentation | # What does this PR do?
Fixes the `post_process_instance_segmentation` method of `MaskFormerImageProcessor`. This issue mainly affects Mask2Former as it uses MaskFormerImageProcessor and there aren't any MaskFormer models trained on instance segmentation datasets.
Unlike panoptic segmentation post-processing, the final score of each binary mask proposal is calculated by multiplying the mask proposal score with the class score. `mask_threshold` and `overlap_mask_area_threshold` arguments are not needed anymore, I can either add a warning to deprecate them or leave it as it is for now.
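A hedged, illustrative sketch of the scoring rule described above (the tensor names and the random inputs are assumptions, not the actual implementation):

```python
import torch

num_queries = 100
class_probs = torch.rand(num_queries)   # assumed: best-class probability per mask proposal
mask_probs = torch.rand(num_queries)    # assumed: e.g. mean foreground probability of each binary mask

scores = class_probs * mask_probs                  # final per-proposal score, as described above
keep = torch.argsort(scores, descending=True)      # rank proposals; no mask/overlap thresholds needed
```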
Post-processed results of the `mask2former-swin-small-coco-instance` model inference:

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 01-23-2023 13:08:15 | 01-23-2023 13:08:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for fixing! Should we add a corresponding test for it, which verifies the postprocessed results?
I added a test, but Mask2Former, unlike MaskFormer, outputs segmentation maps of shape (96, 96) instead of the preprocessed input size for efficiency. They scale the mask logits to the preprocessed image size during postprocessing (same for semantic and panoptic segmentation), even if no `target_sizes` is passed. I think it'd be better to add an image processor for Mask2Former as its post-processing requires additional scaling.
What do you think @NielsRogge @sgugger?<|||||>If postprocessing is different, then it indeed requires its own image processor class. |
transformers | 21,255 | closed | DistilBertModel to sequence classification | ### System Info
Hi,
I'm trying to create a ```DistilBertModel``` model for sequence classification, such that ```max_position_embeddings=1024``` (otherwise I would have used ```DistilBertForSequenceClassification```, which defaults to ```max_position_embeddings=512```).
I define the model in the following way:
```
configuration = DistilBertConfig(max_position_embeddings=1024)
model = DistilBertModel(configuration)
```
When forwarding an input to the model in the following way:
```
output = model(ids, attention_mask = mask, return_dict=False)[0]
```
such that ```ids.shape = (batch_size, 1024)``` and ```mask.shape = (batch_size, 1024)``` the shape of the output is ```(batch_size, 1024, 768)``` .
My question is: What is the best practice to convert this output into a probability vector over the number of labels, such that the modified output shape would be `(batch_size, num_labels)`?
I thought of a few options including flattening the current output + an additional FC layer, but I'm not sure this is the best practice.
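For reference, a hedged sketch of one such option (pooling the first token's hidden state and projecting to `num_labels`); purely illustrative, not an official recommendation:

```python
import torch.nn as nn


class DistilBertClassifierSketch(nn.Module):
    def __init__(self, encoder, num_labels):
        super().__init__()
        self.encoder = encoder                                    # the DistilBertModel built above
        self.classifier = nn.Linear(encoder.config.dim, num_labels)

    def forward(self, ids, mask):
        hidden = self.encoder(ids, attention_mask=mask, return_dict=False)[0]  # (batch, seq, dim)
        pooled = hidden[:, 0]                     # first-token ("[CLS]"-style) summary of the sequence
        logits = self.classifier(pooled)          # (batch, num_labels)
        return logits.softmax(dim=-1)             # probabilities; for training you'd keep logits + CrossEntropyLoss
```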
would it be possible to add a config parameter for ```DistilBertConfig``` to automatically enable this behavior?
Thank you in advance :)
@ArthurZucker , @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import DistilBertConfig, DistilBertModel

configuration = DistilBertConfig(max_position_embeddings=1024)
model = DistilBertModel(configuration)

# dummy inputs of length 1024 (batch_size=2) to illustrate the shapes
ids = torch.randint(0, configuration.vocab_size, (2, 1024))
mask = torch.ones_like(ids)

output = model(ids, attention_mask=mask, return_dict=False)[0]
print(output.shape)
# (batch_size, 1024, 768)
```
### Expected behavior
My question is: What is the best practice to convert this output into a probability vector over the number of labels, such that the modified output shape would be `(batch_size, num_labels)`? | 01-22-2023 23:30:30 | 01-22-2023 23:30:30 | Hey! These kinds of questions are better asked in the [forum](https://discuss.huggingface.co/), as this is not exactly an issue nor a bug. Nevertheless, IMO you should use `DistilBertForSequenceClassification` and just modify the `max_position_embeddings`.
Closing as it is not an issue |
transformers | 21,254 | closed | Fix reformer CI | # What does this PR do?
Some fixes are required for doctest after #21199. See comments in the review. | 01-23-2023 12:57:30 | 01-23-2023 12:57:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,253 | closed | [WIP] Adding GPT2 with Multi Query Attention | # Adding GPT2 with Multi Query Attention
This PR adds a GPT2 architecture with Multi Query Attention (MQA). With MQA the V,K weights are shared across heads and only Qs are unique which makes it possible to run the model with very large batches.
This is the Architecture used in [BigCode's SantaCoder](https://huggingface.co/bigcode/santacoder).
There are a few things to do before we can merge the PR:
- add performance improvements suggested by @jlamypoirier
- fix tests:
- there is an issue with `past`
- there is an issue with loading the tokenizer (I guess a vocab file is missing in the repo?)
- fix the generation examples
You can run the tests with:
```bash
RUN_SLOW=1 python -m pytest -s -v ./tests/models/gpt2mqa/
```
cc @bigximik @jlamypoirier @RaymondLi0
To review when ready I tag @ArthurZucker and @younesbelkada. | 01-23-2023 11:21:19 | 01-23-2023 11:21:19 | Regarding tests `test_batch_generation` and `test_batch_generation_2heads`: if the tokenizer initialisation class is changed from `GPT2Tokenizer` to `GPT2TokenizerFast`, the test passes through until the generated-tokens assertion. Is this the intended behaviour, or should the loading functionality have rerouted from the default class?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing in favour of #22575 |
transformers | 21,252 | closed | [Whisper] Refactor whisper | # What does this PR do?
The goal of this PR is to allow the users to do the following :
```python
...
whisper_model.generate(audio, return_timestamps = True)
whisper_model.generate(audio, return_timestamps = True, task = Transcribe)
```
The language is automatically detected. This also simplifies the pipeline calls, and adds a good example of `generation_config`'s intended usage.
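For readers who prefer not to rely on automatic detection, a hedged sketch of forcing the language explicitly (the argument names are assumed from the discussion below, and `audio` is the same placeholder input as in the snippet above):

```python
# assumed final API: explicit language/task overrides alongside timestamps
whisper_model.generate(audio, return_timestamps=True, language="en", task="transcribe")
```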
| 01-23-2023 10:48:44 | 01-23-2023 10:48:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>"The language is automatically detected". From my experience the language detection by Whisper is very unreliable. Will it still be possible to specify the language?<|||||>Sure, let's make sure we still allow the language to be passed! Thanks for pointing this out<|||||>Once #21257 is merged, the tests here should also pass!<|||||>Pipeline tests need #21269 to be merged 😉 <|||||>The two failing tests are from the latest modification of the multilingual tokenizer's config |
transformers | 21,251 | closed | Generate: precision fix in compute_transition_scores doctests | # What does this PR do?
See title -- it was causing doctests to fail. | 01-23-2023 10:44:32 | 01-23-2023 10:44:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,250 | closed | [Whisper] fix all issues with unk token | # What does this PR do?
Previously, all OOV tokens (and thus timestamp tokens) output by the model were decoded to `<|endoftext|>` by the `xxx.en` Whisper models. This does not happen with the multilingual model, but only because I added `""` to the vocabulary and the `unk_token_id` maps to the same `""`. But this does not really make sense.
As the default behavior for Whisper is just to output `""` for any OOV token, the `_convert_id_to_token` function now does not use an `unk_token`.
This will fix the inconsistency, and will help for the whisper refactoring.
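A minimal, illustrative sketch of the described behaviour (the real tokenizer does more than this):

```python
class TinyWhisperDecoderSketch:
    """Illustrative only: mimic the OOV behaviour described above."""

    def __init__(self, vocab):
        self.decoder = {i: tok for tok, i in vocab.items()}

    def _convert_id_to_token(self, index):
        # ids outside the vocab (e.g. timestamp tokens) decode to "" instead of the unk_token
        return self.decoder.get(index, "")
```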
| 01-23-2023 10:40:22 | 01-23-2023 10:40:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,249 | closed | Unable to use GPU during wav2vec2 decoding | ### System Info
Hi All,
I have built a fine-tuned model for Tamil using facebook/wav2vec2-xls-r-300m. I can run inference successfully on CPU. However, Wav2Vec2 decoding (with pyctcdecode) on GPU is not working. I have tried setting device='gpu' in the decoding script, and also running the decoding script as “python -m torch.distributed.launch --nproc_per_node=<num of GPUs> <Decoding_Script.py>”. But, monitoring the nvidia-smi output during decoding, none of these methods use the GPU for decoding. Please suggest, @sanchit-gandhi. Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use 'cuda' as device for decoding
```python
audio_name = pd.DataFrame(test_data).audio[i]
text_org = pd.DataFrame(test_data).text[i]
audio_input, sample_rate = sf.read(audio_name)
# with LM
input_values = processor_with_lm(audio_input, sampling_rate=16000, return_tensors="pt").input_values
logits = model(input_values).logits
hypothesis = processor_with_lm.batch_decode(logits.detach().numpy()).text
text_with_lm = hypothesis[0]
# without LM
input_values_wo = processor(audio_input, sampling_rate=16000, return_tensors="pt").input_values
logits_wo = model(input_values_wo).logits
predicted_ids = torch.argmax(logits_wo, dim=-1)
hypothesis_wo_lm = processor.decode(predicted_ids[0])
text_wo_lm = hypothesis_wo_lm.replace('[PAD]', '')
```
### Expected behavior
GPU decoding should have happened, and the nvidia-smi output should show GPU usage accordingly. | 01-23-2023 07:12:00 | 01-23-2023 07:12:00 | `pyctcdecode` doesn't support GPU<|||||>Indeed, `pyctcdecode` is a CPU-only decoding method. PyTorch recently released a fast beam search decoder with a Flashlight backend: https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/
We could look at integrating this into transformers for faster CTC + LM decoding!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,248 | closed | add interface or integration to provide interpretability/explainability of hugging face models | ### Feature request
Require that models can answer requests about training biases and model transparency, to support interpretability and explainability within the workflow of such model processes; currently there is no support for this on the platform.
e.g.
a form of blackbox testing on the models
a form of ui interface
a way to evaluate
a way to visualize
a linege of data changes in learning process
a publically available benchmarking of models
a way to retune the models - debasing bias
perhaps, linkage to captum/lime, or other such tooling
### Motivation
regulation requirements for trustworthy ai (to be able to answer how the model learned this for correctness, transparency, and fairness)
to be able to correct the biases in training datasets
this is important because you have an array of models which support zero transparency.
it is also important for progressing ai.
to build model lineage
to provide for continued compliance and data governance across different geographic regulations in use of such models.
fundamentally, to answer these questions:
how the result was produced
whether the model was correct in producing such a result based on the implementation
[https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html]
### Your contribution
not sure how I can help if the developers have yet to even add such feature and make themselves unapproachable, this is something that is constantly overlooked by the ML/DL community with a lot of marketing hype where they cannot fully explain outside of a research paper how the model process reached that result.
| 01-23-2023 06:54:43 | 01-23-2023 06:54:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,246 | closed | BERT Embedding Weights VS Last Hidden State | ### BERT for Feature Extraction: Embedding Weights VS Last Hidden State
I am trying to extract the pretrained BERT token embeddings and get the feature vector for any specific token by indexing the token ID. However, I found that indexing the pretrained embedding matrix returns very different values as compared to feeding the token IDs into the encoder to get the embeddings. Am I missing something? I can't find this anywhere in the official documentation/tutorials and I've seen the use of both methods to extract features, which is quite concerning if implemented without proper understanding.
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")

# tokenize the input text first so `input` exists before it is used below
text = "hello"
input = tokenizer(text, return_tensors='pt')

# get pretrained token embeddings
embedding_matrix = model.embeddings.word_embeddings.weight
# look up embeddings for the token sequence
embeddings = embedding_matrix[input['input_ids']]

# feed input into pretrained encoder
output = model(**input)

# why are the token embeddings different before fine-tuning?
torch.all(embeddings == output.last_hidden_state)
```
**Examples:**
_**text = "hello"**_
embeddings
```
tensor([[[ 0.0136, -0.0265, -0.0235, ..., 0.0087, 0.0071, 0.0151],
[-0.0043, -0.0330, -0.0217, ..., -0.0425, -0.0127, -0.0389],
[-0.0145, -0.0100, 0.0060, ..., -0.0250, 0.0046, -0.0015]]],
grad_fn=<IndexBackward0>)
```
output.last_hidden_state
```
tensor([[[-0.3061, 0.2622, -0.1896, ..., -0.1651, 0.1014, 0.4119],
[-0.7390, -0.0336, 0.3932, ..., -0.1818, -0.1839, -0.2185],
[ 0.5801, 0.0627, -0.2637, ..., 0.3963, -0.5684, -0.4924]]],
grad_fn=<NativeLayerNormBackward0>)
```
_**text = "hello!"**_
embeddings
```
tensor([[[ 0.0136, -0.0265, -0.0235, ..., 0.0087, 0.0071, 0.0151],
[-0.0043, -0.0330, -0.0217, ..., -0.0425, -0.0127, -0.0389],
[ 0.0298, -0.0373, -0.0356, ..., 0.0161, 0.0192, 0.0173],
[-0.0145, -0.0100, 0.0060, ..., -0.0250, 0.0046, -0.0015]]],
grad_fn=<IndexBackward0>)
```
output.last_hidden_state
```
tensor([[[-0.0509, 0.1088, -0.1411, ..., -0.1243, -0.0803, 0.2858],
[-0.6771, -0.5464, 0.0878, ..., -0.0575, 0.0359, -0.3080],
[-1.0903, -0.9996, -0.5636, ..., 0.3232, -0.2773, -0.1463],
[ 0.8302, 0.0501, -0.2251, ..., 0.3216, -0.6489, -0.2456]]],
grad_fn=<NativeLayerNormBackward0>)
``` | 01-22-2023 23:30:30 | 01-22-2023 23:30:30 | After much more digging, I found a superb in-depth explanation by Alexey Kravets in this article:
https://towardsdatascience.com/deep-dive-into-the-code-of-bert-model-9f618472353e
Apparently, the last hidden state returns the context-aware representations of the word embeddings, which are calculated from the pretrained model weights via normalization and the attention mechanism (using pretrained weights and biases for the Q, K and V matrices).
Essentially: feeding the pretrained feature vectors into the pretrained encoder model. For sequence-based tasks, this method is definitely more appropriate as compared to using the unprocessed context-invariant embeddings. |
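A small, hedged illustration of that distinction using the model and input from the code above:

```python
import torch

with torch.no_grad():
    static = model.embeddings.word_embeddings(input["input_ids"])  # context-invariant lookup table output
    contextual = model(**input).last_hidden_state                  # after position embeddings, attention and layer norm

print(torch.allclose(static, contextual))  # False: the encoder transforms the raw embeddings
print(static.shape == contextual.shape)    # True: both are (batch, seq_len, hidden_size)
```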
transformers | 21,245 | closed | [GIT] Convert more checkpoints | # What does this PR do?
Microsoft open-sourced some more GIT checkpoints (see https://github.com/microsoft/GenerativeImage2Text/issues/34#issuecomment-1374378625), hence I've converted them by extending the conversion script. | 01-22-2023 20:43:10 | 01-22-2023 20:43:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,244 | closed | Models not in eval()-mode when loaded with from_config() | ### System Info
The docs say that models loaded with `from_pretrained()` are done so with `model.eval()` mode on by default. But when using `from_config()` that's not the case, even though loading configs and tokenizers would be using `from_pretrained()` like so:
```python
config = AutoConfig.from_pretrained(
MODEL_NAME,
padding='max_length',
truncation=True,
output_hidden_states=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, config=config)
model = AutoModelForSequenceClassification.from_config(config)
```
I'd like to argue that we should put the model in `eval()` mode when using `from_config()`. I know at least 2 other people who have spent a great number of hours validating and hunting for that. Similar reasoning to https://github.com/huggingface/transformers/issues/695#issuecomment-502964803 I think it's important to make things deterministic out of the box.
Or, open to understanding why that wouldn't be the case.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try something like this:
```python
config = AutoConfig.from_pretrained(
MODEL_NAME,
padding='max_length',
truncation=True,
output_hidden_states=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, config=config)
model = AutoModelForSequenceClassification.from_config(config)
```
### Expected behavior
I'd expect the model to be loaded in `eval()` mode. | 01-22-2023 19:47:47 | 01-22-2023 19:47:47 | A model created with `from_config` will have random weights and is thus not suitable for inference. This is why it is put in training mode, as the documentation clearly states. In any case, it has been the case for such a long time that reverting this would surprise way more users with a breaking change.<|||||>The code snippet I posted above does not load random weights.<|||||>Yes it does.<|||||>Well that explains a lot. I stand corrected, thank you. |
transformers | 21,243 | closed | How to create distil-opt/bloom | Is there any script to create a distilled version of opt or bloom model? | 01-22-2023 18:28:11 | 01-22-2023 18:28:11 | This is not an issue, could you maybe ask the question in the [forum](https://discuss.huggingface.co/)? Also, the answer is no. @younesbelkada worked a bit on this so he can answer if you ping him on the forum. |
transformers | 21,242 | closed | [`pipeline`] add explicit `ValueError` if you don't pass a valid arg | null | 01-22-2023 10:17:44 | 01-22-2023 10:17:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21242). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,241 | closed | Add Japanese translation installation.mdx | # What does this PR do?
Adds Japanese translation to installation.mdx
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Partially addresses #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-22-2023 10:11:31 | 01-22-2023 10:11:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,240 | closed | AutoTokenizer loading fails with `object has no attribute 'config'` | ### System Info
- `transformers` version: 4.25.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipeline = TokenClassificationPipeline(model, tokenizer)
outputs = pipeline("A test example")
print(outputs)
```
### Expected behavior
Since this is a part-of-speech model, I expect part-of-speech tags for "A test example". This works as expected in at least version `4.2.0`.
With the latest (`4.25.1`), the tokenizer loading fails with the error:
`AttributeError: 'BertTokenizerFast' object has no attribute 'config'`
Forcing the python tokenizer by setting `use_fast=False` changes the error to:
`AttributeError: 'BertTokenizer' object has no attribute 'config'`
Since the model and the code worked recently, is this a regression or is there an (intended) breaking change in the recent versions? Either way, what's the best way to fix the model/code to make it work again?
Thanks! | 01-22-2023 07:53:29 | 01-22-2023 07:53:29 | Hi, @fdalvi the code runs as expected if you use `pipeline` instead of `TokenClassificationPipeline`,
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipe = pipeline(task="token-classification", model=model, tokenizer=tokenizer)
outputs = pipe("A test example")
print(outputs)
```
>>[{'entity': 'DT', 'score': 0.9997243, 'index': 1, 'word': 'A', 'start': 0, 'end': 1}, {'entity': 'NN', 'score': 0.9997472, 'index': 2, 'word': 'test', 'start': 2, 'end': 6}, {'entity': 'NN', 'score': 0.99973196, 'index': 3, 'word': 'example', 'start': 7, 'end': 14}]
I think there might be a problem with `TokenClassificationPipeline`.
EDIT - as mentioned by @younesbelkada there is no problem with `TokenClassificationPipeline`, it was due to not passing positional arguments correctly, sorry I completely overlooked that part!<|||||>Thanks @susnato for narrowing it down and for the quick temporary fix! Hope this makes it easier to figure out what the underlying issue is.<|||||>Hi @fdalvi
Thanks for the issue, you need to pass the arguments explicitly as keyword arguments to `TokenClassificationPipeline` to make it work. The snippet below works fine:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline
model_name = "QCRI/bert-base-multilingual-cased-pos-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
outputs = pipeline("A test example")
```
the snippet shared by @susnato will also fail if you don't pass the arguments as keywords<|||||>Ah, that's an easy fix! Thanks a lot for the quick response. |
transformers | 21,239 | closed | [WIP] Add UDOP models | #20650 | 01-22-2023 05:46:58 | 01-22-2023 05:46:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @NielsRogge The model weights are here https://huggingface.co/ZinengTang/Udop/tree/main , But how to get the config for these models ?
<|||||>@raghavanone For reference, someone asked the same question on the UDOP repo: https://github.com/microsoft/i-Code/issues/17<|||||>Note: Cannot proceed further without microsoft releasing the entire weights. Currently vision decoder weights have not been released.<|||||>If I'm not mistaken, vision decoder weights should not be needed when using the text layout decoder part, only.
`vision_encoder` weights are part of the shared model weights.<|||||>@raghavanone is there anything else blocking? It sounds like we can proceed with the given weights, assuming that we notify users that the vision decoder is not trained. <|||||>@logan-markewich Yes, I will work on closing this within couple of days . <|||||>@sgugger Need some pointers on How should this model be tested ? Can I follow the tests used for T5 model and replicate similar tests ? <|||||>@NielsRogge Any pointer here ? <|||||>I hope it gets merged soon @raghavanone . Nice work :)<|||||>Forgive my naiveté, why do all the tests call `from_pretrained()` on some variation of `t5`? The UDOP model checkpoints are [here](https://huggingface.co/ZinengTang/Udop/tree/main). Could these be used?<|||||>Ah, I see that the test script they provide also [uses T5-large](https://github.com/microsoft/i-Code/blob/main/i-Code-Doc/scripts/finetune_rvlcdip.sh), I expected it to use one of those checkpoints<|||||>@raghavanone how are things going with this so far? I'm very interested in using this model as soon as it gets integrated - if you need a hand with anything let me know! And thanks for bringing it into the library 😄
<|||||>> @raghavanone how are things going with this so far? I'm very interested in using this model as soon as it gets integrated - if you need a hand with anything let me know! And thanks for bringing it into the library 😄
@thefirebanks I am working on fixing last few tests. Hoping to close this PR very soon. Sorry for the delay.<|||||>@raghavanone I am currently trying to finetune `UdopUniModelForConditionalGeneration` using this PR. I ran into the following exception while training:
```
File "/opt/conda/lib/python3.8/site-packages/transformers/models/udop/modeling_udop.py", line 2422, in forward
encoder_outputs = self.encoder(
TypeError: forward() got an unexpected keyword argument 'ids_keep'`
```
I explained what appears to be happening in [this comment](https://github.com/huggingface/transformers/commit/ea7e44ca37d14d24798ed938b52ce3b2a202816f#r103307941).
It looks like the `ids_keep` parameter was removed from `UdopUniStack` but not removed from the call to it in `UdopUniModelForConditionalGeneration`
**EDIT**
Looks like `output_attentions`, also needs to be removed
And in the `self.decoder()` call, `cross_attn_head_mask`, `output_attentions`
Happy to make the changes myself with repo permissions
<|||||>> @raghavanone I am currently trying to finetune `UdopUniModelForConditionalGeneration` using this PR. I ran into the following exception while training:
>
> ```
> File "/opt/conda/lib/python3.8/site-packages/transformers/models/udop/modeling_udop.py", line 2422, in forward
> encoder_outputs = self.encoder(
> TypeError: forward() got an unexpected keyword argument 'ids_keep'`
> ```
>
> I explained what appears to be happening in [this comment](https://github.com/huggingface/transformers/commit/ea7e44ca37d14d24798ed938b52ce3b2a202816f#r103307941).
>
> It looks like the `ids_keep` parameter was removed from `UdopUniStack` but not removed from the call to it in `UdopUniModelForConditionalGeneration`
>
> **EDIT** Looks like `output_attentions`, also needs to be removed And in the `self.decoder()` call, `cross_attn_head_mask`, `output_attentions`
>
> Happy to make the changes myself with repo permissions
@plamb-viso Yes, removing those parameters were not done in all places, I have fixed it locally. I am working on fixing failing tests. This the last step pending for merging. Fixing these tests are taking more time than expected. <|||||>@raghavanone I saw you closed this PR. Skimming over your work, the PR seemed to be in a rather good state. Where there any blockers you encountered? IMO, it would be nice to add UDOP models in Hugginface at some point.<|||||>@maxjeblick @NielsRogge feels that the code original repo is bit hacky, he is working a separate PR to UDOP in better implementation, so closed this in consultation with him. He should open a PR soon .
@NielsRogge please do add more details for the benefit of folks following this PR <|||||>Thanks a lot for the fast reply!<|||||>@NielsRogge @raghavanone please link the new PR when its available for people subscribed to this one<|||||>Hi yes I'll open a PR soon! Thanks a lot for your work already @raghavanone, will ping you on the PR <|||||>Hi @NielsRogge I saw the large amount of commits on your new UDOP branch, curious if you have any idea on when you think a PR might be ready<|||||>Sorry to keep hammering on this, but again have noticed a flurry of activity on that branch then almost 2 weeks off. Curious what the plan is for it @NielsRogge <|||||>Hi @plamb-viso sorry for the late reply, the model is working, only have limited time to work on it. I'll open a PR this weekend/Monday.
For now you can already use the model if you're curious, check [this code example](https://github.com/NielsRogge/transformers/blob/14f327d1e9804aeddbe420bd44b811945a3aadd4/tests/models/udop/test_modeling_udop.py#L363) regarding usage. Model is already on the hub [here](https://huggingface.co/nielsr/udop-large).<|||||>Out of curiosity @NielsRogge : did you ever use your implementation to fine tune it on a task like CORD?<|||||>I've fine-tuned the model on a [toy dataset of RVL-CDIP](https://huggingface.co/datasets/nielsr/rvl_cdip_10_examples_per_class), works well but the model is pretty heavy, got OOM on Google Colab even with batch size = 1 so had to use a bigger GPU. The author only released large variants. <|||||>In my original work on @raghavanone 's version of the model, I also had to use a batch size of 1 to get it to not OOM on 40gb GPUs |
transformers | 21,238 | closed | Statement seems to have no effect | ### System Info
transformers from `v4.3.3` to `v4.25.1`.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just stare at [this line][1] (method is not actually called).
```python
class Trainer:
def __init__(self, ...)
...
# force device and distributed setup init explicitly
args._setup_devices
...
```
This change was done in 2021-02-11 (almost two years ago).
[1]: https://github.com/huggingface/transformers/blob/4e730b387364c9f46b6b1b0c79fdaf0903c42257/src/transformers/trainer.py#L329
### Expected behavior
I do not know what to expect because of the issue. May be all distributed (at least parallel) training with PyTorch is broken. May be everything is fine. I am totally not sure. I'd like to see some regression tests or something that prove that there is not issue or something what was broken after this change. | 01-21-2023 19:52:04 | 01-21-2023 19:52:04 | It turns out that this is not a function but a property which do some complex initialization. :shrug:
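For what it's worth, a minimal sketch of why a bare attribute access like this is not a no-op when the attribute is a property; the class below is a toy stand-in, not the actual `TrainingArguments`:
```python
import functools


class ToyArguments:
    """Toy stand-in for TrainingArguments, only to illustrate the pattern."""

    @functools.cached_property
    def _setup_devices(self):
        # Runs exactly once, on first attribute access, and caches the result.
        print("initializing device / distributed state ...")
        return "cpu"


args = ToyArguments()
# Looks like a statement with no effect, but it triggers the property above.
args._setup_devices
```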
|
transformers | 21,237 | closed | Add support of backward_prefetch and forward_prefetch | #21156
Adds support for backward_prefetch and forward_prefetch in trainer.
@sgugger @pacman100 | 01-21-2023 15:30:31 | 01-21-2023 15:30:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Done, But not sure why this test is failing. Any pointers on how to make this build green would help.<|||||>@sgugger @pacman100 Need pointer on why this test is failing.<|||||>> The test is a flaky one, don't worry about it. Thanks for iterating, I just have one last comment on the deprecation warning for `fsdp_min_num_params` and we can merge this!
Done
<|||||>@sgugger @pacman100 Can we merge
this PR ? <|||||>?Hello @raghavanone , could you please resolve the comments above that I have unresolved as they are yet to be addressed ?<|||||>> ?Hello @raghavanone , could you please resolve the comments above that I have unresolved as they are yet to be addressed ?
Done<|||||>Thank you @raghavanone for iterating and addressing the comments and for the overall contribution! 🚀 |
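For anyone looking for how this surfaces on the user side, a sketch of enabling these prefetch options through `TrainingArguments`; the exact `fsdp_config` key names here are an assumption based on later documentation, not necessarily the final interface of this PR:
```python
from transformers import TrainingArguments

# Sketch only: the key names inside fsdp_config are assumptions.
args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_config={
        "backward_prefetch": "backward_pre",  # overlap the next all-gather with backward compute
        "forward_prefetch": True,             # prefetch the next forward all-gather
    },
)
```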
transformers | 21,236 | closed | Optimize by not computing gradients for parameters set to requires_grad=False | Fix #21182
@sgugger
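For context, a minimal illustration of the idea with a hypothetical two-layer model (not code from this PR): parameters frozen with `requires_grad=False` can be left out of the optimizer, so no gradient state is kept for them.
```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))

# Freeze the first layer.
for p in model[0].parameters():
    p.requires_grad = False

# Only hand the trainable parameters to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

loss = model(torch.randn(4, 8)).sum()
loss.backward()   # no gradients are computed or stored for the frozen layer
optimizer.step()
```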
| 01-21-2023 14:23:04 | 01-21-2023 14:23:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21236). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger Need to retrigger this build .
|
transformers | 21,235 | closed | WIP porting of lite transformer | # What does this PR do?
#19730
| 01-21-2023 13:05:15 | 01-21-2023 13:05:15 | @NielsRogge Needs some help how to go about the conversion script and testing.
The original model is not in pytorch hub, I has only Google Drive link. In the conversion script should I download and convert ? <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21235). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @raghavanone, do you still want to proceed with this PR? If yes, I'll reopen it :) <|||||>@NielsRogge Yes, Please it keep it open, I want to wrap up UDOP PR beforing wraping this up . |
transformers | 21,234 | closed | Speed up `BeamScorer` by 1000% | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20820
For reasons, explanations, benchmarks, etc, please have a look at the issue
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-21-2023 13:00:30 | 01-21-2023 13:00:30 | I am really unfamiliar with huggingface CI, it errors:
```
From github.com:huggingface/transformers
* [new ref] refs/pull/21234/head -> origin/pull/21234
Checking out branch
fatal: reference is not a tree: 9fb13c79a72d13cdf0dd59d48762cd7c95370b29
exit status 128
```
However, looking at https://github.com/huggingface/transformers/pull/21234/files, seems I only change one file.
Do not think I can fix this :/

<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21234). All of your documentation changes will be reflected on that endpoint.<|||||>> Before addressing the comments, let's first make sure this change is worth merging :) We won't accept PRs that make the code harder to read (as most vectorized versions of an algorithm are) unless there are clear benefits.
I will need execution time numbers of .generate() from a model at least as big as gpt2, before and after this change, for several number of beams (e.g. 2, 4, 8, and 16). Ideally with and without GPU.
Totally understand your concerns :) I do not have much time now (you know, doing research and maintaining [my open source libs](https://github.com/fzyzcjy)), but will try to squeeze out some time when possible. Anyway, the PR in its current status may already be somehow useful for users who finds out it is too slow, since they can manually copy and tweak the `generate` function to use a custom scorer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
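In case it helps whoever picks this up later, a rough timing harness along the lines of what is being asked for (GPT-2, several beam widths); the prompt and generation length are placeholders, not agreed benchmark settings:
```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")

for num_beams in (2, 4, 8, 16):
    start = time.perf_counter()
    with torch.no_grad():
        model.generate(**inputs, num_beams=num_beams, max_new_tokens=64)
    print(f"num_beams={num_beams}: {time.perf_counter() - start:.2f}s")
```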
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,233 | closed | IndexError: index out of range in self during ViltForImagesAndTextClassification fine-tuning | ### System Info
I am running on Google Colab. I got the same error on GPU as well; here I am showing the environment information without GPU.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Datasets
text | images |
-- | -- |
moorhen swamphen | [image1.jpg, image2.jpg, image3.jpg, image4.jpg, image5.jpg, image6.jpg, image7.jpg, image8.jpg, image9.jpg, image10.jpg]|
-- | -- |
According to the dataset, I have to pass 1 text with 10 images. So my input shape:
```
pixel_values: torch.Size([6, 10, 3, 384, 384])
pixel_mask: torch.Size([6, 10, 384, 384])
Input_ids: torch.Size([6, 9])
```
According to the forward function of [ViltForImagesAndTextClassification](https://github.com/huggingface/transformers/blob/v4.25.1/src/transformers/models/vilt/modeling_vilt.py#L1281) I can pass **num_images** while calling the model.
But during training, the model raises the following error:
```
IndexError Traceback (most recent call last)
[<ipython-input-27-191138835385>](https://localhost:8080/#) in <module>
70 # encoding = base_processor(images, batch[1], return_tensors="pt")
71
---> 72 outputs = model(input_ids=batch['input_ids'], pixel_values=batch['pixel_values'], labels=batch['labels'])
73
74 # print(outputs)
8 frames
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[<ipython-input-23-da8fb21f3dcd>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, head_mask, inputs_embeds, image_embeds, labels, output_attentions, output_hidden_states, return_dict)
64
65 # forward every image through the model
---> 66 outputs = self.vilt(
67 input_ids,
68 attention_mask=attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/vilt/modeling_vilt.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, head_mask, inputs_embeds, image_embeds, image_token_type_idx, output_attentions, output_hidden_states, return_dict)
836 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
837
--> 838 embedding_output, attention_mask = self.embeddings(
839 input_ids,
840 attention_mask,
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/transformers/models/vilt/modeling_vilt.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, inputs_embeds, image_embeds, image_token_type_idx)
231 torch.zeros_like(attention_mask, dtype=torch.long, device=text_embeds.device)
232 )
--> 233 image_embeds = image_embeds + self.token_type_embeddings(
234 torch.full_like(image_masks, image_token_type_idx, dtype=torch.long, device=text_embeds.device)
235 )
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/sparse.py](https://localhost:8080/#) in forward(self, input)
158
159 def forward(self, input: Tensor) -> Tensor:
--> 160 return F.embedding(
161 input, self.weight, self.padding_idx, self.max_norm,
162 self.norm_type, self.scale_grad_by_freq, self.sparse)
[/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2211
2212
IndexError: index out of range in self
```
But when I change **image_token_type_idx=i + 1** to **image_token_type_idx=1** in the forward function while passing the images to the ViLT model, as in the following snippet, it works fine.
```
for i in range(num_images):
# forward every image through the model
outputs = self.vilt(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
pixel_values=pixel_values[:, i, :, :, :] if pixel_values is not None else None,
pixel_mask=pixel_mask[:, i, :, :] if pixel_mask is not None else None,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
image_embeds=image_embeds[:, i, :, :] if image_embeds is not None else None,
image_token_type_idx=i + 1,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
```
### Expected behavior
According to the documentation, there should not be any problem.
| 01-21-2023 12:06:57 | 01-21-2023 12:06:57 | cc @NielsRogge and @alaradirik <|||||>Hi @shantanu778, your input shapes seem correct but could you provide a minimal code example that reproduces the error?<|||||>As you can see the error is in the forward function. I actually didn't changed a lot in ViltForImagesAndTextClassification class. Here is my CustomModel:
```
class CustomModel(PreTrainedModel):
def __init__(self, config):
super().__init__(config)
# print(config)
self.num_labels = config.num_labels
self.vilt = ViltModel(config)
# Classifier head
num_images = config.num_images
self.classifier = nn.Linear(config.hidden_size * num_images, config.num_labels)
def forward(
self,
input_ids = None,
attention_mask = None,
token_type_ids = None,
pixel_values = None,
pixel_mask = None,
head_mask = None,
inputs_embeds = None,
image_embeds = None,
labels = None,
output_attentions = None,
output_hidden_states = None,
return_dict = None,
):
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
# print(input_ids)
# print(pixel_values.size())
if pixel_values is not None and pixel_values.ndim == 4:
# add dummy num_images dimension
pixel_values = pixel_values.unsqueeze(1)
if image_embeds is not None and image_embeds.ndim == 3:
# add dummy num_images dimension
image_embeds = image_embeds.unsqueeze(1)
num_images = pixel_values.shape[1] if pixel_values is not None else None
# print(num_images)
if num_images is None:
num_images = image_embeds.shape[1] if image_embeds is not None else None
if num_images != self.config.num_images:
raise ValueError(
"Make sure to match the number of images in the model with the number of images in the input."
)
pooler_outputs = []
hidden_states = [] if output_hidden_states else None
attentions = [] if output_attentions else None
for i in range(num_images):
# print(i)
# print(input_ids)
# print(pixel_values[:, i, :, :, :])
# forward every image through the model
outputs = self.vilt(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
pixel_values=pixel_values[:, i, :, :, :] if pixel_values is not None else None,
pixel_mask=pixel_mask[:, i, :, :] if pixel_mask is not None else None,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
image_embeds=image_embeds[:, i, :, :] if image_embeds is not None else None,
image_token_type_idx=i+1,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
# print("="*20)
# print(outputs)
pooler_output = outputs.pooler_output if return_dict else outputs[1]
# print("="*20)
# print(pooler_output)
pooler_outputs.append(pooler_output)
if output_hidden_states:
hidden_states.append(outputs.hidden_states)
if output_attentions:
attentions.append(outputs.attentions)
pooled_output = torch.cat(pooler_outputs, dim=-1)
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
# print(labels)
loss = loss_fct(logits.view(-1, self.num_labels), labels)
if not return_dict:
output = (logits, hidden_states, attentions)
return ((loss,) + output) if loss is not None else output
return ViltForImagesAndTextClassificationOutput(
loss=loss,
logits=logits,
hidden_states=hidden_states,
attentions=attentions,
)
```
I don't know where is the exact problem. But after passing **image_token_type_idx= 1**, I didn't get any error.<|||||>Hi @shantanu778 could you provide a complete example, including the toy inputs, batch generation and the forward pass so that we can replicate the error?
Are you trying to customize the model or is the CustomModel class is just meant to fix an existing issue?<|||||>@alaradirik First of all, CustomModel is mainly meant to fix an existing issue. Because when I tried to fine-tune ViltForImagesAndTextClassification, I got above error. Then, I created customModel class as like as [your source code](https://github.com/huggingface/transformers/blob/v4.26.0/src/transformers/models/vilt/modeling_vilt.py#L1281) and fix the issue by editing ** image_token_type_idx** in forward function. But I am not sure is it right or wrong way to fix it.
Now I am trying to Describe my task,
I have one text and 10 images, and I have to find the correct image among the 10. I wanted to solve this problem as multi-label classification.
*Dataset*
text | images | gold_image
-- | -- | --
gangster outlaw |['image.166.jpg','image.173.jpg', 'image.172.jpg','image.165.jpg', 'image.174.jpg','image.170.jpg','image.171.jpg', 'image.167.jpg'image.168.jpg','image.169.jpg']| 'image.165.jpg'
*Custom Dataset*
```
class ImageTextDataset(Dataset):
def __init__(self, data_dir, train_df, data_type, device, text_augmentation=False):
self.data_type = data_type
self.transforms = transforms.Compose([transforms.Resize([512,512]),transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
self.data_dir = data_dir
if self.data_type == "train" or self.data_type == "valid":
self.all_image_names = list(train_df['images'])
self.context = list(train_df['text'])
self.gold_images = list(train_df['gold_image'])
else:
raise ValueError("Invalid data type. Expected one of: %s" % self.data_type)
def __len__(self):
return len(self.context)
def __getitem__(self, idx):
# Load the image and text
context = self.context[idx]
#loading images
if self.data_type=='train' or self.data_type == 'valid':
label = []
images = self.all_image_names[idx]
image = []
for i, im in enumerate(images):
path = os.path.join(self.data_dir, im)
img = Image.open(path)
if img.mode != "RGB":
img = img.convert('RGB')
img = self.transforms(img)
image.append(img)
label.append(1.0) if im == self.gold_images[idx] else label.append(0.0)
sample = {'context':context, 'images': image, 'label': label}
else:
raise ValueError("Invalid data type. Expected one of: %s" % self.data_type)
return sample
```
*Custom Data collator Function*
```
def custom_collate(batch, processor):
tokenizer = processor['tokenizer']
feature_extractor = processor['feature_extractor']
dic = {}
context = []
images = []
labels = []
for item in batch:
context.append(item['context'])
images.append(item['images'])
labels.append(item['label'])
pixel_masks, pixel_values= [], [],
for idx, s in enumerate(images):
# print(s)
pixel_mask, pixel_value, label = [], [], []
for jdx, img in enumerate(s):
# print(img.size())
# print(img.size())
feature_encoding = feature_extractor(img, return_tensors="pt")
pixel_mask.append(feature_encoding['pixel_mask'].squeeze(0))
pixel_value.append(feature_encoding['pixel_values'].squeeze(0))
pixel_mask = torch.stack(pixel_mask)
pixel_value = torch.stack(pixel_value)
pixel_masks.append(pixel_mask)
pixel_values.append(pixel_value)
encoding = tokenizer(context, return_tensors="pt", padding=True ,truncation=True, max_length=40)
encoding['pixel_values'] = torch.stack(pixel_values)
encoding['pixel_mask'] = torch.stack(pixel_masks)
encoding['labels'] = torch.as_tensor(labels)
return encoding
```
*Training Script*
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = "dandelin/vilt-b32-finetuned-coco"
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
feature_extractor = ViltFeatureExtractor.from_pretrained(checkpoint)
processor = {
'tokenizer': tokenizer,
'feature_extractor': feature_extractor
}
model=CustomModel(config = ViltConfig.from_pretrained(checkpoint, output_attentions=True,output_hidden_states=True, num_images=10, num_labels=10, problem_type="multi_label_classification"))
model.to(device)
print(model.config.architectures[0])
# Create the dataset
train_ds = ImageTextDataset('/train_images_v1', train, data_type="train",device = device, text_augmentation=True)
# Create the dataloader
train_dataloader = DataLoader(train_ds, shuffle=True, batch_size=6, collate_fn=lambda batch: custom_collate(batch, processor))
print(len(train_dataloader))
# model.to(device)
lr = 5e-5
optimizer = AdamW(model.parameters(), lr=lr)
num_epochs = 2
num_training_steps = num_epochs * len(train_dataloader)
progress_bar_train = tqdm(range(num_training_steps))
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps,
)
print(num_training_steps)
for i in range(num_epochs):
total_loss = 0
print(f"Epoch {i+1}")
model.train()
for batch in train_dataloader:
batch.to(device)
outputs = model(input_ids=batch['input_ids'], pixel_values=batch['pixel_values'], labels=batch['labels'])
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar_train.update(1)
```
Now if you use ViltForImagesAndTextClassification for fine-tuning, you will encounter the error. Then if you use my CustomModel in my previous comment, it will solve the issue.
N:B: I never created issue before therefore I don't know the proper way to explain the problem and task. Sorry for your inconvenience.
<|||||>Hi @shantanu778, could you provide a minimal code example that reproduces the error without the custom class?
<|||||>I don't know how to give u minimal code example,
I describe before what I wanted to do.
If you try to fine-tune ViltForImagesAndTextClassification with 10 images instead of 2, I think you will be able to reproduce the error.
In my case, instead of using the CustomModel class, use ViltForImagesAndTextClassification; everything else is as I mentioned earlier. @alaradirik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>A simple solution: set `modality_type_vocab_size = num_images + 1`.
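To make that concrete, a sketch of what the suggestion above looks like when instantiating the model for 10 images; treat it as an illustration rather than a verified fix:
```python
from transformers import ViltConfig, ViltForImagesAndTextClassification

num_images = 10
config = ViltConfig.from_pretrained(
    "dandelin/vilt-b32-finetuned-coco",
    num_images=num_images,
    num_labels=num_images,
    # One token type per image plus one for text, so the embedding lookup
    # done with image_token_type_idx=i + 1 stays in range.
    modality_type_vocab_size=num_images + 1,
)
model = ViltForImagesAndTextClassification.from_pretrained(
    "dandelin/vilt-b32-finetuned-coco",
    config=config,
    # The token type embedding matrix changes size, so mismatched weights
    # are re-initialized instead of raising.
    ignore_mismatched_sizes=True,
)
```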
transformers | 21,232 | closed | [Mask2Former] Add doc tests | # What does this PR do?
This PR ensures that the code snippet's in Mask2Former's docs work as intended, and are tested. | 01-21-2023 09:12:39 | 01-21-2023 09:12:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh not sure why CI is failing, running `make fixup` locally doesn't result in any updates. My version is black 22.3<|||||>@NielsRogge mine is also 22.03, but it reformates the modeling file. Not sure why though, do you want me to push? I can also post the whole content of `pip freeze` for you to check the package versions.<|||||>Feel free to push a commit :)<|||||>I pushed a commit. Actually, you are right. `make fixup` will change the files twice in the run, and that 2 changes cancel each other's change. I am not sure why. After running `make style` to fix some issues, it then works for `make fixup` too. |
transformers | 21,231 | closed | how to fine tune BlipForImageTextRetrieval? | ### Feature request
how to fine tune BlipForImageTextRetrieval?
Can you borrow some methods from here to achieve this?
https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip_models/blip_retrieval.py
### Motivation
Implement a graphical matching model that, due to the filtering of poor quality pairs of matches
### Your contribution
Not available at the moment | 01-21-2023 08:25:46 | 01-21-2023 08:25:46 | cc @younesbelkada <|||||>I'd recommend fine-tuning CLIP if you want to do image-text retrieval using this script: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text.
Fine-tuning BLIP might be harder as it involves some very specific loss functions.<|||||>Thank you for your answer! I tried the fintuning of clip, and it was successful. I use the blip model because I want to use its text matching (binary classification model) to filter out noisy data that is not matched by the text diagram. Because I collect a large number of unlabeled pictures from the Internet, I want to use the blip caption model to tag them, and then filter the invalid image data.<|||||>CLIP can also be used for image-text matching, by just encoding the image, encoding the text, and computing a cosine similarity score between the respective embeddings.<|||||>> CLIP can also be used for image-text matching, by just encoding the image, encoding the text, and computing a cosine similarity score between the respective embeddings.
In fact, what I want to express is that this image-text matching classifier is similar to a cross-encoder in text matching: it captures the interaction between the two modalities by jointly encoding the image and text embeddings, so its accuracy should be higher than CLIP (which is more like a bi-encoder in text matching).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
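To make the CLIP suggestion above concrete, a minimal sketch of scoring one image against a few candidate texts (the checkpoint name is just an example):
```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["two cats sleeping on a couch", "a plate of food"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Cosine-similarity based scores between the image and each text.
print(outputs.logits_per_image.softmax(dim=-1))
```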
transformers | 21,230 | closed | when adding tokens for BlipModel,A bug has appeared |
When I execute the following code to add a vocab, an error is reported -
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
import sys
import logging
import pandas as pd
from dataclasses import dataclass, field
from typing import Optional
import torch
from datasets import Dataset
from datasets import load_dataset
from PIL import Image
from torchvision.io import ImageReadMode, read_image
from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize
from torchvision.transforms.functional import InterpolationMode
from transformers import BlipModel, BlipForImageTextRetrieval, BlipForConditionalGeneration, BlipProcessor, AutoConfig, AutoTokenizer
import transformers
from transformers import (
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
model_path= r'D:\all_models_archives\models--Salesforce--blip-itm-large-coco'
tokenizer= AutoTokenizer.from_pretrained(model_path)
processor = BlipProcessor.from_pretrained(model_path)
model_config= AutoConfig.from_pretrained(model_path)
model= BlipModel.from_pretrained(pretrained_model_name_or_path= model_path, config= model_config)
```

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
import sys
import logging
import pandas as pd
from dataclasses import dataclass, field
from typing import Optional
import torch
from datasets import Dataset
from datasets import load_dataset
from PIL import Image
from torchvision.io import ImageReadMode, read_image
from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize
from torchvision.transforms.functional import InterpolationMode
from transformers import BlipModel, BlipForImageTextRetrieval, BlipForConditionalGeneration, BlipProcessor, AutoConfig, AutoTokenizer
import transformers
from transformers import (
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
model_path= r'D:\all_models_archives\models--Salesforce--blip-itm-large-coco'
tokenizer= AutoTokenizer.from_pretrained(model_path)
processor = BlipProcessor.from_pretrained(model_path)
model_config= AutoConfig.from_pretrained(model_path)
model= BlipModel.from_pretrained(pretrained_model_name_or_path= model_path, config= model_config)
`

### Expected behavior
Adding vocabulary can be done normally | 01-21-2023 07:45:24 | 01-21-2023 07:45:24 | Hi @ScottishFold007
Thanks for the issue, I don't really see how the script you provided can add a new vocab to the model, can you either provide the full script or the full traceback of the error? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
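Without the full traceback it is hard to say what went wrong here, but for reference, the usual pattern when extending the text vocabulary is to resize the text embeddings after adding tokens. The sketch below is that general pattern, not necessarily the fix for this exact error, and whether `resize_token_embeddings` works out of the box for this architecture is an assumption:
```python
from transformers import AutoProcessor, BlipForImageTextRetrieval

processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

new_tokens = ["<special_token_1>", "<special_token_2>"]  # placeholders
num_added = processor.tokenizer.add_tokens(new_tokens)

if num_added > 0:
    # Grow the text embedding matrix so the new token ids are valid indices.
    # Assumption: the top-level model exposes the text embeddings; otherwise
    # resize on the text encoder submodule instead.
    model.resize_token_embeddings(len(processor.tokenizer))
```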
transformers | 21,229 | closed | Add scikit-learn dependency to train langage-modeling | # What does this PR do?
In order to run the language modeling training script, we need `scikit-learn` to be installed, so this PR adds it to the requirements.txt
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| 01-21-2023 05:40:59 | 01-21-2023 05:40:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,228 | closed | Issue Importing Image Resolution Models | Hey Everyone,
I am trying to import a model from transformers for deblurring images.
I am on Python 3.10.9 and just install transformers 4.25.1
The error comes on import
`from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution`
and the error is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'Swin2SRForImageSuperResolution' from 'transformers' (\env\lib\site-packages\transformers\__init__.py)
```
These are the packages currently installed in my virtual environment
`certifi==2022.12.7
charset-normalizer==3.0.1
colorama==0.4.6
filelock==3.9.0
huggingface-hub==0.11.1
idna==3.4
numpy==1.24.1
opencv-python==4.7.0.68
packaging==23.0
Pillow==9.4.0
PyYAML==6.0
regex==2022.10.31
requests==2.28.2
tokenizers==0.13.2
torch==1.13.1+cu117
torchaudio==0.13.1+cu117
torchvision==0.14.1+cu117
tqdm==4.64.1
transformers==4.25.1
typing_extensions==4.4.0
urllib3==1.26.14` | 01-21-2023 05:15:03 | 01-21-2023 05:15:03 | Hi, @pravin-santhanam27 there seems to be a problem loading `Swin2SRForImageSuperResolution` with stable transformers(4.25.1) which we install from pypi. But this error is not present if you install from the source(4.26.0.dev0). This error is fixed in the source version(which is regularly updated) and will also be updated to stable release later. If you want to use it right now then please install `transformers` from source - `pip install git+https://github.com/huggingface/transformers`<|||||>Will close this as the issue seems resolved. |
transformers | 21,227 | closed | [WIP] Support BLIP and GIT in image-to-text and VQA pipelines | # What does this PR do?
Support BLIP and GIT models in image-to-text and VQA pipelines.
Fixes #21110
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
| 01-21-2023 01:43:47 | 01-21-2023 01:43:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21227). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @NielsRogge, should I remove the return of the topk scores in the VQA pipeline that used ViltForQuestionAnswering only?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
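For anyone landing here later, this is roughly the usage the PR is aiming for; the checkpoint names are examples and this of course only works once the pipeline support is actually merged:
```python
from transformers import pipeline

url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Image captioning with a BLIP checkpoint.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner(url))

# Visual question answering with a BLIP VQA checkpoint.
vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")
print(vqa(image=url, question="How many cats are there?"))
```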
transformers | 21,226 | closed | Skip failing test for now | # What does this PR do?
All is said in the title. Test is currently failing on main for no reason (I imagine a new release of one of the deps), more can be found [here](https://app.circleci.com/pipelines/github/huggingface/transformers/55784/workflows/83b929a9-3d0d-482a-a823-806f44824bf8/jobs/673138).
cc @sanchit-gandhi | 01-21-2023 01:42:37 | 01-21-2023 01:42:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21226). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,225 | closed | Models docstring | # What does this PR do?
This PR cleans up all docstrings following up from #20757 and #21199. It removes the need for the `processor_class` in TensorFlow and Flax generic examples by setting in the examples like #20757 did for PyTorch then makes a full pass across all models to clean up the docstrings (removing the processor_class` in the `add_code_sample` decorator, remove random outputs, use the auto classes for preprocessing).
Note that in some cases we can't use the auto-classes for preprocessing: when linking to the `__call__` method of a processor or image processor, we need the actual class (cc @amyeroberts I changed a couple of things you did here). | 01-20-2023 22:06:59 | 01-20-2023 22:06:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @sgugger for cleaning this up. With all ~250 files, I will trust you instead of look lines by lines, except one question below.
I would definitely prefer to run a doctest first offline before merging this PR - for which I can launch on my side. From previous PRs, it has shown there are always some surprise. I will launch doctest CI when all reviewers give their approval.
**So here my question**
> Note that in some cases we can't use the auto-classes for preprocessing: when linking to the __call__ method of a processor or image processor, we need the actual class (cc @amyeroberts I changed a couple of things you did here).
I see even in such places, we still have
```python
Pixel values can be obtained using [`AutoImageProcessor`]. See [`ConvNextImageProcessor.__call__`] for details.
```
I don't have much context and prior knowledge, but is it true we want to use `AutoImageProcessor` but `ConvNextImageProcessor.__call__` in such cases?<|||||>> With all ~250 files, I will trust you instead of look lines by lines.
A review would still be much appreciated, as it could catch accidental typos.
> I would definitely prefer to run a doctest first offline before merging this PR - for which I can launch on my side. From previous PRs, it has shown there are always some surprise. I will launch doctest CI when all reviewers give their approval.
Sure, we can wait for that as long as the results are available before the release branch is cut.
> I don't have much context and prior knowledge, but is it true we want to use AutoImageProcessor but ConvNextImageProcessor.__call__ in such cases?
Yes.<|||||>I triggered the doctest CI against the (last) commit (so far) in this PR. Will take a look on the PR changes too :-)
[run page](https://github.com/huggingface/transformers/actions/runs/3987623228/jobs/6837694181) |
transformers | 21,224 | closed | [`BLIP`] fix docstring for `BlipTextxxx` | # What does this PR do?
Fixes docstrings for `BlipTextModel` and `BlipTextLMHeadModel` to follow the dostring structure of `transformers` and be rendered properly by the `doc-builder`
cc @sgugger | 01-20-2023 21:37:39 | 01-20-2023 21:37:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,223 | closed | Add: TensorFlow example for semantic segmentation task guide | This PR adds a TensorFlow example to the existing [Semantic Segmentation task guide](https://huggingface.co/docs/transformers/main/en/tasks/semantic_segmentation) using the same dataset and fine-tuning steps.
This example supplements the existing guide and can be helpful to those who choose TensorFlow over PyTorch and would like to use Transformers for semantic segmentation. | 01-20-2023 21:05:00 | 01-20-2023 21:05:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,222 | closed | Add WhisperTokenizerFast | Adds the fast version of Whisper tokenizer. The Whisper tokenizer is essentially GPT2 tokenizer with special tokens. The main difference is the additional normalizer (which I mirrored from the slow tokenizer) and language/task-dependent prefix tokens.
One of the tokenizer tests is failing, it's because there is no `tokenizer.json` file in the `openai/whisper-*` (specifically the `tiny` checkpoint). I added a converter, so now it is possible to load fast tokenizer from existing checkpoints and export `tokenizer.json`. | 01-20-2023 18:53:32 | 01-20-2023 18:53:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker thanks for the help! I think now the steps are to update the unknown token in multilangual checkpoints and add `tokenizer.json` to the repos. Let me know if there's anything I can help with :)<|||||>Feel free to open community PR on the model' (hub) linking to this PR (github) 🚀 <|||||>@ArthurZucker sure! I've just created https://huggingface.co/openai/whisper-tiny/discussions/5, let me know if it looks as expected and I will open a matching PR on the other checkpoints too.
FTR I generated the `tokenizer.json` with:
```python
import sys
sys.path.reverse()
sys.path.append("/Users/jonatanklosko/git/transformers/src")
sys.path.reverse()
from transformers import WhisperTokenizerFast
tokenizer = WhisperTokenizerFast.from_pretrained("/Users/jonatanklosko/git/hf/whisper-tiny/")
tokenizer.save_pretrained("/Users/jonatanklosko/git/hf/whisper-tiny/")
```
I also updated the unknown token configuration manually.<|||||>Changing the unknown token in configuration leads to a weird behaviour when loading the slow tokenizer, see an example in the PR. Any ideas why that is?<|||||>So the issue is that the multilingual tokenizer doesn't have `<|endoftext|>` in the initial vocabulary, so it would need to be added from special tokens map. However, when loading special tokens we have this check:
https://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/tokenization_utils.py#L419-L420
and since `eos_token` and `unk_token` are both `<|endoftext|>`, we end up not adding them to the vocabulary.<|||||>To address this we would need to add `"<|endoftext|>": 50257` to `vocab.json` and remove it from `added_tokens.json`. Note that this is the case in the English checkpoints (except with 50256).
The question is if this hurts compatibility; when loading the slow tokenizer both of these files would be used to load the vocabulary, so moving the entry from one to the other should be alright?<|||||>Yep, I think the idea is to make the multilingual added tokens match the ones that we have for english. I forgot to mention but yes, we have to add `"<|endoftext|>` to the vocabulary instead of `''`. This should normally do the trick (with also the modification of the content of the unknown token. <|||||>Ah, so we should actually replace it, so that `<|endoftext|>` gets the id that currently `""` has, and we keep `""` just to make sure the ids are not shifted at any point?
```
"<|endoftext|>": 50256,
"": 50257,
```
and not:
```
"": 50256,
"<|endoftext|>": 50257,
```<|||||>@ArthurZucker I updated the PR on the checkpoint. I tried the remaining failing tests locally pointing tokenizer to the updated revision and they passed, so I think we are good on this side.<|||||>Note that the only difference is that originally EOS (`<|endoftext|>`) was 50257 and now it is 50256, not sure if that's something to worry about.<|||||>The EOS toke id appears multiple times in the `config.json` so we need to adjust it too. Let me know if that's the way to go, or if we should swap them back :)<|||||>> Note that the only difference is that originally EOS (<|endoftext|>) was 50257 and now it is 50256, not sure if that's something to worry about.
Ah, this can be an issue I think. We have to keep it at 50257! So let's leave `''` in the vocab (it is also in the original repo) and we just need `{"<|endoftext|>": 50257}` this to be in the `added_special_tokens`. See [this repo](https://github.com/openai/whisper/tree/main/whisper/assets/multilingual) which contains most of what we need <|||||>@ArthurZucker we need `<|endoftext|>` in the `vocab` rather than `added_tokens` as per https://github.com/huggingface/transformers/pull/21222#issuecomment-1401119817.
Note that this means unknown token changes from 50256 to 50257, but hopefully that's less invasive.<|||||>Yeah! That's better<|||||>Ok, so I think the `openai/whisper-tiny` PR ready too, if there's anything else let me know :)<|||||>I merged your PR on the hub, now let's fix the failing tests! <|||||>@ArthurZucker all green!<|||||>Will ask for a final review from @sgugger <|||||>@ArthurZucker it looks like the new failures come from the GenerationConfig missing some attributes, also looking at `openai/whisper-tiny` the `forced_decoder_ids` have a `null` token and don't match what we have in `config.json`.<|||||>Hey, `null` token is fine! I added that for the refactoring, it allows the model to automatically predict the language<|||||>OKay the error comes from the `tiny_random_testing` where configuration files are created from the config, and thus don't have any of the parameters related to generation. The `return_timestamps` is set to `True` but it should not if there are not generation config.
Feel free to skip these tests for now, unless @ydshieh you have an alternative solution<|||||>> OKay the error comes from the `tiny_random_testing` where configuration files are created from the config, and thus don't have any of the parameters related to generation. The `return_timestamps` is set to `True` but it should not if there are not generation config. Feel free to skip these tests for now, unless @ydshieh you have an alternative solution
The CI is currently running and I can't see which test you are mentioning. I will check later once the CI results is available.<|||||>PRs for other checkpoints:
* [whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en/discussions/10)
* [whisper-small.en](https://huggingface.co/openai/whisper-small.en/discussions/6)
* [whisper-base.en](https://huggingface.co/openai/whisper-base.en/discussions/5)
* [whisper-medium.en](https://huggingface.co/openai/whisper-medium.en/discussions/5)
* [whisper-small](https://huggingface.co/openai/whisper-small/discussions/11)
* [whisper-base](https://huggingface.co/openai/whisper-base/discussions/7)
* [whisper-medium](https://huggingface.co/openai/whisper-medium/discussions/7)
* [whisper-large](https://huggingface.co/openai/whisper-large/discussions/20)<|||||>Hey @ydshieh, the tests are aforementioned tests are not skipped, but you can see the previous CI failure [here](https://app.circleci.com/pipelines/github/huggingface/transformers/56124/workflows/d31aa74b-175d-4c79-a237-cd342ded9900/jobs/677380).<|||||>Hi, @jonatanklosko could you rebase on main branch? You will need to resolve the conflicts. Let me know if you need help on this. Sorry for being late here.<|||||>@jonatanklosko Thank you. I will take a look on Monday if the pipeline testing is still failing!<|||||>@ydaigo perfect, thanks :)<|||||>Hey @jonatanklosko can you rebase on main to or resolve the merge conflicts?<|||||>@ArthurZucker done and everything passes now :) |
transformers | 21,221 | closed | MobileViT | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts and @NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run run_image_classification.py with MobileViT models (mobilevit-x-small)
### Expected behavior
Breaks first on Normalize function
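Without the full stack trace this is a guess, but the usual culprit is that MobileViT's image processor does not define `image_mean`/`image_std` (the model does not use normalization), so a transform pipeline that unconditionally applies `Normalize` fails. A sketch of the kind of guard that avoids it:
```python
from torchvision.transforms import Compose, Normalize, Resize, ToTensor
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-x-small")

transforms = [Resize((256, 256)), ToTensor()]
# MobileViT checkpoints ship without normalization statistics, so only
# normalize when the processor actually defines them.
if getattr(image_processor, "image_mean", None) and getattr(image_processor, "image_std", None):
    transforms.append(Normalize(mean=image_processor.image_mean, std=image_processor.image_std))

train_transforms = Compose(transforms)
```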
| 01-20-2023 18:26:11 | 01-20-2023 18:26:11 | Answered here: https://github.com/NielsRogge/Transformers-Tutorials/issues/241 |
transformers | 21,220 | closed | [Generation Config] General issues | # Generation config
I know it has just been added so it is normal! But the following are missing (and are pretty intuitive w.r.t our other objects such as configs, processors etc):
- [ ] `GenerationConfig.from_pretrained("openai/whisper-tiny.en")` where the path does not already have a `generation_config.json`. Currently this only looks for the `generation_config.json`, but it should be possible to initialise it by default from the model `config` if a generation file is not present (as is actually done in `generate`).
- [ ] Similarly, the `from_config` only supports a config `object` which has to be initialized.
- [ ] The `generation_config` should be automatically initialised before the `generate` function : this is because other models will call `super().generate` after having played with extra kwargs, and these kwargs are needed for pre-processing ( adding selected logit processors etc).
- [ ] [edit] when running `generate()` the generation config should by default be initialized `from_pretrained`; this is the whole point of saving a `generation_config` file IMO
- [ ] The following warning
```python
warnings.warn(
"You have modified the pretrained model configuration to control generation. This is a"
" deprecated strategy to control generation and will be removed soon, in a future version."
" Please use a generation configuration file (see"
" https://huggingface.co/docs/transformers/main_classes/text_generation)")
```
should be discussed as it is more efficient to modify the generation parameters on the fly if you are using a model that requires special arguments (like whisper), which is what `generation_config` was designed for. Indeed you might want to do `translation` but the default task on the hub is `traduction`. For a newbie, he has to go through finding `processor.set_forced_decoder_ids` while he could just do `model.generate(task = 'transcribe')`. The same goes for the `return_timestamp` which uses the same model so no need for a new config/ new model on the hub.
More generally, I think that if we have models that use prompts and special initial tokens, this is very useful.
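To illustrate the difference being argued for: today the task/language has to be wired in through the forced decoder ids, whereas the proposal is a plain keyword on `generate`. In the sketch below, the `task=` call is the proposed API, not something that exists at the time of writing:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Current route: build the prefix tokens yourself and pass them explicitly.
forced_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
# model.generate(input_features, forced_decoder_ids=forced_ids)

# Proposed route (what this issue argues for):
# model.generate(input_features, task="translate")
```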
cc @sgugger , @gante, @patrickvonplaten and @LysandreJik
This arises in the refactoring of whisper to have somewhat of a 1-1 with open ai, where you just load the model and can do `generate()`. It also simplifies the pipeline for asr's whisper specific parts
ps: I might be completely wrong about this! Feel free to give me feedbacks! | 01-20-2023 17:36:23 | 01-20-2023 17:36:23 | 1. Agree - I think we should/could allow this functionality.
2. Here I don't think we need to change anything. It's quite intuitive for me that the "[from_model_config](https://github.com/huggingface/transformers/blob/4e730b387364c9f46b6b1b0c79fdaf0903c42257/src/transformers/generation/configuration_utils.py#L620)" API has to load a config type object
3. I don't fully understand this - could you add an example?
4. Could you maybe send a link to where the warning is thrown? Would make it easier to understand what logic it's talking about <|||||>3. An example would be the following:
```python
class WhisperForConditionalGeneration:
...
# redefine generate with custom kwargs like `task`, `return_timestamps` and `is_multilingual`
def generate(
self,
inputs: Optional[torch.Tensor] = None,
generation_config= None,
logits_processor = None,
stopping_criteria = None,
prefix_allowed_tokens_fn = None,
synced_gpus = False,
return_timestamps = None,
task = None,
is_multilingual = None,
**kwargs
):
# At this point we want the generation config to be initialized, otherwise we have to copy past the initialization
# scheme, and it will be run again when calling super. Also modifying self.generate_config
# here update self.generation_config or generation_config
self.generation_config.return_timestamps = return_timestamps if return_timestamps is not None else False
self.generation_config.task = task if task is not None else False
self.generation_config.is_multilingual = is_multilingual if is_multilingual is not None else False
if self.generation_config.forced_decoder_ids and task is not None:
if self.generation_config.is_multilingual:
self.generation_config.forced_decoder_ids[1][1]= generation_config.task_to_id[generation_config["task"]]
else:
raise ValueError("A task or language were given but the model was trained on english and can thus only transcribe from english to english.")
if return_timestamps:
logits_processor = [WhisperTimeStampLogitsProcessor(self.generation_config)]
return super().generate(inputs, self.generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
```
This is not possible because the initialisation of the config only takes place in `generate`. But this also means that if a `generation_config.json` file exists (meaning someone went through the trouble of saving a generation config and pushing it to the hub) it is not used automatically either. You have to instantiate it. This is not really good because, for example, some arguments are only necessary for `generate()`, in this case `no_timestamps_token_id`, and cannot be set through the pipeline (the pipeline would have to support `generation_config` as well). Again you would have to do a `GenerationConfig.from_pretrained("...")`, which should be automatic (that is the whole point of having a saved file).
4. Sorry, the warning comes from [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1186)<|||||>Before going down at the individual points, I think it is worth agreeing on high-level design choices :) The goal of this refactor was to separate the two types of configurations, which were being held in the same file and class. It was also made to be retrocompatible with existing uses.
Beyond these two basic points, there was a design decision to nudge users into treating these two configurations separately, so they can evolve in isolation and have minimal cross-dependencies. I've also made an intentional effort to avoid using the term `config` in `GenerationConfig`, when referring to the model config. If we agree that this is a desirable property, then some short-term pain should be endured.
1. While it is simple to implement, it will only be useful in the transition phase -- in the future, no generation parameters are held in the model config. If we do implement it, `GenerationConfig.from_pretrained()` will acquire a dependency on another class and it will remove the incentive for the users to use separate files (both undesirable IMO). It will also make the name of the function a lie, as we will not be returning from a "pre-trained Generation Config". Side-note: the existing `GenerationConfig.from_model_config()` does this cross-class loading while keeping responsibilities isolated.
2. (I'm assuming you're writing about `GenerationConfig.from_model_config()`, as there is no `GenerationConfig.from_model_config()`) Happy to expand it :) What format would you like to get here, a dict?
3. It is pre-initialized [here](https://github.com/huggingface/transformers/blob/91ff7efeeb3e6bb10d83702db24108bb6583e013/src/transformers/modeling_utils.py#L1036) from the model config, for retrocompatibility. It is then overwritten [here](https://github.com/huggingface/transformers/blob/91ff7efeeb3e6bb10d83702db24108bb6583e013/src/transformers/modeling_utils.py#L2505) from the generation config file, if it exists. Hopefully, many versions from now, the initialization from the model config will be removed 🤞
4. I don't get this point -- we do use a `GenerationConfig` loaded from `from_pretrained` by default 🤔 If the question is that all `.generate()` calls should be using a `GenerationConfig` initialized that way, then we go back to the question in 1. :)
5. The warning only applies when the user modifies the model configuration to achieve a different generation behavior. This is actually a problem that relates to 1.: if we do load the generation config from the model config, and the user follows this pattern [change the model config], how can we ensure correctness? The generation config would no longer match the model config regarding generation parameters. My take here was to raise a warning and slowly kill this behavior, as it is making the separation of concerns impossible at the moment unless the logic you see at the start of `.generate()` is added everywhere. This is also why changes in the model config no longer work to parameterize `.generate()` if you call specific generation methods (like `greedy_search()`)
E.g.
```py
# modify generation properties through ad hoc model config changes
model = BartForConditionalGeneration.from_pretrained("hf-internal-testing/tiny-random-bart", max_length=10)
# or
model = BartForConditionalGeneration.from_pretrained("hf-internal-testing/tiny-random-bart")
model.config.max_length = 10
# both will raise that warning at generation time, since that is no longer the job of model config
model.generate(...)
# however, this will NOT raise the warning, since it's the generation config's job
model = BartForConditionalGeneration.from_pretrained("hf-internal-testing/tiny-random-bart")
model.generation_config.max_length = 10
model.generate(...)
```
Two additional comments/points:
6. You touched the point of having multiple `tasks`. That is not yet implemented, but highly desirable! At the very least, to make sure the right pipeline can load the right generation config file.
7. I'm noticing that we are missing the functionality to save the generation config when `model.save_pretrained()` is called, which I forgot to add 🤦 This will ensure that ALL new saved models will have a `generation_config.json` if they can call `.generate()`. EDIT: https://github.com/huggingface/transformers/pull/21264
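For reference, this is the kind of separated workflow I have in mind (a minimal sketch; the checkpoint and local path are placeholders):
```python
from transformers import AutoModelForSeq2SeqLM, GenerationConfig

model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/tiny-random-bart")

# tweak generation defaults on the dedicated object, not on model.config
model.generation_config.max_length = 10
model.generation_config.num_beams = 2

# save it as its own generation_config.json and load it back independently of the model config
model.generation_config.save_pretrained("./my-model")
generation_config = GenerationConfig.from_pretrained("./my-model")
print(generation_config)
```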
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing as I have not more comments! Things look good for now!<|||||>Just to clarify the intention here...does the warning mean that we are supposed to override the model.config.do_sample parameter on the model object whenever we change temperature values in the generate keyword arguments from 0 to non-zero? (that seems to be what is needed unless I am missing something)
For example for GPT-Neo I was passing `do_sample=temperature > 0` so I don't get logitprocessor errors when we choose zero temp<|||||>Hey @slundberg 👋
`temperature` and `do_sample` control two different things, please refer to the [documentation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) :) Argument validation is yet to be added.<|||||>Hey! Got it. I see that do_sample controls the firing of the `sample` method vs the `greedy` method (or the beam search equivelents), but since setting temperature=0 implies greedy decoding, some common APIs (like OpenAI) automatically set sample vs greedy based on the temperature. If I expose an interface that does the same (sets do_sample based on the temperature given by the user), then I run into this warning unless I change the actual model object that was passed (which is not ideal). Not sure if this makes sense.<|||||>I see -- yeah, if that's the intended behavior (use greedy decoding when temperature is 0*), then it makes sense! From your past two messages, I'm assuming you are controlling it through `model.config` [`model.config.do_sample=temperature>0`], which raises the warning. Would you be able to control it through `.generate()` [`.generate(do_sample=temperature>0)`] or through `model.generation_config` [`model.generation_config.do_sample=temperature>0`]?
*While they are analytically equivalent, if you apply a very small temperature you'll get `-inf` numbers. ATM it results in an exception in the sampling phase because of the `-inf`; in the future we will raise an exception at the argument validation phase. In `.generate()`, we'd rather be explicit with exceptions than implicit with subtle corrections like switching into greedy decoding, so that the programmer has full control over what's happening :) |
transformers | 21,219 | closed | Microphone live inference catching up when inference is too slow (whisper). | # What does this PR do?
When using relatively slow inference models (like Whisper, especially the large variants) on moderate hardware, the live inference snippets would be so slow that they would feel extremely laggy.
This PR fixes it by adding an estimate of what real time is, and simply skipping inferences when we're too far behind.
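The idea is roughly the following (a sketch of the catch-up logic, not the literal code of this PR): track how much audio has been consumed versus wall-clock time, and drop chunks once the backlog grows too large.
```python
import time


def drop_stale_chunks(chunks, sampling_rate):
    """Yield microphone chunks, skipping some when inference lags behind real time."""
    stream_start = time.time()
    audio_time_consumed = 0.0  # seconds of audio taken off the queue so far
    for chunk in chunks:
        chunk_duration = len(chunk) / sampling_rate
        audio_time_consumed += chunk_duration
        wall_clock = time.time() - stream_start
        # if we are more than one chunk behind real time, drop this chunk to catch up
        if wall_clock - audio_time_consumed > chunk_duration:
            continue
        yield chunk
```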
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-20-2023 17:02:32 | 01-20-2023 17:02:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,218 | closed | Replace reduce_labels with do_reduce_labels | # What does this PR do?
The `reduce_labels` flag for most of the image processors was deprecated in favor of `do_reduce_labels`. This was to keep consistent with the `do_xxx` pattern used for other flags.
This PR deprecates the flag for any models that still used `reduce_labels`.
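The renaming follows the usual deprecation pattern, roughly like this (illustrative sketch, not the exact diff):
```python
import warnings


class ExampleImageProcessor:
    def __init__(self, do_reduce_labels: bool = False, **kwargs):
        if "reduce_labels" in kwargs:
            warnings.warn(
                "The `reduce_labels` argument is deprecated and will be removed in a future version. "
                "Please use `do_reduce_labels` instead.",
                FutureWarning,
            )
            do_reduce_labels = kwargs.pop("reduce_labels")
        self.do_reduce_labels = do_reduce_labels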
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 01-20-2023 16:51:52 | 01-20-2023 16:51:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,217 | closed | [`BLIP`] fix doctest | # What does this PR do?
This PR fixes `BLIP` doctest.
Link to failing job: https://github.com/huggingface/transformers/actions/runs/3964164193
The docstring of the `forward` method of `BlipForQuestionAnswering` has been corrected to educate users on how to correctly use this module after https://github.com/huggingface/transformers/pull/21021 was merged.
The logic of the `forward` method is now pretty much the same as in [`T5`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1592-L1610).
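For reference, the intended inference usage now looks roughly like this (checkpoint and image URL are just examples):
```python
import requests
from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

inputs = processor(images=image, text=question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```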
cc @ydshieh 💯 | 01-20-2023 16:39:31 | 01-20-2023 16:39:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,216 | closed | Skip `test_multi_gpu_data_parallel_forward` for `UperNetModelTest` | # What does this PR do?
As with `BEIT`, this model uses `add_module` in some layers, so we need to skip this test. | 01-20-2023 16:32:46 | 01-20-2023 16:32:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Wait, are you saying that models that use add_module can't leverage multi-GPU training?<|||||>> Wait, are you saying that models that use add_module can't leverage multi-GPU training?
(well, just not with the way we used in `test_multi_gpu_data_parallel_forward`, which uses `nn.DataParallel`)
(PyTorch says `It is recommended to use [DistributedDataParallel] instead`)
**Partially**, that's my observation while debugging `Maskformer`, `BEIT`, and then `LayoutLMV2`, `Data2VecVision`, etc.
```
@unittest.skip(
reason="Data2VecVision has some layers using `add_module` which doesn't work well with `nn.DataParallel`"
)
```
see https://github.com/huggingface/transformers/pull/17864
<|||||>@NielsRogge Let me know if you have further question :-) before giving 👍✅ . Thank you 🙏 |
transformers | 21,215 | closed | Fix OneFormer Docstrings | # What does this PR do?
Fixes docstrings for OneFormer.
- [x] Checked that all the doctests passed for the following command:
```bash
python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/oneformer/ -sv --doctest-continue-on-failure --doctest-glob="*.mdx"
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh | 01-20-2023 16:13:14 | 01-20-2023 16:13:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,214 | open | Support for OrderedConstraints, TemplateConstraints and LiteralConstraints in force_words_ids | ### Feature request
As raised by @sijunhe in [this blog post](https://huggingface.co/blog/constrained-beam-search), the `force_words_ids` argument of the `model.generate()` method needs to be modified to support `OrderedConstraints` and `TemplateConstraints`.
In addition, there is a need for a `LiteralConstraints` subclass. This would enable generating exactly the same list of tokens given in the `force_words_ids` argument, which would in turn allow for the calculation of sentence perplexity across all language models in the library by making use of [the attribute implemented in this PR](https://github.com/huggingface/transformers/pull/14654).
### Motivation
Currently, there is no standard way of calculating sentence perplexity and implementing it requires a lot of boilerplate code, which may not always work as intended. Third-party libraries such as [lm-scorer](https://github.com/simonepri/lm-scorer), which implemented this functionality, are no longer maintained and do not support all language models in the library.
### Your contribution
I would be interested in working on this PR as I'm the maintainer of a third-party library ( [hashformers](https://github.com/ruanchaves/hashformers) ) that performs sentence perplexity calculations with the Transformers library. | 01-20-2023 15:06:08 | 01-20-2023 15:06:08 | cc @gante<|||||>Hi @ruanchaves 👋
I'm not sure whether I understand the issue you described above. Our generation methods return the sequence log probabilities, from which you can compute the sequence perplexity. What would be missing for your use case?
Regarding `force_words_ids`, I'm reluctant to add more features there -- it has low usage and a high maintenance cost. I might reconsider my position here if I see more demand for further functionality :)<|||||>Olá @gante !
> I'm not sure whether I understand the issue you described above. Our generation methods return the sequence log probabilities, from which you can compute the sequence perplexity.
True, but I want the sequence log probabilities for a predefined sequence. I already have a sequence of tokens and I want the model to calculate its perplexity. I don't want the perplexity of a sequence generated through beam search or greedy search.
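(For context, this is the kind of computation I mean: per-token log-probabilities of a fixed sentence from a plain forward pass; GPT-2 is just an example here.)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "I like this package."
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits

# logits at position i predict token i+1, so align them with the shifted input ids
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
token_log_probs = log_probs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
perplexity = torch.exp(-token_log_probs.mean())
print(token_log_probs, perplexity)
```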
When [lm_scorer](https://github.com/simonepri/lm-scorer) was conceived, there was no straightforward way to do this with `transformers`:
```python
# Return token probabilities (provide log=True to return log probabilities)
scorer.tokens_score("I like this package.")
# => (scores, ids, tokens)
# scores = [0.018321, 0.0066431, 0.080633, 0.00060745, 0.27772, 0.0036381]
# ids = [40, 588, 428, 5301, 13, 50256]
# tokens = ["I", "Ġlike", "Ġthis", "Ġpackage", ".", "<|endoftext|>"]
```
Is this still the case? I hope you can point me in the right direction if new features were added since [lm_scorer](https://github.com/simonepri/lm-scorer) was released.
> Regarding `force_words_ids`, I'm reluctant to add more features there -- it has low usage and a high maintenance cost. I might reconsider my position here if I see more demand for further functionality :)
I get it, but being able to calculate the perplexity of a predefined sequence sounds like an essential feature to me, regardless of where it is implemented.<|||||>Hey @ruanchaves 👋
Yeah, we lack an easy interface to compute the logits of existing sentences, and that's something I really like to add ASAP! I'm planning to add it within the next month, but if you'd like to give me a hand you'd be more than welcome 🙌
The planned interface is
```python
log_scores = model.compute_token_scores(tokens, normalize_logits)
```
where `tokens` is the tokenized input (so it can be used in different modalities) and `normalize_logits` is an optional boolean (defaulting to true) to control whether we want to renormalize the model logits<|||||>@gante ,
> Yeah, we lack an easy interface to compute the logits of existing sentences, and that's something I really like to add ASAP! I'm planning to add it within the next month, but if you'd like to give me a hand you'd be more than welcome 🙌
Good! This would close the issue for me, as it's the thing I'm actually looking for. I'll be watching your PRs and see if I can contribute somehow.
Suggestion: consider adding the `compute_token_scores` method to masked language models as well. This has been implemented a few years ago at [awslabs/mlm-scoring](https://github.com/awslabs/mlm-scoring), but just like lm-scorer, it's no longer maintained. |
transformers | 21,213 | closed | Fix GPTJ doctest | # What does this PR do?
In #21178, the checkpoint used for `GPTJForSequenceClassification` is changed back to `_CHECKPOINT_FOR_DOC`, which is `hf-internal-testing/tiny-random-gptj`. That checkpoint has `self.score` with shape `[2, 512]`, but the model has `self.score` with shape `[2, 32]` as `config.n_embd=32`. The checkpoint has `512` which came from a mistake of using `n_ctx` previously, and that error is fixed in #14190, see [here](https://github.com/huggingface/transformers/commit/ce91bf9a3431b4d260005de84c0b0fa394409a3c#diff-61155574bf9c9669ccdfdf7dd508a5979b4e4915cc95f7ff4a63fee05a0e2715).
The PR uses another tiny checkpoint to pass the test.
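(For context, the classification head is built from `config.n_embd`, which is why the shapes disagree; a minimal illustration:)
```python
import torch.nn as nn

# GPTJForSequenceClassification builds its head as Linear(n_embd, num_labels, bias=False),
# so with config.n_embd=32 and num_labels=2 the expected weight shape is [2, 32],
# while the old tiny checkpoint was saved with 512 (the n_ctx value) instead.
score = nn.Linear(32, 2, bias=False)
print(score.weight.shape)  # torch.Size([2, 32])
```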
| 01-20-2023 14:14:50 | 01-20-2023 14:14:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,212 | closed | Update `huggingface_hub` version | # What does this PR do?
Update the `huggingface_hub` version to `0.12.0rc0` and make the necessary changes for this version.
| 01-20-2023 11:13:45 | 01-20-2023 11:13:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,211 | closed | Mask-fill pipeline for t5 and flan-t5 | ### Feature request
So far it isn't possible to use T5 models with the standard fill-mask pipeline, so everyone is building their own custom workaround.
### Motivation
It would save work and reduce complexity if this functionality were integrated.
### Your contribution
There is already a workaround: https://github.com/huggingface/transformers/issues/3985 | 01-20-2023 10:34:28 | 01-20-2023 10:34:28 | Hi @ArthurZucker, can I work on this issue? Thank you!<|||||>Sure! Awesome that you want to take this on! Feel free to open a PR and ping me if you need any pointers<|||||>@ArthurZucker I have several questions:
1. Is there any slack channel/discord where we can discuss the details of the issue?
2. About the scope of the issue, we have one workaround for a single mask. There are also requests for multiple masks in one sentence, and also for the probability distribution over the targets. Do we focus on the single-mask case first in the initial PR? If so, is our plan to integrate the current workaround into `FillMaskPipeline`?
3. I checked the `FillMaskPipeline` class and `run_single` method. I feel a little confused about where the best place to add the logic is. I would appreciate it if you could point out some starting points!
Thank you for your help! <|||||>Hey! After digging a little bit, I am not sure that we actually need to do this PR. But let me answer your questions and explain why.
1. I think you can ping us on the Hugging Face discord, but the best medium would be a PR on github or this issue 😉
2. Let's drop the potential addition. Instead of using the pipeline `FillMask`, which is specifically for models trained with a MaskedLMHead, you can use the following script :
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3> ."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
<pad><extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.
```
This is called `text2text-generation` and should work with the pipeline.
```python
text2text_generator = pipeline("text2text-generation", model = "t5-base")
text2text_generator(input_text)
[{'generated_text': 'man beer a salt.'}]
```
In order to get the scores, you should be using `generate()`.
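If you do need the scores, something along these lines works (recent versions also expose `compute_transition_scores` to turn the per-step scores into per-token log-probabilities):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

input_ids = tokenizer("A <extra_id_0> walks into a bar", return_tensors="pt").input_ids
outputs = model.generate(input_ids, output_scores=True, return_dict_in_generate=True)

# per-token log-probabilities for the generated tokens
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
print(tokenizer.decode(outputs.sequences[0]), transition_scores)
```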
<|||||>Does that fit in the use case that you want? <|||||>@ArthurZucker Hi, if i want to fill multiple words (specific number is unknown),
for example
`He <mask> now -> He is happy now`
Would this be possible?<|||||>No, I don't think this can be possible with a single mask. As you can see in the detail about the [task](https://huggingface.co/tasks/fill-mask).
Closing this as the issue is solved 😉 @anruijian ping me and re-open if you feel like it did not solve your issue <|||||>@Leolty
It could be possible that the model generates multiple words if it was pretrained with longer masked spans like in [UL2 mixture of denoisers](https://ai.googleblog.com/2022/10/ul2-20b-open-source-unified-language.html). Sometimes the T5 models already generate multiple words (and predictions) for one mask. With the input text ```India is a <extra_id_0> of the world.``` into t5-base it generates ```<pad><extra_id_0> part<extra_id_1> developing part<extra_id_2> part of the rest<extra_id_3> part<extra_id_4> part of the world.<extra_id_5>```.
@anruijian
Are you still interested in this issue?
I wrote this function to get the scores of target words:
```python
def get_target_scores(text, targets, t5_tokenizer, t5_model):
"""
A wrapper function for a mask fill-in with target words for (flan-)t5
Parameters:
text(String): The input text with <extra_id_0> as mask
targets(list): A list with target words
t5_tokenizer(T5Tokenizer): The loaded tokenizer
t5_model(T5ForConditionalGeneration): The loaded t5 model
"""
target_numbers = len(targets)
constrain_ids_list = []
# encode the target words
for target in targets:
encoded_target_ids = t5_tokenizer(target, add_special_tokens=False).input_ids
constrain_ids_list.append(encoded_target_ids)
# encode the input text
encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
input_ids = encoded['input_ids'].to(DEVICE)
# generate the outputs with the target as constrains
outputs = t5_model.generate(input_ids=input_ids,
force_words_ids=[constrain_ids_list],
num_beams=target_numbers+5, num_return_sequences=target_numbers+5,
return_dict_in_generate=True,
output_scores=True,
max_length=2)
# calculate the mask position
_0_index = text.index('<extra_id_0>')
_result_prefix = text[:_0_index]
_result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>
result_dict = {}
# filter each output and save it into the result dictionary
for output_number, output in enumerate(outputs["sequences"]):
_txt = t5_tokenizer.decode(output[1:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
if _txt in targets:
# save the target score
result_dict[_txt] = outputs["sequences_scores"][output_number]
# complete text
print(_result_prefix + _txt + _result_suffix)
# return the aggregated result
return result_dict
# test the function with this input text
text = 'India is a <extra_id_0> of the world.'
scores = get_target_scores(text, ["part", "state", "country", "democracy"], t5_tokenizer, t5_model)
print(scores)
```
I suggest that we reopen this issue and wrap such functions in the huggingface (fill-mask-)pipeline.
@ArthurZucker
Is the fill-mask-pipeline only for models with a MaskedLMHead?
We should find a way to integrate similar models. There will probably be more such models coming, considering the improvements from the mixture of denoisers.<|||||>Interesting. I don't think I am against adding this, but will ping @Narsil to see what he thinks.
IMO:
- Pros: other models can also benefit from this. T5 is one of the most used, but flan T5 is also on fire!
- Cons: not really equivalent to mask fill pipeline? Would break the fact that it is normally only for models with the `MaskedLMHead`<|||||>I think it fits `fill-mask` quite nicely, in the sense that given a masked input, the model should tell us what should be under the mask.
Now potential caveats/pains:
- Currently each mask returns a single token, where the id is returned; that wouldn't be possible with multiple items. A potential breaking change is needed here (or, most likely, painful legacy code to maintain since we're unlikely to break here).
- Currently, if there are multiple masks, we return each mask location's potential tokens independently. Not sure how t5/flan-t5 work here.
- There is an argument `top_k` which is quite necessary in a lot of situations for `fill-mask`, how that would work on generative ? (Would it get translated to beam-search maybe ?)
- Looking back at the example, it seems that you are suggesting a filling to the model in the decoder prompt, is that correct ? There is the `targets` parameters that might do something similar for bert-like approaches. Not sure how much they really overlap. (Can you have multiple various prompts, and find the most likely?)
- Also does the generative work without any prompt ?
Overall I'm all in favor of adding more complex (hopefully **better**) ways to fill mask, but I anticipate quite some pain in the actual implementation, dealing with what's already there and making the overall experience similar enough.<|||||>Also this task is called `Corrupting Spans` in the original T5 paper, no?


<|||||>I am not sure if this is the right place to ask this, but....I understand that text2text-generation pipeline can be used to achieve kind of MLM objective. But what if i want to train T5 MLM kind of objective on my own data ? Anyone can point me to any resources?<|||||>These kind of question should be asked on the [`forum`](https://discuss.huggingface.co/).
Also find the attached snippet that shows how you can fill in with multiple words.
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
import torch
model = T5ForConditionalGeneration.from_pretrained("t5-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("t5-base")
input_string = "Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>"
model.cuda()
inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda")
outputs = model.generate(inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
```
<pad><extra_id_0> Dursley<extra_id_1> a fortune<extra_id_2> had a long kind<extra_id_3> in<extra_id_4> a daughter named Mary<extra_id_5> Dursley<extra_id_6> with a kind<extra_id_7> in<extra_id_8> in<extra_id_9> a daughter named Mary<extra_id_10> Dursley<extra_id_11> Dursley<extra_id_12> a fortune<extra_id_13> Dursley<extra_id_14> had a short piece<extra_id_15> in<extra_id_16> Dursley<extra_id_17> a fortune<extra_id_18> Dursley<extra_id_19> a fortune<extra_id_20> in Dursley<extra_id_21> a daughter named Mary<extra_id_22> had a long, thick piece<extra_id_23> had a long piece<extra_id_24> with a short piece<extra_id_25> a daughter named<extra_id_26> named<extra_id_27> </s>
```
|
transformers | 21,210 | closed | Declare __len__ method in PreTrainedTokenizerBase | # What does this PR do?
When type hinting a tokenizer with `PreTrainedTokenizerBase`, the type checker doesn't know it supports `len(tokenizer)`, but both the slow and fast versions implement `__len__`, so we declare it at least to make the type hints happy; we could make this an `abstractmethod` as well if needed | 01-20-2023 10:27:51 | 01-20-2023 10:27:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,209 | closed | Encode object type in Donut tokens | # What does this PR do?
This makes use of encoded object types in text generated by Donut. It fixes a few issues:
- keys of the same name appearing at different levels of the JSON are no longer confused
- no more ambiguity between a dict and a list of length 1 containing a dict
Additionally, this allows us to keep track of which keys have been opened and closed so far.
Now we can look ahead to find the token that closes the current element. This allows for much
deeper nesting (beyond just 2 levels) without breaking.
There is some fault tolerance included in the look-ahead. If a closing token cannot be found or a new opening token is encountered unexpectedly, ambiguous parts of the text will be discarded and processing continues with the next part of the text that can be converted to JSON without any ambiguity.
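To make the new format concrete, here is a small illustrative example (hypothetical keys; the exact key order depends on `sort_json_key`):
```python
# Illustrative only -- not taken from a real Donut checkpoint.
data = {"menu": [{"name": "burger", "price": "5.00"}]}

# json2token with type-encoded keys produces something along the lines of:
token_sequence = (
    "<s_menu-list>"
    "<s_price-str>5.00</s_price-str>"
    "<s_name-str>burger</s_name-str>"
    "</s_menu-list>"
)
# token2json can now tell a dict apart from a one-element list when decoding this back to JSON.
```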
This requires matching changes in `json2token`. I wasn't quite sure where to put this. I think at the moment, the `Dataset` code containing that method is only part of the tutorials. Would it make sense to add it here as well? Essentially, all that's needed is something like
```python
import typing as t

import numpy as np
from abc import ABC, abstractmethod


class DonutDatasetMixin(ABC):
added_tokens: list
@abstractmethod
def add_tokens(self, list_of_tokens: t.List[str]):
pass
def json2token(
self,
obj: t.Any,
update_special_tokens_for_json_key: bool = True,
sort_json_key: bool = True,
):
"""
Convert an ordered JSON object recursively into a token sequence
Args:
obj: Object to convert
update_special_tokens_for_json_key (bool):
Add encountered keys as special tokens to the processor's tokenizer
sort_json_key (bool): Whether to sort JSON keys in an object alphabetically
"""
if (obj_type := self.get_object_type(obj)) == "dict":
if len(obj) == 1 and "text_sequence" in obj:
return obj["text_sequence"]
else:
output = ""
if sort_json_key:
keys = sorted(obj.keys(), reverse=True)
else:
keys = obj.keys()
for k in keys:
v = obj[k]
v_obj_type = self.get_object_type(v)
if update_special_tokens_for_json_key:
self.add_tokens([rf"<s_{k}-{v_obj_type}>", rf"</s_{k}-{v_obj_type}>"])
output += (
rf"<s_{k}-{v_obj_type}>"
+ self.json2token(obj[k], update_special_tokens_for_json_key, sort_json_key)
+ rf"</s_{k}-{v_obj_type}>"
)
return output
elif obj_type == "list":
return r"<sep/>".join(
[
self.json2token(item, update_special_tokens_for_json_key, sort_json_key)
for item in obj
]
)
else:
obj = str(obj)
if f"<{obj}/>" in self.added_tokens:
obj = f"<{obj}/>" # for categorical special tokens`
return obj
@staticmethod
def get_object_type(obj: t.Any) -> t.Literal["list", "dict", "str"]:
if isinstance(obj, (list, np.ndarray)):
return "list"
if isinstance(obj, dict):
return "dict"
return "str"
```
Then the dataset can be constructed similarly to how it's already done in the tutorial:
```python
class DonutDataset(Dataset, DonutDatasetMixin):
pass
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@NielsRogge As promised, the improvements that we made to Donut's `token2json`. It works well with more complex JSON data structures, as demonstrated in the added tests.
| 01-20-2023 09:54:48 | 01-20-2023 09:54:48 | Not sure why `black` is failing. `make fixup` doesn't change anything for me.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21209). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @ts2095 , thanks for your contribution and sorry for the late reply.
Could you rebase your branch on main to make the CI green? Also, can you confirm this update is 100% backwards compatible?<|||||>Hi @ts2095 , sorry for the late reply here!
Would you be able to rebase your branch on the main branch of Transformers?
cc'ing @amyeroberts here for a review<|||||>@ts2095 There was a [recent update](https://github.com/huggingface/transformers/pull/22204) on main, updating our CI images to run on Python 3.8, which I believe should resolve the import issue with `from typing import Literal`. Could you rebase to include these? <|||||>@amyeroberts We still support 3.7 so we cannot accept type-hints using Literal.<|||||>@ts2095 Can you confirm that this is backwards compatible and that previous token sequences result in the same json output? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,208 | closed | UL2 Mixture-of-Denoiser loss | ### Feature request
The losses applied in the paper **UL2: Unifying Language Learning Paradigms**.
The Mixture-of-Denoisers losses are described in the UL2 paper, which can be found at the following link: https://arxiv.org/abs/2205.05131
The code is based on T5x (which is JAX/FLAX): https://github.com/google-research/t5x
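For illustration, a rough sketch of what a mixture-of-denoisers sampler could look like on the data side; the span lengths, corruption rates and mixture below are placeholders, the real values are in the paper:
```python
import random

# Placeholder denoiser configurations loosely following the R/S/X split described in the paper.
DENOISERS = [
    {"name": "R", "mean_span": 3, "corruption_rate": 0.15},     # regular span corruption
    {"name": "X", "mean_span": 32, "corruption_rate": 0.5},     # "extreme" denoising
    {"name": "S", "mean_span": None, "corruption_rate": None},  # sequential / prefix-LM objective
]


def sample_denoiser():
    # a real implementation would then build the corrupted inputs/targets for this config
    return random.choice(DENOISERS)


print(sample_denoiser())
```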
### Motivation
I am requesting the addition of the new losses applied in the UL2 paper, called Mixture-of-Denoisers. These new losses have been shown to improve the performance of unsupervised learning models, and I believe they could benefit the Hugging Face community.
### Your contribution
Opening the request | 01-20-2023 08:40:52 | 01-20-2023 08:40:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,207 | closed | Fix `CONFIG_ARCHIVE_MAP_MAPPING_NAMES` | # What does this PR do?
Fix `CONFIG_ARCHIVE_MAP_MAPPING_NAMES` as reported in #21204.
Also, `UPERNET_PRETRAINED_CONFIG_ARCHIVE_MAP` doesn't exist.
Remark: we have a planned deprecation:
```
warnings.warn(
"ALL_PRETRAINED_CONFIG_ARCHIVE_MAP is deprecated and will be removed in v5 of Transformers. "
"It does not contain all available model checkpoints, far from it. Checkout hf.co/models for that.",
FutureWarning,
)
```
Fix #21204 | 01-20-2023 05:59:40 | 01-20-2023 05:59:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I will merge as it is - don't think it matters (at least not at this moment), and eventually this is going to be deprecated. |
transformers | 21,206 | open | OwlVit gives different results compared to original colab version | ### System Info
Using huggingface space and google colab
### Who can help?
@adirik
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
cat picture from http://images.cocodataset.org/val2017/000000039769.jpg
remote control image from https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSRUGcH7a3DO5Iz1sknxU5oauEq9T_q4hyU3nuTFHiO0NMSg37x
### Expected behavior
Being excited about the results of OwlViT, I tried to input some random images to see the results.
Having no experience with JAX, my first option was to look for a Hugging Face Space.
Given a query of "remote control" and a cat picture, I wanted to get the remote controls detected.
https://huggingface.co/spaces/adirik/image-guided-owlvit

The results are not really what I expected (no box on the remotes).
Then I checked the results on the Colab version, to see if they behave the same way.
https://colab.research.google.com/github/google-research/scenic/blob/main/scenic/projects/owl_vit/notebooks/OWL_ViT_inference_playground.ipynb#scrollTo=AQGAM16fReow

It correctly draws boxes on the remotes.
I am not sure what is happening; which part should I look at to determine what causes this difference?
| 01-20-2023 05:23:22 | 01-20-2023 05:23:22 | Yes we had a hard time making the Space output the same bounding boxes as in Colab (eventually it worked on the cats image). It had to do with the Pillow version.
So I'm guessing there might be a difference in Pillow versions here as well
Cc @alaradirik <|||||>Do you mean Pillow changes the input value?
I tried another image

The Space model can't detect the cat inside this image, but the Colab version can detect it.

<|||||>@darwinharianto thanks for bringing the issue up, I'm looking into it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Kindly bumping<|||||>Kindly reminder<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @alaradirik and @amyeroberts <|||||>I got the same issues.
This is the original repo's result.

And this is [huggingface demo](https://huggingface.co/spaces/Jiayi-Pan/OWL-ViT).
```
text_queries = text_queries.split(",")
target_sizes = torch.Tensor([img.shape[:2]])
inputs = processor(text=text_queries, images=img, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
outputs.logits = outputs.logits.cpu()
outputs.pred_boxes = outputs.pred_boxes.cpu()
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
```
<img width="1036" alt="image" src="https://user-images.githubusercontent.com/27891090/233775093-ce8aee88-b0a0-4d81-b917-ab3136c5388d.png">
The `rocket` bounding box score is different. (0.15 vs more than 0.21)
With lvis-api, the performance is not reproduced. (mAP = 0.095)<|||||>It seems the problem still exists. I mentioned the problem here.
https://github.com/huggingface/transformers/pull/23157#issuecomment-1540056705
Maybe the best way is to cover model predictions with end-to-end tests on a batch of images. This approach would help us be sure about changes.
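A sketch of what such a regression test could look like (the expected values would come from the original google/scenic implementation; the slicing and tolerance below are placeholders):
```python
import torch
from transformers import OwlViTForObjectDetection, OwlViTProcessor


def test_owlvit_regression(images, texts, expected_boxes):
    processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
    model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
    inputs = processor(text=texts, images=images, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # compare against reference boxes produced by the original implementation
    assert torch.allclose(outputs.pred_boxes[0, :3], expected_boxes, atol=1e-3)
```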
<|||||>@MaslikovEgor I agree with you. I have end-to-end test with lvis-api (both huggingface owlvit and google/scenic owl-vit). But owl vit in huggingface is not reproduced. (mAP = 0.095)
- [baseline](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit): mAp 0.193
<|||||>I want to fix this problem, but it would be efficient if I knew where to start. Can you give me a suggestion? @alaradirik <|||||>Hi @MaslikovEgor,
The demo didn't work before this fix as well (see https://github.com/huggingface/transformers/pull/20136). Try running coco evaluation with image conditioning before/after this fix, [email protected] increases from 6 to 37. This is still below the expected 44, but closer to the reported/expected performance. I am still trying to figure out why.
Best,
Orr<|||||>@RRoundTable, the issues you are reporting seem to do with the text-conditioned evaluation. This means that the issues probably stem from the forward pass/post-processing.
In your LVIS eval, did you make sure to implement a new post-processor that incorporates all the changes needed for eval? If helpful, I can add my function to 'processor' or something, please notice there are a few changes compared with normal inference.<|||||>@orrzohar, Yes. I tested with text-conditioned evaluation.
In my LVIS eval, I just used huggingface's postprocessor and preprocessor. It would be helpful if you contribute some functions.
```
transformers[torch] == 4.28.1
```
```
# example script
import requests
from PIL import Image
import torch
import glob
import os
import argparse
import json
from tqdm import tqdm
from transformers import OwlViTProcessor, OwlViTForObjectDetection
parser = argparse.ArgumentParser()
parser.add_argument("--dataset-path", type=str, required=True)
parser.add_argument("--text-query-path", type=str required=True)
parser.add_argument("--save-path", default="owl-vit-result.json", type=str)
parser.add_argument("--batch-size", default=64, type=int)
args = parser.parse_args()
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.to(device)
with open(args.text_query_path, "r") as f:
text_query = f.read()
images = glob.glob(os.path.join(args.dataset_path, "*"))
image_ids = [img_path.split("/")[-1].split(".")[0] for img_path in images]
instances = []
N = len(images)
with torch.no_grad():
for i in tqdm(range(N // args.batch_size + 1)):
image_ids = []
batch_images = []
target_sizes = []
for img_path in images[i * args.batch_size: (i+1) * args.batch_size]:
image_ids.append(int(img_path.split("/")[-1].split(".")[0]))
image = Image.open(img_path).convert("RGB")
batch_images.append(image)
target_sizes.append((image.size[1], image.size[0]))
target_sizes = torch.Tensor(target_sizes)
target_sizes = target_sizes.to(device)
texts = [text_query.split(",")] * len(batch_images)
inputs = processor(text=texts, images=batch_images, return_tensors="pt")
inputs = inputs.to(device)
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
for image_id, res in zip(image_ids, results):
for bbox, score, label in zip(res["boxes"], res["scores"], res["labels"]):
# tensor to numpy
bbox = bbox.cpu().detach().numpy()
score = score.cpu().detach().numpy()
label = label.cpu().detach().numpy()
# bbox format: xyxy -> xywh
x1, y1, x2, y2 = bbox
bbox = [int(x1), int(y1), int(x2-x1), int(y2-y1)]
instance = {}
instance["image_id"] = image_id
instance["bbox"] = bbox # TODO
instance["score"] = float(score)
instance["category_id"] = int(label) + 1 # TODO
instances.append(instance)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @RRoundTable ,
I added a PR with the appropriate evaluation protocol
https://github.com/huggingface/transformers/pull/23982
Best,
Orr<|||||>Hi! @alaradirik,
I'm using transformers==4.30.2 but still encountered the same issue. Any thought on this?
**Query image:** (image omitted)
**Result from colab:** (image omitted)
**Result from huggingface:** (image omitted)
|
transformers | 21,205 | closed | WIP: Added basic eos token based pooling | # What does this PR do?
This PR is still a WIP. This is based on [this issue](https://github.com/huggingface/transformers/issues/21029). The main problem is that when new tokens are added to the tokenizer and text model and learned, such as with [textual inversion](https://textual-inversion.github.io/), the CLIP text model pools at the wrong location: the pooling happens at the new token location and not at the eos token id location.
Fixes #21029
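The core of the change is roughly this kind of pooling: locate the eos token per sequence instead of relying on `input_ids.argmax()` (sketch only, not the final diff):
```python
import torch


def pool_at_eos(last_hidden_state, input_ids, eos_token_id):
    # index of the first eos token in each sequence, instead of input_ids.argmax(dim=-1),
    # which breaks once newly added tokens get ids larger than the eos id
    eos_positions = (input_ids == eos_token_id).int().argmax(dim=-1)
    batch_indices = torch.arange(last_hidden_state.shape[0], device=last_hidden_state.device)
    return last_hidden_state[batch_indices, eos_positions]
```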
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Models:
- text models: @ArthurZucker and @younesbelkada | 01-20-2023 04:34:12 | 01-20-2023 04:34:12 | @ArthurZucker Hi! Just moved to this pr. Just some git issues so switched but this is based on the pr [here](https://github.com/huggingface/transformers/pull/21096)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21205). All of your documentation changes will be reflected on that endpoint.<|||||>Attempt fixing the bugs now<|||||>From message on old pr: @ArthurZucker Thanks for the comment! Very interesting. The reason I'm resistant to just using self.config.vocab_size-1 is that when adding new tokens for the textual inversion training, usually we increase the resize_token_embedding method. So then when loading the trained embeddings, self.config.vocab_size-1 is not the eos token id anymore.
Do you think I should change the logic for resize_token_embeddings and tokenizer instead so that the eos_token_id is always the max? The disadvantage of this is that it'll be way more code.<|||||>Hey, not really, I would say the least changes the better! <|||||>Also for the eos_token ids, you can just set it as an argument in the clip config, and maybe raise some kind of warning if it is not the last? The default should be `config.eos_token_id`<|||||>@ArthurZucker Thanks for the comment! Good point. Will do that asap<|||||>ok seems like the bugs are coming from indexing I do when calculating the new clip pooling. I'll try fixing that within this week<|||||>@isamu-isozaki Thank you for working on this PR.
@ArthurZucker instead of assuming the eos_token is the last (by id) or relying on the config, isn't it better to look the id up from vocab?
Something like:
```
self.bos_token = self.vocab["<|startoftext|>"]
self.eos_token = self.vocab["<|endoftext|>"]
self.pad_token = self.vocab["<|endoftext|>"]
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,204 | closed | Typo in XCLIP model | ### System Info
transformers version 4.25.1
### Who can help?
@NielsRogge @ydshieh
There's a typo/mismatch for the xclip model pretrained config archive map between https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/configuration_auto.py#L333
and https://github.com/huggingface/transformers/blob/main/src/transformers/models/x_clip/configuration_x_clip.py#L27.
Notice that in the first example there is an underscore between X and CLIP, whereas in the second example there isn't.
This leads to an error when initializing the AutoModel:
```
huggingface_models = list(ALL_PRETRAINED_CONFIG_ARCHIVE_MAP.keys())
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py", line 612, in keys
self._initialize()
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/configuration_auto.py", line 602, in _initialize
mapping = getattr(module, map_name)
File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1086, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.models.x_clip has no attribute X_CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. from transformers import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP
2. huggingface_models = list(ALL_PRETRAINED_CONFIG_ARCHIVE_MAP.keys())
### Expected behavior
Expect no errors to be raised | 01-20-2023 01:34:35 | 01-20-2023 01:34:35 | @Zhilin123 Thank you for reporting. See #21207. However, note that
```
ALL_PRETRAINED_CONFIG_ARCHIVE_MAP is deprecated and will be removed in v5 of Transformers.
``` |
transformers | 21,203 | closed | rename configuration_utils.PretrainedConfig.max_length to max_generation_length | ### Feature request
Max length (and min length) is an overloaded term. This renaming would help disambiguate this parameter from tokenizer max lengths when parameters are logged.
https://github.com/huggingface/transformers/blob/862888a35834527fed61beaf42373423ffdbd216/src/transformers/configuration_utils.py#L119
### Motivation
When looking at Mlflow logs for my text classification model, I saw "max length" as one of the parameters. I had to debug to figure out that it was not relevant to classification. This seems to me like the simplest solution to make it clear that it is a parameter that is only relevant to generation.
### Your contribution
I can of course do a PR with the renaming change. However, I am not sure of the downstream implications. | 01-20-2023 01:07:11 | 01-20-2023 01:07:11 | Thanks for raising an issue. Renaming an argument like this which is so widely used is too breaking a change for us to consider however. <|||||>Makes sense. Perhaps there is something that can be done in the mlflow logger instead. I'll raise a different issue if I have any ideas. |
transformers | 21,202 | closed | batched feature extraction pipeline for GPT-style models | ### Feature request
It would be nice to support feature extraction of batched input for GPT-style models using `Pipeline`s.
### Motivation
I'm currently trying to generate encodings of a large number of sentences using LLMs. I.e.,:
```python
classification_token_idx: int
fe = pipeline("feature-extraction", model="some-LLM", framework="pt", return_tensors=True)
inputs = [...]
output = fe(inputs)
H = torch.stack([x[0, classification_token_idx, :] for x in output])
```
where depending on whether I'm using a BERT-style or GPT-style model, `classification_token_idx` will be either 0 or -1, respectively. My use-case can greatly benefit from batching, but the adapted snippet no longer works for GPT-style models:
```python
...
output = fe(inputs, batch_size=BATCH_SIZE)
H = torch.stack([x[0, classification_token_idx, :] for x in output])
```
In a batch of sequences, the 0th index of a sequence will always be the `[CLS]` token regardless of padding. However, the last index of a sequence in a padded batch of sequences will most likely be a `[PAD]` token rather than the true last token of the sequence. Using the `Pipeline` interface with a GPT-style model makes it non-trivial to extract features *and* take advantage of input batching, leaving users with three options:
1. do not batch. Possibly much slower, but reliable
2. do not use a `Pipeline`. More robust, but fairly cumbersome to implement and will likely be repeated across most users
3. implement a custom `Pipeline`. The most "elegant" solution (IMO), but one that should arguably be in the huggingface library (and is the point of this feature request.)
### Your contribution
I doubt I'm the person for the job. | 01-19-2023 23:53:49 | 01-19-2023 23:53:49 | cc @Narsil <|||||>This one is tricky.
In general GPT-like models should pad on the left, not on the right, meaning your snippet **should** work.
However, there's no simple way to tell if everything is properly configured. (The pipeline does whatever the config is set to do; it doesn't try to reason about it.)
In theory, everything should be quite transparent if the `padding_side` is properly set on the tokenizer.
Would that solve your issue ?
If you're unsure, maybe setting `classification_token_idx = 0 if fe.tokenizer.padding_side == "right" else -1` could do the trick.
Maybe if you have a specific model where it doesn't work properly I could take a look ?
Note: I wasn't aware GPT-like models were good for document embedding.<|||||>Hi, @davidegraff this might be helpful,
```
import torch
from transformers import pipeline
ipts = ["Hi I am human.", "The sky", "hello there"]
fe = pipeline(task="feature-extraction",
model="gpt2",
framework="pt",
return_tensors=True)
# Since gpt2 doesn't have a pad_token
if not fe.tokenizer.special_tokens_map.get("pad_token"):
    pad_token = {"pad_token": "<|endoftext|>"}
    fe.tokenizer.add_special_tokens(pad_token)
    fe.model.resize_token_embeddings(len(fe.tokenizer))
# Make sure the padding_side is "left" (if you open the GPT-2 tokenizer you will find that by default
# the padding_side is "right")
fe.tokenizer.padding_side = "left"  # For BERT-like models use "right"
# get the outputs
opts = fe(ipts, batch_size=3)
classification_token_idx = -1  # For BERT-like models use 0 (if you want to use the embeddings of the [CLS] token)
H = torch.stack([x[0, classification_token_idx, :] for x in opts])
```
To see if batch_size is working or not I ran:
```
%%timeit -n 100
opts = fe(ipts)
```
>> 143 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
%%timeit -n 100
opts = fe(ipts, batch_size=3)
```
>> 86.6 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
I hope it helps.<|||||>I see! Thanks for the example @susnato!
re @Narsil:
My use-case is a little specialized- I'm looking specifically at LLMs trained on chemical corpora. Frey et al. [[1]] show that you can use the "embeddings" from a GPT model trained on molecules in an unsupervised fashion as molecular representations (Fig. 7). They made their model available on the HF hub [[2]], but it seems like there might be an issue with padding based on your explanation:
```python
>>> featurizer = pipeline(
"feature-extraction",
model="ncfrey/ChemGPT-1.2B",
framework="pt",
return_tensors=True
)
>>> featurizer.tokenizer.padding_side
"right"
```
The original data preparation code also makes no mention of left-wise padding [[3]]. Loading the individual tokenizer (via `AutoTokenizer`) results in the same thing. Does this mean it was trained incorrectly? Or is this just something I have to be aware of when loading `Tokenizer`s (by adding `tokenizer.padding_side = "left"` for GPT-style models)?
[1]: https://doi.org/10.26434/chemrxiv-2022-3s512
[2]: https://huggingface.co/ncfrey/ChemGPT-1.2B
[3]: https://github.com/ncfrey/litmatter/blob/main/lit_data/lm_data.py#L31<|||||>> Does this mean it was trained incorrectly?
I'm not sure how we train "correctly" so it'd be hard to train "incorrectly". Joke aside, a model is trained a certain way, it's up to the inference to understand how it was done and was it acceptable or not within the framework of how it was trained.
Padding side in training shouldn't matter at inference, especially for causal LMs, since they're supposed to ignore the padding anyway.
Now that's the theory; in practice I would definitely run some tests to make sure that what's supposed to hold actually does.
But you could always just override the `padding_side` and see how the results compare to the non-batched, non-overridden ones on a subset of examples you know the answer for. That would be my first step at least.
To override.
```python
pipe = pipeline("feature-extraction", model="cfrey/ChemGPT-1.2B",
framework="pt",
return_tensors=True)
pipe.tokenizer.padding_side = "left"
for out in pipe(...):
print(out)
```
For instance.<|||||>Thanks for taking the time to explain all this, I really appreciate it!
The original paper is sparse on details, so I'm not really sure what the authors are doing when they (1) generate encodings and (2) how (or if) they pad during inference. In the absence of these details, I guess I'm trying to take a principled approach to generate these encodings:
1) a sanity check to make sure no funny business is going on
```python
featurizer = pipeline(
"feature-extraction", model="ncfrey/ChemGPT-1.2B", framework="pt", return_tensors=True,
)
featurizer.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
sfs = [
'[C][C][N][=C][Branch1_1][O][N][C][C][C][C][C][C][C][Ring1][Branch1_3][S][C][Expl=Ring1][=N][C][Branch1_2][C][=O][O-expl]',
'[C][N][Branch1_1][Branch2_2][C][C][C][C][C][C][Ring1][Branch1_2][S][Branch1_2][C][=O][Branch1_2][C][=O][C][=C][C][=C][Branch2_1][Ring1][Branch2_2][N][C][Branch1_2][C][=O][C][C][N][C][Branch1_2][C][=O][C@Hexpl][C][C][=C][C][C@@Hexpl][Ring1][Branch1_2][C][Ring1][Branch2_3][=O][C][=C][Ring2][Ring1][Branch1_2]',
'[C][C@Hexpl][C][C][C@Hexpl][Branch2_1][Ring1][Branch1_3][NH+expl][C][C][C][C@Hexpl][Branch1_1][=N][C@Hexpl][Branch1_1][C][O][C][=N][C][=C][N][Ring1][Branch1_1][C][C][Ring1][=C][C][Ring2][Ring1][Ring1]',
'[N][/C][Branch2_1][Ring1][Ring2][C][N][C][Branch1_2][C][=O][C@@Hexpl][C][C][=C][C][=C][C][=C][Ring1][Branch1_2][S][Ring1][Branch2_2][=N][\\O]'
]
featurizer.tokenizer.padding_side = 'right'
X_unpadded_r = torch.stack([H[0, -1, :] for H in featurizer(sfs)])
featurizer.tokenizer.padding_side = 'left'
X_unpadded_l = torch.stack([H[0, -1, :] for H in featurizer(sfs)])
torch.allclose(X_unpadded_r, X_unpadded_l)
# True
```
2) now seeing the effects of batching
```python
featurizer.tokenizer.padding_side = 'right'
X_padded_r = torch.stack([H[0, -1, :] for H in featurizer(sfs, batch_size=4)])
featurizer.tokenizer.padding_side = 'left'
X_padded_l = torch.stack([H[0, -1, :] for H in featurizer(sfs, batch_size=4)])
torch.allclose(X_padded_r, X_padded_l)
# False
torch.allclose(X_unpadded_r, X_padded_l), torch.allclose(X_unpadded_r, X_padded_r)
# (False, False)
```
so while it's expected that left vs. right padding produces different results if we take the same embedding (i.e., the last token), it is surprising that _neither_ of these is the same as the unpadded results. The simple answer in this situation is likely just "then don't batch," but there are significant performance gains to be had when utilizing batching. Do you have any advice here? Thanks again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,201 | closed | Fix code example in training tutorial | The Keras [section](https://huggingface.co/docs/transformers/main/en/training#train-a-tensorflow-model-with-keras) of the training tutorial throws an error because it tokenizes `dataset["text"]` instead of `dataset["sentence"]`. There is no `text` column in this dataset. | 01-19-2023 22:52:48 | 01-19-2023 22:52:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,200 | closed | Fix task summary doctest | The doctests are failing for the updated task summary page because outputs weren't included in the code examples. This PR adds real inputs that can be pipelined instead of the generic `("path/to/data/file")`. I skipped the `text-generation` pipeline, but let me know if you'd prefer setting a seed for it so we can still reliably generate an output. | 01-19-2023 19:18:29 | 01-19-2023 19:18:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,199 | closed | Remove all hf-internal-testing checkpoints that can be removed | # What does this PR do?
This PR continues the work on docstrings and removes all checkpoints from the hf-internal-testing org where they can be removed. | 01-19-2023 19:13:14 | 01-19-2023 19:13:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,198 | closed | [Whisper] Fix pipeline after timestamp merges | # What does this PR do?
Fix the ASR pipeline for Whisper without timestamps by ensuring the `WhisperTimestampProcessor` is not added to the list of logits processors when it is not requested.
Fixes #21179 | 01-19-2023 17:18:38 | 01-19-2023 17:18:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,197 | closed | Flax dtype-dependent numerical masking | # What does this PR do?
Fixes #21176
For some models, our Flax numerical masking was incompatible with the desired variable type. This PR fixes it by selecting a numerical mask that is the minimum for the corresponding variable type.
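A minimal sketch of the idea (illustrative function, not the actual diff):
```python
import jax.numpy as jnp

def mask_attention_scores(scores, attention_mask):
    # use the smallest value representable in the scores' dtype instead of a
    # hard-coded large negative constant, which overflows in float16/bfloat16
    min_value = jnp.finfo(scores.dtype).min
    return jnp.where(attention_mask, scores, jnp.full_like(scores, min_value))
```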
This PR is akin to #17306 for PT. Thank you @LysandreJik and @ydshieh for pointing it out 🙏 | 01-19-2023 16:10:39 | 01-19-2023 16:10:39 | @sgugger this solution was discussed on Slack with the Flax team, hence no added Flax reviewers :)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,196 | closed | Enabling live `automatic-speech-recognition` asr for Whisper. | # What does this PR do?
Enables live ASR for Whisper.
Inference is slower for Whisper than for CTC models though, so these demos come with some caveats:
**It will not be live on hardware that is too small**, simply because inference will fall behind real time.
The simplest fix would be to increase the `stream_chunk_s` parameter. That will reduce the "liveliness" of the inference but put less strain on the hardware.
Another, more complex fix (outside of these short scripts) would be to keep track of real time and **skip** some inferences in the pipeline when we are too late.
Live script:
```python
import sys
import numpy as np
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live
from curses import wrapper
import curses
def main(stdscr):
    pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=0)
    sampling_rate = pipe.feature_extractor.sampling_rate
    chunk_length_s = 5
    stream_chunk_s = 0.1
    mic = ffmpeg_microphone_live(
        sampling_rate=sampling_rate,
        chunk_length_s=chunk_length_s,
        stream_chunk_s=stream_chunk_s,  # stride_length_s=(1, 0.1)
    )
    # `curses.wrapper` has already initialised the screen and passes it in as `stdscr`
    stdscr.addstr(0, 0, "Start talking...")
    stdscr.refresh()
    text = ""
    for item in pipe(mic):
        displayed = text + item["text"]
        if not item["partial"][0]:
            text += item["text"]
        stdscr.addstr(0, 0, displayed)
        stdscr.clrtoeol()
        stdscr.refresh()

if __name__ == "__main__":
    wrapper(main)
```
Simpler script:
```python
import datetime
import sys
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-base", device=0)
sampling_rate = pipe.feature_extractor.sampling_rate
start = datetime.datetime.now()
chunk_length_s = 5
stream_chunk_s = 0.1
mic = ffmpeg_microphone_live(
sampling_rate=sampling_rate,
chunk_length_s=chunk_length_s,
stream_chunk_s=stream_chunk_s,
)
print("Start talking...")
for item in pipe(mic):
    sys.stdout.write("\033[K")
    print(item["text"], end="\r")
    if not item["partial"][0]:
        print("")
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 01-19-2023 15:35:45 | 01-19-2023 15:35:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> LGTM but we could add your script somewhere no? Seems like it was asked
Where ?
The `examples` folder is more focused on fine-tuning/training. It could go in the docstring of the ASR pipeline (which already contains tons of information).
Ultimately both examples are nice-to-have things, but definitely not something we want to support like the rest of core `transformers`, iirc the discussions when this was added. There are too many specific things: capturing the correct mic is a hard job, the amount of features that might be wanted could be very intense, and it's not really the goal of `transformers` to maintain this. This is more of a showcase, and using ffmpeg enables a relatively short code base to support it.
Maybe something that could be more prominently featured in `speechbox`. @patrickvonplaten for an opinion on this ? |
transformers | 21,195 | closed | Add class properties with warnings | # What does this PR do?
Adds properties with deprecation warnings to image processors for backwards compatibility. This resolves issues users had when trying to reference a deprecated property, e.g. `image_processor.reduce_labels`.
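A rough sketch of the pattern (class and attribute names are made up, not the actual implementation):
```python
import warnings

class ExampleImageProcessor:
    def __init__(self, do_reduce_labels=False):
        self.do_reduce_labels = do_reduce_labels

    @property
    def reduce_labels(self):
        # old attribute name kept as a read-only property for backwards compatibility
        warnings.warn(
            "`reduce_labels` is deprecated and will be removed in a future version. "
            "Please use `do_reduce_labels` instead.",
            FutureWarning,
        )
        return self.do_reduce_labels
```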
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 01-19-2023 14:29:54 | 01-19-2023 14:29:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,194 | closed | Rename GLPN image processor tests | # What does this PR do?
Renames GLPN feature extractor tests. Missed file in #21140
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 01-19-2023 14:10:34 | 01-19-2023 14:10:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,193 | closed | [`CVT`] Fix module initialization issue | # What does this PR do?
This PR fixes the issue described in the PR https://github.com/huggingface/transformers/pull/20803 and this comment: https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540 for `CVT`
Before this PR, if a user wanted to initialize a CvT model in half precision with the example script below, they would encounter an error that is hard to interpret:
```
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
import torch
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-13')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-13', torch_dtype=torch.float16).to(0)
inputs = feature_extractor(images=image, return_tensors="pt").to(0, torch.float16)
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Error message:
```
RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'
```
The reason for the error is described in https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540
Therefore, this PR circumvents the issue by forcing the `cls_token` parameter to be initialized in the correct place.
All slow tests pass
cc @sgugger @ydshieh
If this PR gets merged, there should be no more modules in `transformers` that will be initialized with `trunc_normal_` outside `init_weights` method | 01-19-2023 14:09:57 | 01-19-2023 14:09:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Regardless of anything else, the initialization should always be done in `_init_weights`. This is the reason we have many flaky failures with tests that check slow/fast init give the same results for instance.<|||||>Thanks everyone for double checking!
<|||||>Here is another explanation on why we should centralise weights initialization under `init_weights` (i.e. a more condensed explanation of https://github.com/huggingface/transformers/pull/20803#discussion_r1059138540 for anyone that wants to know more about the problem)
Regardless if you are in a GPU or CPU, `from_pretrained`[ calls `model = cls(config, *model_args, **model_kwargs) ` at some point under the hood,](https://github.com/huggingface/transformers/blob/b9403e951661b53630afd95166874f75ede885c4/src/transformers/modeling_utils.py#L2360) (i.e. calls `model.__init__`) that will sequentially call `__init__` functions of each submodule of the model. This is called on CPU, sometimes on `meta` if `device_map` is enabled.
Before this PR, this meant calling `trunc_normal_` from `torch.nn` for CvT, which is not supported under `fp16`.
Since `init_weights` is not called by `from_pretrained` - ([`_fast_init` is always set to `True`](https://github.com/huggingface/transformers/blob/b9403e951661b53630afd95166874f75ede885c4/src/transformers/modeling_utils.py#L2002) so `transformers` models never calls `init_weights` if `from_pretrained` is called, except if a user forces to do so - but there is no benefit doing it), we should centralise all weights initialization inside this function, therefore avoid calling this function when it is not needed. |
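To make the pattern concrete, here is a toy illustration (made-up module, not the real CvT code):
```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # only *allocate* the parameter in __init__; it may run in float16 or on `meta`
        self.cls_token = nn.Parameter(torch.zeros(1, 1, hidden_size))

    def _init_weights(self, module):
        # all random initialization lives here, so it only runs for weights that
        # are not loaded from a checkpoint
        if isinstance(module, ToyModel):
            nn.init.trunc_normal_(module.cls_token, std=0.02)
```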
transformers | 21,192 | closed | Fix device issue in `UperNetModelIntegrationTest` | # What does this PR do?
Fix device issue in `UperNetModelIntegrationTest`. | 01-19-2023 13:14:23 | 01-19-2023 13:14:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,191 | closed | Generate: documented function to compute the transition scores | # What does this PR do?
Fixes #18616; Addresses comments in #5164, #20008 (and in a few other issues that I've lost track of).
## The issue
A few users would like to have a simple function to obtain the transition scores (i.e. the logits for each selected token at generate time). This is very useful for exploring the generated contents and simplifies the construction of powerful color-coded interfaces (e.g. [this one](https://joel.tools/codegen/)). It is also commonly requested to compare our models against OpenAI's.
We had a function for that in PT, `compute_beam_transition_scores`, but it was unknown to most users. This is because it was limited to Beam-based approaches, was not in our documents, and had no examples.
## The solution
This PR upgrades the function above to a first-class citizen 🥇 :
1. Makes it compatible with all generation strategies (e.g. Sample)
2. Adds a flag to renormalize the logits before fetching the right ones, which is a frequent downstream use
3. Adds it to the documentation
4. Populates the docstring with examples for which I've got questions a few times (how to print the token probabilities and how to recompute the score of the sequences in beam search)
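For reference, typical usage after this PR looks roughly like the following (greedy decoding shown; for beam search the beam indices are passed as well):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today is", return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True
)

# one score per generated token; normalize_logits=True renormalizes them so that
# they can be read as log-probabilities
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
```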
In the process, I've decided to update the name of the function (`compute_beam_transition_scores` -> `compute_transition_scores`), to match better what it does. Although this technically breaks the API, the function was not part of our documented functions and, given the number of related issues, I'd say it was mostly unknown. | 01-19-2023 12:19:10 | 01-19-2023 12:19:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger merging as the failing test is a known flaky test (`tests/models/auto/test_modeling_auto.py::AutoModelTest::test_from_pretrained_dynamic_model_distant`)<|||||>Thanks! This function is super helpful for my use case. |
transformers | 21,190 | closed | Update year 2020 to 2023 | # What does this PR do?
Update year 2020 to 2023 for a single file | 01-19-2023 12:07:40 | 01-19-2023 12:07:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21190). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,189 | closed | workaround documentation rendering bug | # What does this PR do?
In the doc comments for a number of our models, the following occurs:
```text
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
>= 7.5 (Volta).
```
The documentation renderer sees the `>` from `>= 7.5 (Volta)` as starting a quote. The resulting docs look like this:
<img width="480" alt="Screen Shot 2023-01-19 at 12 48 38" src="https://user-images.githubusercontent.com/346853/213434848-2b9d8ea0-6975-43a6-984c-3ec501487ac6.png">
To work around this issue, I simply put the `>= 7.5` portion inside backticks everywhere in the code (even when it doesn't occur at the beginning of the line).
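For illustration, the docstring above with the workaround applied reads:
```text
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
    This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
    `>= 7.5` (Volta).
```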
Arguably, this should be fixed in the documentation tools instead.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-19-2023 11:53:19 | 01-19-2023 11:53:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,188 | closed | hertz is already per second | # What does this PR do?
Small documentation update. The sampling rate was described as "Hertz per second", but hertz (usually not capitalized) already means per second.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-19-2023 11:41:24 | 01-19-2023 11:41:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,187 | closed | [Whisper] Fix timestamp processor |
# What does this PR do?
Mostly adds conditions, based on timing information, for when to look in the past and in the future.
Fixes the tests | 01-19-2023 10:47:22 | 01-19-2023 10:47:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21187). All of your documentation changes will be reflected on that endpoint.<|||||>Tested with a concatenated librispeech (clean, test, 5.4 hours), took 393.1348168849945 seconds, and with a WER 0.030776774096215136. So in that case not really sure why we are performing better.
OpenAI took 2661.047640323639 seconds and had a WER of 0.2589624153733004, which is pretty interesting (the model is `large`), so roughly a `x6.7` speedup.
<|||||>Will open a PR for the other fix |
transformers | 21,186 | closed | Add Japanese translation index.mdx | # What does this PR do?
Adds Japanese translation to index.mdx
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-19-2023 09:22:34 | 01-19-2023 09:22:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker could you have a look?<|||||>Thanks a lot for this 🚀 |
transformers | 21,185 | closed | "text2text-generation" pipeline fails when setting return_dict_in_generate=True | ### System Info
- `transformers` version: 4.23.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.4
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Setting `return_dict_in_generate` to `True` in the `text2text-generation` pipeline returns the following error:
```
Traceback (most recent call last):
File "/Users/karimfoda/.asdf/installs/python/3.9.4/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/karimfoda/.asdf/installs/python/3.9.4/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/karimfoda/.vscode/extensions/ms-python.python-2022.20.2/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/scripts/debug_return_dict_in_generate_error.py", line 6, in <module>
print(sentiment_t5_model("hello this is a test", return_dict_in_generate=True, output_scores = True))
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 148, in __call__
result = super().__call__(*args, **kwargs)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1081, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 990, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/Users/karimfoda/Documents/STUDIES/PYTHON/KAIZAN/DATASETS/_env/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 173, in _forward
output_ids = output_ids.reshape(in_b, out_b // in_b, *output_ids[0].shape[1:])
AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'reshape'
```
Running the following code reproduces this error:
```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead, AutoModelForCausalLM
sentiment_t5_model = pipeline("text2text-generation", model = "mrm8488/t5-base-finetuned-imdb-sentiment")
print(sentiment_t5_model("hello this is a test", return_dict_in_generate=True, output_scores = True))
```
### Expected behavior
The expected output is:
`[{'generated_text': 'positive', 'scores': (tensor([[-19.5107, -12.7762, -13.3044, ..., -41.9292, -41.8459, -41.9196]]), tensor([[-69.0289, -8.4889, -31.7621, ..., -75.5579, -75.6114, -75.5323]]))}]`
I was able to produce this output and fix this issue by changing:
https://github.com/huggingface/transformers/blob/6d67664380c09a1e9e1e3771f2124cd49b72f6be/src/transformers/pipelines/text2text_generation.py#L188-L192
to:
```
out_b = output_ids['sequences'].shape[0]
if self.framework == "pt":
    output_ids['sequences'] = output_ids['sequences'].reshape(in_b, out_b // in_b, *output_ids['sequences'].shape[1:])
elif self.framework == "tf":
    output_ids['sequences'] = tf.reshape(output_ids['sequences'], (in_b, out_b // in_b, *output_ids['sequences'].shape[1:]))
```
and
https://github.com/huggingface/transformers/blob/6d67664380c09a1e9e1e3771f2124cd49b72f6be/src/transformers/pipelines/text2text_generation.py#L201-L206
to:
```
record = {
    f"{self.return_name}_text": self.tokenizer.decode(
        output_ids,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    ),
    "scores": model_outputs["output_ids"]['scores']
}
```
if this is an acceptable fix happy to submit a PR for these changes. | 01-19-2023 07:07:40 | 01-19-2023 07:07:40 | This is only valid if we indeed have the argument `return_dict_in_generate`. Otherwise the pipeline will also fail because `output_ids` will not be a dictionary. Pipelines in general currently don't support outputting anything else than the text prediction. See #21274. @Narsil do you think we could support something like `output_generate_dict` which would just output everything? (might be useful for people who want to use all in one tokenizer, feature extractor and model but still post process)<|||||>> want to use all in one tokenizer, feature extractor and model but still post process
Feels a bit power usery to me.
Two options :
- Subclass the pipeline and use it via `pipeline(..., pipeline_class=MyOwnClass)`, which will use your subclass where everything is free to modify (and still benefit from batching and such); a rough sketch is given after this list.
- Make it shareable to the world with a custom pipeline: https://huggingface.co/docs/transformers/v4.26.0/en/add_new_pipeline#how-to-create-a-custom-pipeline
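For the first option, a rough, untested sketch (using the existing `Text2TextGenerationPipeline` as the base class; the overridden method bodies are simplified and only meant to show the idea):
```python
from transformers import Text2TextGenerationPipeline, pipeline

class ScoredText2TextPipeline(Text2TextGenerationPipeline):
    def _forward(self, model_inputs, **generate_kwargs):
        output = self.model.generate(
            **model_inputs,
            return_dict_in_generate=True,
            output_scores=True,
            **generate_kwargs,
        )
        # keep the full generate() output around instead of only the ids
        return {"output": output}

    def postprocess(self, model_outputs, **postprocess_params):
        output = model_outputs["output"]
        text = self.tokenizer.decode(output.sequences[0], skip_special_tokens=True)
        return [{"generated_text": text, "scores": output.scores}]

pipe = pipeline(
    "text2text-generation",
    model="mrm8488/t5-base-finetuned-imdb-sentiment",
    pipeline_class=ScoredText2TextPipeline,
)
```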
There are **many** things that could be done within the text-generation pipeline, but I fear we should be very sparse in what we agree to add and maintain. The main goal of the pipeline is to be usable by non-ML people, meaning we need to refrain from adding many use cases which require understanding how the tokens work. Advanced usage is always possible with lower-level tools, and I feel that's where it belongs.
Does that make sense ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,184 | closed | ImportError: cannot import name 'LayoutLMv3ForTokenClassification' from 'transformers' (unknown location) | from transformers import LayoutLMv3ForTokenClassification
Unable to import LayoutLMv3 models
OS : Windows 10
Python : Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 4 2021, 13:27:16) [MSC v.1928 64 bit (AMD64)] on win32
Package versions: transformers==4.26.0.dev0, torch==1.13.1

| 01-19-2023 00:59:17 | 01-19-2023 00:59:17 | Hi,
I'm not able to reproduce this error. It might make sense to uninstall and install transformers in a new, clean environment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,183 | closed | RuntimeError: Tensors must be contiguous error while finetuning with deepspeed. | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@stas00 @ArthurZucker @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am just trying to fine-tune "EleutherAI/gpt-neo-1.3B" for causal LM on Google Colab. Without anything else, it gives an out-of-memory error. I was checking what I could do and I found DeepSpeed. I added `deepspeed='ds_config.json'` to my training arguments in a Jupyter notebook and used the configuration from the official page, which is "ds_config_zero2.json".
### Expected behavior
start training. | 01-18-2023 23:29:30 | 01-18-2023 23:29:30 | 1.3B param of weights + grads + optim states in mixed precision would need about `18*1.3=24`GB of memory, plus you need more memory for activations and temps and cuda kernels.
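For reference, the usual per-parameter accounting behind that figure (fp16 mixed precision with Adam) is, as a rough back-of-the-envelope estimate:
```python
# fp16 weights + fp16 grads + fp32 master weights + fp32 Adam momentum + fp32 Adam variance
bytes_per_param = 2 + 2 + 4 + 4 + 4      # = 16, ~18 with buffers and overhead
print(1.3e9 * 18 / 2**30)                # ~21.8 GiB (~23 GB) of model state alone
```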
The free colab account is too limited to do much on it with even a small model. It barely has any cpu memory so no memory to offload to.
You could use deepspeed to offload to local disk (`nvme`), it'll be slow but doable I think if your local disc is large enough. Please see: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nvme-support
<|||||>The other approach is to activate BNB's Adam, so it will cut down on a lot of optim states weights (2 bytes instead of 8) except the embedding params at full 8 bytes. so you will be looking at about 17GB for weights + grads + optim states in mixed precision - but it's still too large for colab without offloading.<|||||>my actual goal is finetuning gpt-j on google colab pro but since google colab uses credits I am experimenting with 1.3B on normal colab. I also used nvme settings with zero3 example but still I got the same error without aio part. if I add aio part I got `ValidationError: 1 validation error for DeepSpeedZeroConfig
aio
extra fields not permitted (type=value_error.extra)`<|||||>Understood.
deepspeed's `nvme` offload requires `libaio`.
As we only integrate deepspeed, any questions about deepspeed functionality itself and errors such as above should be posted at https://github.com/microsoft/DeepSpeed/issues since we aren't the maintainers of deepspeed.
Thank you.<|||||>but still `RuntimeError: Tensors must be contiguous` happens. I saw you made merge about fixing this but that still happens.
<|||||>I'm struggling here with supporting you, @FahriBilici - please kindly read
https://github.com/huggingface/transformers/blob/main/ISSUES.md#the-github-issues
and file a proper issue with the full traceback and invocation command, I will be able to help you then.
Thanks.<|||||>I will share my colab notebook and training set once I prepare.<|||||>I repeat what's needed is the command line and the full traceback. Thank you. <|||||>the full error is
```
The following columns in the training set don't have a corresponding argument in "GPTNeoForCausalLM.forward" and have been ignored: text. If text are not expected by "GPTNeoForCausalLM.forward", you can safely ignore this message.
Detected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)
[2023-01-21 10:17:13,756] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.8.0, git-hash=unknown, git-branch=unknown
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-23-3435b262f1ae>](https://localhost:8080/#) in <module>
----> 1 trainer.train()
10 frames
[/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py](https://localhost:8080/#) in broadcast(tensor, src, group, async_op)
1402 group_src_rank = get_group_rank(group, src)
1403 opts.rootRank = group_src_rank
-> 1404 work = group.broadcast([tensor], opts)
1405 if async_op:
1406 return work
RuntimeError: Tensors must be contiguous
```
my config file is
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
my training code is
```
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments(
output_dir="neo",
evaluation_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=10,
weight_decay=0.01,
gradient_checkpointing=True,
deepspeed='config.json',
report_to=None
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets['train'],
eval_dataset=tokenized_datasets['validation'],
data_collator=data_collator,
)
trainer.train()
```<|||||>ok, clearly we have a miscommunication here. I will try one last time.
To help you we need the **full traceback** and not the last line of it. <|||||>```
The following columns in the training set don't have a corresponding argument in `GPTNeoForCausalLM.forward` and have been ignored: text. If text are not expected by `GPTNeoForCausalLM.forward`, you can safely ignore this message.
Detected ZeRO Offload and non-DeepSpeed optimizers: This combination should work as long as the custom optimizer has both CPU and GPU implementation (except LAMB)
[2023-01-21 10:17:13,756] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.8.0, git-hash=unknown, git-branch=unknown
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-3435b262f1ae> in <module>
----> 1 trainer.train()
10 frames
/usr/local/lib/python3.8/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1526 )
-> 1527 return inner_training_loop(
1528 args=args,
1529 resume_from_checkpoint=resume_from_checkpoint,
/usr/local/lib/python3.8/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1594 )
1595 if args.deepspeed:
-> 1596 deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
1597 self, num_training_steps=max_steps, resume_from_checkpoint=resume_from_checkpoint
1598 )
/usr/local/lib/python3.8/dist-packages/transformers/deepspeed.py in deepspeed_init(trainer, num_training_steps, resume_from_checkpoint, inference)
342 )
343
--> 344 deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
345
346 if resume_from_checkpoint is not None:
/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params)
123
124 if not isinstance(model, PipelineModule):
--> 125 engine = DeepSpeedEngine(args=args,
126 model=model,
127 optimizer=optimizer,
/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params, dont_change_device)
299
300 # Configure distributed model
--> 301 self._configure_distributed_model(model)
302
303 self._get_model_parameters()
/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in _configure_distributed_model(self, model)
1185
1186 if not self.amp_enabled():
-> 1187 self._broadcast_model()
1188
1189 # check if parameters are duplicated in optimizer param_groups
/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py in _broadcast_model(self)
1100 else:
1101 if torch.is_tensor(p) and is_replicated(p):
-> 1102 dist.broadcast(p,
1103 groups._get_broadcast_src_rank(),
1104 group=self.data_parallel_group)
/usr/local/lib/python3.8/dist-packages/deepspeed/comm/comm.py in log_wrapper(*args, **kwargs)
125 # Return the op, then stop the op's timer
126 try:
--> 127 return func(*args, **kwargs)
128 finally:
129 if comms_logger.enabled:
/usr/local/lib/python3.8/dist-packages/deepspeed/comm/comm.py in broadcast(tensor, src, group, async_op, prof, log_name, debug)
230 debug=get_caller_func()):
231 global cdb
--> 232 return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
233
234
/usr/local/lib/python3.8/dist-packages/deepspeed/comm/torch.py in broadcast(self, tensor, src, group, async_op)
68
69 def broadcast(self, tensor, src, group=None, async_op=False):
---> 70 return torch.distributed.broadcast(tensor=tensor,
71 src=src,
72 group=group,
/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py in broadcast(tensor, src, group, async_op)
1402 group_src_rank = get_group_rank(group, src)
1403 opts.rootRank = group_src_rank
-> 1404 work = group.broadcast([tensor], opts)
1405 if async_op:
1406 return work
RuntimeError: Tensors must be contiguous
```<|||||>Excellent. Thank you for providing the full traceback, @FahriBilici
As you can see the issue comes from inside deepspeed and is unrelated to the fix I made earlier even though the error message is the same. Therefore you want to report it here https://github.com/microsoft/DeepSpeed/issues
Alternatively, you can traverse your model before you pass it to the Trainer and ensure that all tensors are contiguous.
Probably something along the lines of:
```
for p in model.parameters():
    p.data = p.data.contiguous()
```
I haven't tested it, this is just an idea to try.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,182 | closed | Exclude the parameters with `requires_grad=False` in the `Trainer` optimizer. | ### Feature request
Attempt to optimize the training for models with weights/parameters that are set to `requires_grad=False`. This is done by excluding these parameters in the optimizer.
### Motivation
I am building a Seq2Seq model where I use a pre-trained model for the encoder. I freeze all the parameters of the encoder by setting `requires_grad=False`. I expected the training to speed up compared to a model where both the encoder and decoder weights are trainable. However, I found that there's no difference in speed and also memory.
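For context, the freezing step mentioned above is simply (assuming a standard seq2seq model exposing `get_encoder()`):
```python
for param in model.get_encoder().parameters():
    param.requires_grad = False
```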
I investigated a bit and found that all the model parameters, regardless of whether gradients are required to be computed, are included in the optimizer https://github.com/huggingface/transformers/blob/00ba7cadd812437708b380ab078a3cfe8cfaff31/src/transformers/trainer.py#L1021-L1030
I tested an idea and subclassed the `Seq2SeqTrainer`. So, I updated the above snippet with this:
```Python
optimizer_grouped_parameters = [
{
# Add here the `p.requires_grad` condition
"params": [p for n, p in opt_model.named_parameters() if (n in decay_parameters and p.requires_grad)],
"weight_decay": self.args.weight_decay,
},
{
# Add here the `p.requires_grad` condition
"params": [p for n, p in opt_model.named_parameters() if (n not in decay_parameters and p.requires_grad)],
"weight_decay": 0.0,
},
]
```
Doing this actually improved both the speed and the memory during the training.
I was wondering if this is something we can add to the codebase. If not, I am curious as to why we shouldn't exclude the parameters that are intended not to be trainable in the optimizer.
### Your contribution
I can make the PR if this is an acceptable change. 🤗 | 01-18-2023 19:46:27 | 01-18-2023 19:46:27 | Sounds like a welcome change! |
transformers | 21,181 | closed | Updates to computer vision section of the Preprocess doc | This PR expands the Computer Vision section of the Preprocess doc to include a small explainer on the difference between image augmentation and image preprocessing, and what ImageProcessor handles. It also refactors the code example to use `ImageProcessor` for normalizing and converting images to tensors instead of `torch.transforms`.
It mentions padding for certain cases (DETR), and the availability of post-processing methods for some models/tasks.
| 01-18-2023 19:45:09 | 01-18-2023 19:45:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,180 | closed | How to use Ipex via transformers | ### Feature request
I saw you implemented Ipex. intel cpu optimisation but there is no any example how to use it via python. Only this example is available:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--use_ipex \
--jit_mode
And this is not very useful for me as I interface via a Python script, not via the command line.
How would i use that here for example:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
### Motivation
documentation
### Your contribution
documentation | 01-18-2023 19:36:04 | 01-18-2023 19:36:04 | Please use the [forums](https://discuss.huggingface.co/) for such question as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,179 | closed | Whisper model adds "!" char in the beginning of each predicted audio transcription | ### System Info
Google Colab instance with Tesla T4
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi @gante @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the link to [Google Colab notebook](https://colab.research.google.com/drive/1VrcO9BW4OSEm8jmQjvzZ8vikMLsX5n3n?usp=sharing)
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import pipeline
pipe = pipeline(
task="automatic-speech-recognition",
model='ales/whisper-small-belarusian',
chunk_length_s=8, stride_length_s=1, device=0,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
language='be', task='transcribe'
)
# run with transformers installed from latest commit: 00ba7cadd812437708b380ab078a3cfe8cfaff31 at the moment.
# all transcriptions have an extra "!" at the beginning!
res = pipe('audio_sample.ogg')
res2 = pipe('audio_sample2.ogg')
print(res)
print(res2)
```
output:
```
{'text': '!Хацеў бы спаткацца з вамі на вуліцу ціхаю зорнаю ночы і сказаць, ці бачыце гэтыя зоркі, ясныя зоркі, іграб лес.!'}
{'text': '!Прывітанне, як вашыя справы.'}
```
### Expected behavior
The problem is that, when using the `ales/whisper-small-belarusian` Whisper model that I've fine-tuned from `openai/whisper-small`, each transcription the model produces now starts with an exclamation mark ("!"). This looks like an error in model decoding.
* The problem occurred when I upgraded my environment. I use `git+https://github.com/huggingface/transformers` as a version specifier to install `transformers` from source. The reason for installing `transformers` from source is that the current latest release `v4.25.1` does not have the needed functionality (e.g. the `WhisperTokenizer` class does not have a `get_decoder_prompt_ids` method).
* Current latest commit in the `transformers` repository is `00ba7cadd812437708b380ab078a3cfe8cfaff31`
* When I deleted my Colab runtime and created a new one, this time installing `transformers` from the older commit `a081f292ca8479eaf66d7396186021268f128829` (see the install command below), transcriptions returned to normal: no exclamation mark at the beginning.
* I guess this error was introduced by one of the recent pull requests merged after commit `a081f292ca8479eaf66d7396186021268f128829`.
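For anyone who wants to reproduce the working behaviour, this is the kind of pinned install I mean (Colab-style, using the same commit hash as above):

```python
!pip install git+https://github.com/huggingface/transformers@a081f292ca8479eaf66d7396186021268f128829
```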
Compare the 2 transcriptions (examples can be found above and in the [Google Colab notebook](https://colab.research.google.com/drive/1VrcO9BW4OSEm8jmQjvzZ8vikMLsX5n3n?usp=sharing)):
* Transcription with `transformers` installed from the latest commit (`00ba7cadd812437708b380ab078a3cfe8cfaff31`):
```
{'text': '!Прывітанне, як вашыя справы.'}
```
* Transcription if we use older commit (`a081f292ca8479eaf66d7396186021268f128829`):
```
{'text': 'Прывітанне, як вашыя справы.'}
```
And this happens to any audiofile I pass to the pipeline.
Thanks! | 01-18-2023 19:02:26 | 01-18-2023 19:02:26 | @ArthurZucker
Could it be the timestamps PR? We did change the `force_bos_token_ids`, but only when we want timestamps, right?<|||||>I think it's related to that indeed, I'll have a look!<|||||>Okay, so the `WhisperTimestampProcessor` is always added to the list of logit processors. This is the cause of the error 😉
See this script where I added
```python
return_timestamps = generate_kwargs.pop("return_timestamps", False)
tokens = self.model.generate(
input_features=model_inputs.pop("input_features"),
logits_processor=[WhisperTimeStampLogitsProcessor()] if return_timestamps else None,
**generate_kwargs,
)
```
in the `_forward` call of the pipeline.
```python
from transformers import pipeline
from datasets import load_dataset
libri = load_dataset("librispeech_asr", f"clean", split="test", cache_dir="/home/arthur_huggingface_co/.cache/huggingface/datasets")
pipe = pipeline(
task="automatic-speech-recognition",
model='openai/whisper-tiny',
chunk_length_s=8, stride_length_s=1, device=0,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
language='fr', task='transcribe'
)
res = pipe(libri[0]["audio"]["array"], return_timestamps=False)
```<|||||>Two possible fixes:
- Both the forward pass and the initialisation should be consistent, so the `return_timestamp` arg should be added to `self.args`.
- Just add this to the generation config or to the parameters of the `_forward` call, as I did.
WDYT @Narsil?<|||||>What about just keeping `return_timestamps` and sending it to both `preprocess` and `_forward`?
It seems odd to push it into `generate_kwargs` if the generation doesn't care about it (directly, I mean).
transformers | 21,178 | closed | Add disclaimer for necessary fake models | # What does this PR do?
This PR adds a disclaimer for the fake models we can't really remove since the canonical checkpoint is very big already. You can see the result in the [preview](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21178/en/model_doc/gptj#transformers.GPTJModel.forward.example).
If this is acceptable, I'll add it to the other places where we want a tiny random model instead of the huge one for the docstrings. | 01-18-2023 18:58:14 | 01-18-2023 18:58:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,177 | closed | Rewrite a couple of lines in the TF XLA doc | This PR makes a quick edit at the top of the TF XLA doc to clarify that for training/inference you can just pass `jit_compile` to `model.compile()`! (cc @sayakpaul ) | 01-18-2023 17:35:21 | 01-18-2023 17:35:21 | (Going to assume this one is small enough to merge without @sgugger approval, but feel free to yell at me if it wasn't!)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
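As a minimal sketch of what this looks like in practice (the checkpoint and optimizer below are illustrative, not taken from the doc):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
# XLA-compile the train/predict steps; transformers TF models can compute their own loss
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), jit_compile=True)
```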
transformers | 21,176 | closed | FlaxGPTNeoForCausalLM not working properly with fp16 when using left padding. | ### System Info
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-01-18 15:47:59.442290: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hello there, I am having a bit of trouble successfully generating text (beam search) with the FlaxGPTNeoForCausalLM model when using fp16.
I provide two colab notebooks to replicate this issue:
- torch version, which works fine both on fp32 and fp16: https://colab.research.google.com/drive/15Fy3VmTfUVGGGC1NAGP8p_DqZajDzxZk?usp=sharing
- flax version, which fails on fp16: https://colab.research.google.com/drive/1t588H8_1SGSj6g1yVXgkeRiIvsxQiOKA?usp=sharing
Very briefly in torch I am converting the model to fp16 by doing this: `GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", torch_dtype=torch.float16)`, while in Flax I am doing the following:
```python
jax_model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", dtype=jax.numpy.float16)
jax_model.params = jax_model.to_fp16(jax_model.params)
```
For both cases, I am using the following sentences as input **with** left padding:
```
texts = ["My name is Julien and I like to", "Why float16 is giving such a strange outputs?"]
```
Output of the torch version:
```python
['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>My name is Julien and I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you Julien. I like to call you',
'<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Why float16 is giving such a strange outputs?\n\nA:\n\nfloat16 is giving such a strange outputs?\n\nYes, it does.\n\nA:\n\nYes, it does.\n\nA:\n\nYes, it does.\n\nA:\n\n']
```
Output of the flax version:
```python
['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>My name is Julien and I like to!<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>',
'<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>Why float16 is giving such a strange outputs?!<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>']
```
As you can see, in the case of Flax I always get the `!` token (which corresponds to id 0) followed by the `<|endoftext|>` token (id 50256). Strangely, if I don't do the left padding and process each sentence individually, I get the same output as the torch version.
### Expected behavior
Basically, I want to have the equivalent of `GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", torch_dtype=torch.float16)` but in Flax. So, from the [docs](https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained.dtype), I get that using `to_fp16()` converts the model params to fp16 and changing the dtype to `jnp.float16` forces the computation to be in fp16. However, when I set `dtype=jnp.float16` and use left padding, the generation does not work properly. If instead I just use `to_fp16()` to convert the params and leave `dtype=jnp.float32`, the code works properly, but it is two times slower than the pytorch version, which means it is not truly fp16.
I also want to add that this issue only seems to appear when I add left padding to the inputs in Flax.
Any idea why this is happening?
P.S. I am also not sure if what I am doing is correct, but I couldn't find anything similar to this issue.
**UPDATE**
I also noticed that in the pytorch version, if I use the default padding behaviour (right padding) I get the following warning, which **does not appear** in flax.
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
```
So I tried using right padding in the case of Flax and, to my surprise, it worked! It gave me the same outputs as the left-padded version in torch.
I do not understand if this behaviour is intended or not, but I find it to be a bit confusing, since I believe that the left padding would make more sense. | 01-18-2023 17:03:29 | 01-18-2023 17:03:29 | Hey @T-Almeida 👋 To be candid with you, I've never played with Flax + generate + fp16, so I can't confirm whether it is a model, generate, or flax issue without a deep dive :) In any case, I can tell you that we've stopped development on Flax, and that is why you won't see newer features there (such as the left padding warning).
With decoder-only models, you must use left padding. Otherwise, the first generated token will take `<PAD>` as the previous token and will return different results (the results you get with right padding are slightly different if you pay attention).
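In practice that usually means something like this on the tokenizer side (a small sketch; GPT-Neo has no pad token by default, so one has to be assigned):

```python
from transformers import AutoTokenizer

texts = ["My name is Julien and I like to", "Why float16 is giving such a strange outputs?"]

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the padding token
inputs = tokenizer(texts, padding=True, return_tensors="np")  # numpy arrays for the Flax model
```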
______________________________________
@sanchit-gandhi nvm, I've located the bug. Numerical masking strikes again!
(OLD: TL;DR before going into debug mode, a quick check with you :D Flax + generate + fp16 on GPTNeo returns gibberish, where fp32 works fine. Have you seen anything similar before? There is also a chance that the example makes incorrect use of fp16.)
<|||||>Hi @gante, thanks for the feedback. You are right: with right padding the outputs are indeed different (my bad). Sadly, I also notice that the Flax generate API is the one that lacks the most features compared with torch or tf (especially the contrastive_search method, which I can only use in torch, because there is no TFGPTneo... implementation :'( )
Related to this issue: when using left padding with fp16, the model outputs nan logits when predicting the next token; check the snippet below:
```python
outs = jax_model(input_ids, attention_mask=attention_mask).logits
DeviceArray([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]],
[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]], dtype=float16)
```
So, the issue should be in the model call and not in the generation?<|||||>@T-Almeida thank you for your comment! It turns out that I've seen this pattern of `nan` this week, and the same fix applies here. Will open a PR soon ;)
Re TFGPTNeo, we are always open to contributions 🙏 I'd be happy to guide anyone that'd like to contribute!<|||||>Hey @T-Almeida,
Sounds like you've done a good job at assimilating the different Flax dtype terms (which isn't straightforward)! And cool to see that you're running JAX on GPU!
As you've correctly specified, `to_fp16()` will convert all the Flax model params to float16, but will leave the computations untouched (i.e. they remain in float32 precision). We need to specify `dtype=jnp.float16` to ensure our forward pass is also done in float16 precision.
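Putting both together, the intended setup looks roughly like this (same checkpoint as in your snippet):

```python
import jax.numpy as jnp
from transformers import FlaxGPTNeoForCausalLM

# float16 parameters *and* float16 computation
model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", dtype=jnp.float16)
model.params = model.to_fp16(model.params)
```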
Looks like @gante is on the case with fixing the Flax attention masks (which seems to be the problem here)!<|||||>@T-Almeida merged, if you install the development version of transformers it should work :)<|||||>I can confirm that now it is working properly! Thanks a lot @gante, @sanchit-gandhi for the really quick fix!
|
transformers | 21,175 | closed | Fix `Mask2FormerForUniversalSegmentation` and failed tests | # What does this PR do?
For `Mask2FormerForUniversalSegmentation`, the `test_torchscript_xxx` tests fail due to `auxiliary_logits` in `Mask2FormerForUniversalSegmentationOutput`. This is a list of dicts of tensors, which is not supported by torchscript tracing.
The related tests will pass if we don't output this value. Currently, we have
```python
output_auxiliary_logits = (
self.config.use_auxiliary_loss if output_auxiliary_logits is None else output_auxiliary_logits
)
```
where `use_auxiliary_loss` is `True`. However, this seems strange to me, and I believe it should be `self.config.output_auxiliary_logits` instead, so I changed it (see the sketch below).
This also fixes the related tests, as `output_auxiliary_logits` is `None` during the tests.
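For reference, a minimal sketch of the changed check (the guard at the end is illustrative, not the exact diff):

```python
# Only emit auxiliary logits when explicitly requested
output_auxiliary_logits = (
    self.config.output_auxiliary_logits if output_auxiliary_logits is None else output_auxiliary_logits
)
if not output_auxiliary_logits:
    auxiliary_logits = None  # keeps the traced output free of list-of-dict values
```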
| 01-18-2023 16:50:47 | 01-18-2023 16:50:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,174 | closed | Bump torch from 1.6.0 to 1.13.1 in /examples/research_projects/lxmert | Bumps [torch](https://github.com/pytorch/pytorch) from 1.6.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.6.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:53 | 01-18-2023 16:14:53 | OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21174). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,173 | closed | Bump future from 0.18.2 to 0.18.3 in /examples/research_projects/visual_bert | Bumps [future](https://github.com/PythonCharmers/python-future) from 0.18.2 to 0.18.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/PythonCharmers/python-future/releases">future's releases</a>.</em></p>
<blockquote>
<h2>v0.18.3</h2>
<p>This is a minor bug-fix release containing a number of fixes:</p>
<ul>
<li>Backport fix for bpo-38804 (c91d70b)</li>
<li>Fix bug in fix_print.py fixer (dffc579)</li>
<li>Fix bug in fix_raise.py fixer (3401099)</li>
<li>Fix newint bool in py3 (fe645ba)</li>
<li>Fix bug in super() with metaclasses (6e27aac)</li>
<li>docs: fix simple typo, reqest -> request (974eb1f)</li>
<li>Correct <strong>eq</strong> (c780bf5)</li>
<li>Pass if lint fails (2abe00d)</li>
<li>Update docker image and parcel out to constant variable. Add comment to update version constant (45cf382)</li>
<li>fix order (f96a219)</li>
<li>Add flake8 to image (046ff18)</li>
<li>Make lint.sh executable (58cc984)</li>
<li>Add docker push to optimize CI (01e8440)</li>
<li>Build System (42b3025)</li>
<li>Add docs build status badge to README.md (3f40bd7)</li>
<li>Use same docs requirements in tox (18ecc5a)</li>
<li>Add docs/requirements.txt (5f9893f)</li>
<li>Add PY37_PLUS, PY38_PLUS, and PY39_PLUS (bee0247)</li>
<li>fix 2.6 test, better comment (ddedcb9)</li>
<li>fix 2.6 test (3f1ff7e)</li>
<li>remove nan test (4dbded1)</li>
<li>include list test values (e3f1a12)</li>
<li>fix other python2 test issues (c051026)</li>
<li>fix missing subTest (f006cad)</li>
<li>import from old imp library on older python versions (fc84fa8)</li>
<li>replace fstrings with format for python 3.4,3.5 (4a687ea)</li>
<li>minor style/spelling fixes (8302d8c)</li>
<li>improve cmp function, add unittest (0d95a40)</li>
<li>Pin typing==3.7.4.1 for Python 3.3 compatiblity (1a48f1b)</li>
<li>Fix various py26 unit test failures (9ca5a14)</li>
<li>Add initial contributing guide with docs build instruction (e55f915)</li>
<li>Add docs building to tox.ini (3ee9e7f)</li>
<li>Support NumPy's specialized int types in builtins.round (b4b54f0)</li>
<li>Added r""" to the docstring to avoid warnings in python3 (5f94572)</li>
<li>Add <strong>subclasscheck</strong> for past.types.basestring (c9bc0ff)</li>
<li>Correct example in README (681e78c)</li>
<li>Add simple documentation (6c6e3ae)</li>
<li>Add pre-commit hooks (a9c6a37)</li>
<li>Handling of <strong>next</strong> and next by future.utils.get_next was reversed (52b0ff9)</li>
<li>Add a test for our fix (461d77e)</li>
<li>Compare headers to correct definition of str (3eaa8fd)</li>
<li><a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/322">#322</a> Add support for negative ndigits in round; additionally, fixing a bug so that it handles passing in Decimal properly (a4911b9)</li>
<li>Add tkFileDialog to future.movers.tkinter (f6a6549)</li>
<li>Sort before comparing dicts in TestChainMap (6126997)</li>
<li>Fix typo (4dfa099)</li>
<li>Fix formatting in "What's new" (1663dfa)</li>
<li>Fix typo (4236061)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/PythonCharmers/python-future/commit/af1db970b0879b59e7aeb798c27a623144561cff"><code>af1db97</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/613">#613</a> from PythonCharmers/lwan/0.18.3-release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/079ee9b75441d36447cec9981fa1b0032862f64d"><code>079ee9b</code></a> Prepare for 0.18.3 release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/02f7a8143d5b68f50a1cca44d8f5a58c1925a515"><code>02f7a81</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/610">#610</a> from wshanks/wshanks-patch-1</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c91d70b34ef0402aef3e9d04364ba98509dca76f"><code>c91d70b</code></a> Backport fix for bpo-38804</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/80523f383fbba1c6de0551e19d0277e73e69573c"><code>80523f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/569">#569</a> from jmadler/master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/5e5af71549c7a7fa0e28f881046e081e231e455d"><code>5e5af71</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/582">#582</a> from r3m0t/patch-6</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/17e4bbd7c676a9a8efd20601e51675c95f74b330"><code>17e4bbd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/596">#596</a> from abjonnes/fix-print-trailing-comma</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/1b427ba70191927706282840835e31ae0733ee7e"><code>1b427ba</code></a> Merge branch 'xZise-official-count' into master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c8eb497336c76d300c6753b47c7f5de505660d7a"><code>c8eb497</code></a> Merge branch 'official-count' of <a href="https://github.com/xZise/python-future">https://github.com/xZise/python-future</a> into ...</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/dffc579dbb7c882fc01fa0c0dfa6b59acef7827d"><code>dffc579</code></a> Fix bug in fix_print.py fixer</li>
<li>Additional commits viewable in <a href="https://github.com/PythonCharmers/python-future/compare/v0.18.2...v0.18.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:53 | 01-18-2023 16:14:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21173). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,172 | closed | Bump torch from 1.6.0 to 1.13.1 in /examples/research_projects/visual_bert | Bumps [torch](https://github.com/pytorch/pytorch) from 1.6.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.6.0...v1.13.1">compare view</a></li>
</ul>
</details>
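A quick sanity check after installing the bumped requirement — a minimal sketch; the exact requirements file this PR touches is not shown in this excerpt:

```python
# e.g. `pip install "torch==1.13.1"` or reinstall the affected requirements file first
import torch

# Confirm the interpreter picks up the bumped version
# (local builds may append a suffix such as "+cu117").
assert torch.__version__.startswith("1.13.1"), torch.__version__
print(torch.__version__)
```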
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:48 | 01-18-2023 16:14:48 | OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21172). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,171 | closed | Bump torch from 1.11.0 to 1.13.1 in /examples/research_projects/decision_transformer | Bumps [torch](https://github.com/pytorch/pytorch) from 1.11.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
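As a rough illustration of the first fix listed in the release notes above (#88669), a minimal sketch of the configuration involved — the sizes and tensors here are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# bias=False together with batch_first=True is the combination from issue #88669;
# it reportedly raised a RuntimeError during inference on 1.13.0 and works again on 1.13.1.
attn = nn.MultiheadAttention(embed_dim=16, num_heads=4, bias=False, batch_first=True)
attn.eval()

x = torch.randn(2, 10, 16)  # (batch, seq, embed) because batch_first=True
with torch.no_grad():
    out, _ = attn(x, x, x)
print(out.shape)  # torch.Size([2, 10, 16])
```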
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>The following requirements need to be met prior to the final RC cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.11.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:43 | 01-18-2023 16:14:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it. |
transformers | 21,170 | closed | Bump torch from 1.11.0 to 1.13.1 in /examples/research_projects/codeparrot | Bumps [torch](https://github.com/pytorch/pytorch) from 1.11.0 to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple’s new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
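The highlights above mention that functorch now ships inside the main torch package. A minimal sketch of using its `vmap` transform after this bump — the `dot` function and tensor shapes are just placeholders:

```python
import torch
import functorch  # bundled with torch >= 1.13, no separate install needed

def dot(a, b):
    return (a * b).sum()

# vmap vectorizes `dot` over the leading batch dimension of both inputs
batched_dot = functorch.vmap(dot)

x = torch.randn(8, 5)
y = torch.randn(8, 5)
print(batched_dot(x, y).shape)  # torch.Size([8])
```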
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>The following requirements need to be met prior to the final RC cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones (for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>) before the first RC cut is completed. After the RC cut is completed, the following script should be executed from the builder repo in order to validate the presence of the fixes in the release branch:</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pytorch/pytorch/commit/49444c3e546bf240bed24a101e747422d1f8a0ee"><code>49444c3</code></a> [BE] Do not package caffe2 in wheel (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87986">#87986</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90433">#90433</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/56de8a39c595777f35e342a7cde9d602d57cca32"><code>56de8a3</code></a> Add manual cuda deps search logic (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90411">#90411</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90426">#90426</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/a4d16e0fb670246f18d8c07396808cd5e3766f0b"><code>a4d16e0</code></a> Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90104">#90104</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/80abad3e7460415efe480ab21c1d5c90fc345a27"><code>80abad3</code></a> Handle Tensor.<strong>deepcopy</strong> via clone(), on IPU (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89129">#89129</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89999">#89999</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/73a852acd7946dff8beb818ec723ffa453e7b242"><code>73a852a</code></a> [Release only change] Fix rocm5.1.1 docker image (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90321">#90321</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/029ec163f2b3a7c46ccb3e8d8b377c9319db463a"><code>029ec16</code></a> Add platform markers for linux only extra_install_requires (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88826">#88826</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89924">#89924</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/197c5c0b849cfdb4f6844f90c49bb8adba85e1bb"><code>197c5c0</code></a> Fix cuda/cpu check on NoneType (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90068">#90068</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aadbeb7416e20a9be694f1da415626135c5c1097"><code>aadbeb7</code></a> Make TorchElastic timer importable on Windows (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88522">#88522</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/90045">#90045</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/aa9443306a3ba6e8412e24dd99d17eab3f90e818"><code>aa94433</code></a> Mark IPU device as not supports_as_strided (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89130">#89130</a>) (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89998">#89998</a>)</li>
<li><a href="https://github.com/pytorch/pytorch/commit/59b4f3be3bd073b1243e20284fbd09ff43bc66f5"><code>59b4f3b</code></a> Use the Python frame safely in _pythonCallstack (<a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89997">#89997</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pytorch/pytorch/compare/v1.11.0...v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:43 | 01-18-2023 16:14:43 | OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21170). All of your documentation changes will be reflected on that endpoint. |