repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 19,262 | closed | Restructure DETR post-processing, return prediction scores | # What does this PR do?
- Restructures `DetrFeatureExtractor.post_process_instance_segmentation` and `DetrFeatureExtractor.post_process_panoptic_segmentation` methods, adds `score` key to output dictionaries
- Updates feature extractor tests
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
This was discussed on [Notion](https://www.notion.so/Post-processing-of-vision-models-33a8165333144581862014d83e96191b?d=462e95b396fc4ea49ffa602146bd91f5#d152a0d70b4c4cf599f642e9b891dba1).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-30-2022 16:21:19 | 09-30-2022 16:21:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,261 | closed | Skip `BloomEmbeddingTest.test_embeddings` for PT < 1.10 | # What does this PR do?
The test `BloomEmbeddingTest.test_embeddings` loads a pretrained model in `bfloat16`. For PyTorch < 1.10, we get
```bash
RuntimeError: "LayerNormKernelImpl" not implemented for 'BFloat16'
```
so let's skip it to keep the Past CI clean | 09-30-2022 15:52:12 | 09-30-2022 15:52:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik For this case, it is specific to this test (the checkpoint is `bigscience/bigscience-small-testing`, which has `bfloat16` in its config file).
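For illustration, a version gate along these lines is enough to keep the test from running on older PyTorch; this is a hypothetical sketch, not necessarily the exact decorator used in the PR:
```python
import unittest

import torch
from packaging import version


class BloomEmbeddingTest(unittest.TestCase):
    @unittest.skipUnless(
        version.parse(torch.__version__) >= version.parse("1.10"),
        "LayerNorm has no bfloat16 kernel before PyTorch 1.10",
    )
    def test_embeddings(self):
        ...  # load the bfloat16 checkpoint and run the embedding checks
```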
|
transformers | 19,260 | closed | Issue with Fairseq BART checkpoint conversion in Fairseq 0.12 | Hi, I pretrained a BART model using Fairseq 0.12.2. Then, I tried to convert it using the official script, https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py, and it didn't work out of the box due to unexpected and missing keys in the state dict. Changes I have introduced:
1. Rename keys with `_weight` to `.weight`, and the same with bias. Ok, no doubts here.
2. `decoder.output_projection.weight` is not present in HF's model. Check that `torch.isclose(state_dict["decoder.embed_tokens.weight"], state_dict["decoder.output_projection.weight"])`, and then delete `decoder.output_projection.weight` (it seems to be an additional shared weight present in Fairseq's dictionary). I think it's ok but not 100% sure.
3. Now, the most problematic part: Fairseq's BART has 4 additional parameter sets for each layer: `encoder.layers.0.in_proj_weight, encoder.layers.0.in_proj_bias, encoder.layers.0.out_proj_weight, encoder.layers.0.out_proj_bias`. They are declared here: https://github.com/facebookresearch/fairseq/blob/v0.12.2/fairseq/modules/transformer_layer.py#L75. If I'm not wrong, they are only used if BT (BetterTransformer, check: https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) is being used: https://github.com/facebookresearch/fairseq/blob/v0.12.2/fairseq/modules/transformer_layer.py#L308. Apparently, it's not in my case (because BT is only supported with torch >=1.12). So, if I understand correctly, I can safely delete these keys, which I did.
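Putting the three changes together, here is a rough sketch of the state-dict surgery (illustrative only, not the official conversion script). It assumes the usual Fairseq checkpoint layout with the weights stored under the `"model"` key, and it drops the BetterTransformer-only keys before the generic rename so they are not caught by it:
```python
import torch

# Path is a placeholder for the pretrained Fairseq checkpoint.
state_dict = torch.load("checkpoint_best.pt", map_location="cpu")["model"]

# Step 3: drop the BetterTransformer-only fused-attention parameters.
bt_suffixes = ("in_proj_weight", "in_proj_bias", "out_proj_weight", "out_proj_bias")
for key in list(state_dict.keys()):
    if key.endswith(bt_suffixes):
        del state_dict[key]

# Step 2: drop the shared output projection after checking it matches the embedding matrix.
assert torch.allclose(
    state_dict["decoder.embed_tokens.weight"], state_dict["decoder.output_projection.weight"]
)
del state_dict["decoder.output_projection.weight"]

# Step 1: rename the remaining `*_weight` / `*_bias` keys to `*.weight` / `*.bias`.
for key in list(state_dict.keys()):
    if key.endswith("_weight"):
        state_dict[key[: -len("_weight")] + ".weight"] = state_dict.pop(key)
    elif key.endswith("_bias"):
        state_dict[key[: -len("_bias")] + ".bias"] = state_dict.pop(key)
```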
After these modifications, the state dict can be loaded. The assertion `assert fairseq_output.shape == new_model_outputs.shape` is ok, but the assertion `assert (fairseq_output == new_model_outputs).all().item()` isn't. The outputs seem to be similar enough at first glance:
```
>> fairseq_output
tensor([[[-1.8069, -0.4117, -0.9243, ..., 0.6273, -1.3608, 0.1720],
[ 2.0357, 1.1873, -1.8202, ..., 0.8068, -1.3662, 0.1693],
[ 5.2753, -0.7177, 0.6229, ..., 0.0101, -1.6409, 1.5729],
...,
[-1.0434, 3.7756, -3.0025, ..., -3.6483, 1.4942, 4.0406],
[-1.1989, 4.8781, -2.4865, ..., -4.0379, 0.7374, 3.6970],
[-1.0135, 4.2036, -2.6940, ..., -3.5853, 1.5794, 4.0704]]])
>> new_model_outputs
tensor([[[-1.8047, -0.4085, -0.9251, ..., 0.6256, -1.3601, 0.1714],
[ 2.0344, 1.1864, -1.8195, ..., 0.8076, -1.3673, 0.1676],
[ 5.2745, -0.7172, 0.6230, ..., 0.0093, -1.6411, 1.5726],
...,
[-1.0617, 3.6596, -2.9122, ..., -3.6041, 1.4983, 4.0248],
[-1.2374, 4.8001, -2.4190, ..., -3.9873, 0.7194, 3.6830],
[-1.0299, 4.0773, -2.6344, ..., -3.5592, 1.6123, 4.0465]]])
```
But `torch.isclose` says otherwise, even when the tolerance is set to very high values, e.g.:
```
torch.isclose(fairseq_output, new_model_outputs, atol=1e-01)
tensor([[[ True, True, True, ..., True, True, True],
[ True, True, True, ..., True, True, True],
[ True, True, True, ..., True, True, True],
...,
[ True, False, True, ..., True, True, True],
[ True, True, True, ..., True, True, True],
[ True, False, True, ..., True, True, True]]])
```
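For reference, a quicker way to quantify the worst-case mismatch than reading the element-wise mask is to print the maximum absolute difference between the two tensors shown above (a one-off snippet reusing the variables from this issue):
```python
# Largest element-wise deviation between the Fairseq and converted-model outputs.
max_abs_diff = (fairseq_output - new_model_outputs).abs().max().item()
print(max_abs_diff)
```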
Trying to generate tokens with BartForConditionalGeneration seems to be leading to gibberish. Any help will be much appreciated, thanks in advance!
# System Info
```
transformers '4.22.2'
torch '1.10.2+cu113'
fairseq '0.12.2'
```
Tagging @patil-suraj
| 09-30-2022 15:20:07 | 09-30-2022 15:20:07 | Apparently, the issue was related to the tokenizer, and now all assertions hold, meaning that the changes I made do work, in case anyone is interested. Closing the issue. |
transformers | 19,259 | closed | Add ORT training notebooks to transformers' catalog | # What does this PR do?
Add the `ORTTrainer` examples for Optimum in transformers' notebook catalog.
| 09-30-2022 13:44:35 | 09-30-2022 13:44:35 | |
transformers | 19,258 | closed | Fix pytorch seq2seq qa | I created this pull request based on our discussion https://github.com/huggingface/transformers/issues/15398#issuecomment-1262926868
I also fixed the typo in the README for the SQuAD feature. I then noticed that [trainer_seq2seq_qa.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/trainer_seq2seq_qa.py) has changed since my local copy, which caused me an issue; below you can see the slight adjustment, which is basically similar to how it was before.
I had been using the [run_seq2seq_qa.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_seq2seq_qa.py) file with local text files, slightly adjusting the code, so I tried `--do_predict` in my local setup, but not with an online dataset.
| 09-30-2022 13:25:25 | 09-30-2022 13:25:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pinging @sgugger for final approval.
`test_run_swag_no_trainer` failed but it is not related to this PR :thinking: |
transformers | 19,257 | closed | add return_tensor parameter for feature extraction | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #10016
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
Addresses [stale issue #10016](https://github.com/huggingface/transformers/issues/10016). Please review @LysandreJik and @Narsil. Thanks. | 09-30-2022 13:07:30 | 09-30-2022 13:07:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Feedback has been addressed :)<|||||>Not sure what's happening with CircleCI... <|||||>Last suggestion was addressed, but CircleCI Pipeline still seems broken...<|||||>@ajsanjoaquin do you have circleCI set up on your account/branch ? I remember people sometimes having issues because of this.
Otherwise, could you try to rebase on `main`?
Pinging @LysandreJik, who might know more.<|||||>The CircleCI pipeline seems to be working again!<|||||>For the quality check, can you try
```
pip install transformers[dev]
make fixup
```
?<|||||>@Narsil when running `make fixup`, I get the following error. Perhaps it's possible to replicate on your end?
```
-n was unexpected at this time.
make: *** [Makefile:10: modified_only_fixup] Error 255
```<|||||>> -n was unexpected at this time.
What OS/Linux flavor are you running on? And which shell?
Seems like `test -n` is not supported in your environment.<|||||>I'm in a temporary dev environment, so I'm using Windows 😅. I'll use WSL next time. So the formatting error was simply a result of missing newline before importing torch and tf?<|||||>> I'm in a temporary dev environment, so I'm using Windows sweat_smile. I'll use WSL next time. So the formatting error was simply a result of missing newline before importing torch and tf?
We use `black` and `isort` to normalize the code as much as possible, so yes, import ordering is handled automatically for you.<|||||>@Narsil can this PR be merged soon once it passes all tests? I just resolved a conflict made by a different commit on the same file.<|||||>@ajsanjoaquin
It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) ?<|||||>@Narsil the `check_repository_consistency` test is failing because of a .md file unrelated to my PR. Do I have to address this in this PR?
`check_code_quality` is failing again and I can't use `make fixup`; could you re-apply your fix for the latter?
I hope this can be passed soon before another PR like #19382 modifies the exact code I was working on...<|||||>This PR was based on an old fork of the repo and as such, the pipeline tests were not run. They do not pass, so I will revert the commit. Could you open a new PR with fixed tests?<|||||>I am confused, should we re-open a PR or start from here https://github.com/huggingface/transformers/pull/19679
(I tried re-opening a PR but it resulted in a no-op...) |
transformers | 19,256 | closed | Docs - Guide to add a new TensorFlow model | # What does this PR do?
Adds a guide on how to add a new TensorFlow model (architecture and weights), as well as a few tips to debug TF<>PT mismatches. This means that we can stop relying on ad hoc tips and instructions to add TF models, and point towards this guide instead 🙌
I've also updated the test paths in the guide to add new models, which were outdated, and added a cheeky "from" in the "Converting from TensorFlow checkpoints" guide title to clearly distinguish it from the guide I'm adding in this PR.
Review suggestion: read the [live docs](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19256/en/add_tensorflow_model) themselves :) | 09-30-2022 12:42:58 | 09-30-2022 12:42:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Awesome! I will merge the PR soon and make some guide-related announcements next Monday 🙌 |
transformers | 19,255 | closed | Export TensorFlow models to ONNX with dynamic input shapes | # What does this PR do?
This PR exports TensorFlow models to ONNX with dynamic input shapes. Previously they were exported with static input shapes (batch size 2, sequence length 8). This should bring the TensorFlow-to-ONNX export mostly into parity with PyTorch models.
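For readers unfamiliar with the mechanism: dynamic dimensions in a TensorFlow-to-ONNX conversion are expressed as `None` entries in the input signature handed to `tf2onnx`. The sketch below is purely illustrative (the real export path in `transformers.onnx` builds the signature internally and may differ in details such as input names and opset):
```python
import tensorflow as tf
import tf2onnx
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-cased")

# `None` marks the batch and sequence dimensions as dynamic in the exported graph.
input_signature = [
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),
    tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
]
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=input_signature, opset=13)
```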
Fixes https://github.com/huggingface/transformers/issues/19238
* While fixing this, I noticed the TensorFlow to ONNX export tests weren't actually exporting TensorFlow models because `FeaturesManager.get_model_class_for_feature` returns a PyTorch model class by default. I've exposed a `framework` argument on these tests so that `FeaturesManager.get_model_class_for_feature` can return TensorFlow models. *NOTE*: Exporting TensorFlow to ONNX seems to be much slower than exporting PyTorch to ONNX so CI duration will increase
* I've changed `validate_model_outputs` to check with a batch size/sequence length different than used during export (now 3 and 9 respectively). There was a TODO about this, but it surfaced an error for BERT, CamemBERT, and RoBERTa multiple-choice tasks `onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'tf_bert_for_multiple_choice/bert/encoder/layer_._0/attention/self/add_1' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/element_wise_ops.h:503 void onnxruntime::BroadcastIterator::Init(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1`, I suspect due to the way these models are defined (tracing fails to properly infer shape somewhere). IMO this is still a net improvement since the ONNX models exported under TensorFlow were previously non-functional except with their static input shapes. I'm skipping these specific configurations during testing for now, but someone should look into this
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@Rocketknight1, @LysandreJik, @lewtun
| 09-30-2022 12:40:42 | 09-30-2022 12:40:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Can you please confirm that the slow tests pass by running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py`
There were a few failures here (`16 failed, 400 passed, 16 skipped, 72972 warnings`):
```
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_051_deberta_v2_question_answering - onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Deserialize tensor onn...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_050_deberta_v2_multiple_choice - AssertionError: deberta-v2, multiple-choice -> Outputs values doesn't match between reference model and ONN...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_078_groupvit_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_109_owlvit_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_110_perceiver_image_classification - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTIO...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_111_perceiver_masked_lm - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zer...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_112_perceiver_sequence_classification - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEP...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_078_groupvit_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_125_roformer_multiple_choice - onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got ...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_125_roformer_multiple_choice - onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMEN...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_109_owlvit_default - TypeError: generate_dummy_inputs() got an unexpected keyword argument 'batch_size'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_110_perceiver_image_classification - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_111_perceiver_masked_lm - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION :...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_112_perceiver_sequence_classification - onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTI...
```
* `clip`, `groupvit`, and `owlvit` should be easy fixes to expose the relevant args (or consume via **kwargs) in their `generate_dummy_inputs`
* `deberta` is failing with my environment on `[49d62b0](https://github.com/dwyatte/transformers/commit/49d62b01783416a89acc0b865f7cb8dbab87cd6b)` which I branched from
* `perceiver` and `roformer` are real errors, but seem to be due to static input shapes e.g.,
```
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_57' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{3,256,256}, requested shape:{2,256,8,32}
```
```
E onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: token_type_ids for the following indices
E index: 2 Got: 17 Expected: 15
E Please fix either the inputs or the model.
```
A couple of options:
* Disable tests for `deberta`, `perceiver`, and `roformer` for this PR while we figure out what's going on there
* Don't include the code that automatically adds 1 to the batch size and sequence length during validation in this PR
* Refactor the code to pass in `batch_size` and `seq_length` to `validate_model_outputs` to give more control over which models are tested with dynamic input shapes
What do you think / any other ideas?
<|||||>> It would also be interesting to know how much slower the TF exports are compared to the PyTorch ones, e.g. can you share some timings for a few models?
* `bert-base-cased`
* **TensorFlow**: 7 passed, 6 skipped, 19031 warnings in **525.40s (0:08:45)**
* **PyTorch**: 7 passed, 6 skipped, 9 warnings in **83.50s (0:01:23)**
* `hf-internal-testing/tiny-albert`
* **TensorFlow**: 6 passed, 6 skipped, 4059 warnings in **28.33s**
* **PyTorch**: 6 passed, 6 skipped, 9 warnings in **14.30s**
* `distilbert-base-cased`
* **TensorFlow**: 6 passed, 6 skipped, 10241 warnings in **293.32s (0:04:53)**
* **PyTorch**: 6 passed, 6 skipped, 15 warnings in **40.01s**
So TF is around 2-8x slower on my machine (`2.3 GHz 8-Core Intel Core i9`). The warnings are mainly deprecation warnings from `tf2onnx`<|||||>@lewtun any further thoughts on this PR with the goal of supporting dynamic input shapes in ONNX models exported from TensorFlow?
It's not clear to me how `tests/onnx/test_onnx_v2.py` is used since it doesn't block checks here. Should we skip model/task/framework configurations known to fail a la https://github.com/huggingface/transformers/blob/6b36673779e0dcae583a2770ac3c9354e2fcbebf/tests/onnx/test_onnx_v2.py#L300-L303 Or is it ok to leave the failures if they don't block anything? Is the increase in test time for TF models a concern if it doesn't run regularly?
I suppose part of the answer is whether we want users to experience export failures related to dynamic shapes (which the current code in this PR would do) vs removing explicit dynamic shape validation from the user experience and limiting it to tests.
<|||||>Hey @dwyatte, thanks for sharing the timings! I'm currently working on dramatically shrinking all the ONNX models we use for internal testing, so a 2-8x slowdown for some models is probably OK.
Regarding how to handle the model validation:
> A couple of options:
>
> * Disable tests for `deberta`, `perceiver`, and `roformer` for this PR while we figure out what's going on there
> * Don't include the code that automatically adds 1 to the batch size and sequence length during validation in this PR
> * Refactor the code to pass in `batch_size` and `seq_length` to `validate_model_outputs` to give more control over which models are tested with dynamic input shapes
I am in favour of option (1) and creating a separate issue to figure out what's wrong in the ONNX export of these 3 models. You can skip these tests by following the same logic you linked to above :)<|||||>> I am in favour of option (1) and creating a separate issue to figure out what's wrong in the ONNX export of these 3 models. You can skip these tests by following the same logic you linked to above
Created https://github.com/huggingface/transformers/issues/19357 to track this. `tests/onnx/test_onnx_v2.py` should now be 100% passing/skipped (416 passed, 16 skipped in my env)<|||||>There was some problem with CircleCI which only ran part of the test suite (and I can't manually re-run it). Could you push an empty commit on your branch (`git commit -m "Trigger CI" --allow-empty`)?<|||||>> There was some problem with CircleCI which only ran part of the test suite (and I can't manually re-run it). Could you push an empty commit on your branch (git commit -m "Trigger CI" --allow-empty)?
I think I was having the same problem described here https://github.com/huggingface/transformers/pull/18351#issuecomment-1263565031
[9496836](https://github.com/huggingface/transformers/pull/19255/commits/949683675d83cc38620106626822279cd45b076b) ran the CI under the [huggingface org](https://app.circleci.com/pipelines/github/huggingface/transformers/48731/workflows/394ccf4c-ad7f-4904-9ad3-0fe915e63f8d), so should be good to go now<|||||>Thanks! |
transformers | 19,254 | closed | Add onnx support for VisionEncoderDecoder | # What does this PR do?
Fixes #14812
This PR enables the export of VisionEncoderDecoder models to ONNX.
A VisionEncoderDecoder model contains two parts, a vision transformer encoder and a language modeling decoder. The two parts are exported to ONNX separately, as `encoder_model.onnx` and `decoder_model.onnx`.
To enable the export of the model, the export call in the `__main__` file is split based on the model kind.
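Once the two files are exported (see Usage below), they can be consumed roughly as follows with `onnxruntime`. This is a hypothetical sketch: the exact input/output names and the decoder start token depend on the exported ONNX configs and the model's configuration.
```python
import numpy as np
import onnxruntime as ort

encoder = ort.InferenceSession("onnx/encoder_model.onnx")
decoder = ort.InferenceSession("onnx/decoder_model.onnx")

# Assumed input names: "pixel_values" (encoder), "input_ids" and "encoder_hidden_states" (decoder).
pixel_values = np.random.rand(1, 3, 224, 224).astype(np.float32)
encoder_hidden_states = encoder.run(None, {"pixel_values": pixel_values})[0]

decoder_start_token_id = 0  # placeholder; read the real value from the model config
input_ids = np.array([[decoder_start_token_id]], dtype=np.int64)
logits = decoder.run(
    None, {"input_ids": input_ids, "encoder_hidden_states": encoder_hidden_states}
)[0]
next_token_id = int(logits[0, -1].argmax())  # one greedy step; loop to generate a full sequence
```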
Usage
```python
model_ckpt = "nlpconnect/vit-gpt2-image-captioning"
!python -m transformers.onnx --model={model_ckpt} --feature=vision2seq-lm onnx/ --atol 1e-3
``` | 09-30-2022 10:35:57 | 09-30-2022 10:35:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, I would like to help here, It would be good for using Donut easier with ONNX :) @mht-sharma I can help fixing the errors that @lewtun comments<|||||>> Hi, I would like to help here, It would be good for using Donut easier with ONNX :) @mht-sharma I can help fixing the errors that @lewtun comments
Hi @WaterKnight1998 thanks for the help. I have updated with the PR with a new commit addressing the comments.<|||||>> Thanks for adding ONNX support for this highly request model type @mht-sharma 🔥 !!
>
> Overall the PR looks great and there's a few small things we need to correct:
>
> * since this model type is kind of special, I think we should document somewhere in the `serialization.mdx` docs that these models produce _two_ files
> * you need to update the table in `serialization.mdx` by running `make fix-copies`
* Added a Tip about the generation of two ONNX files for VisionEncoderDecoder models.
* Updated serialization.mdx with `make fix-copies`<|||||>> Thanks for iterating on this @mht-sharma - it's looking very good!
>
> I've left a bunch of nits, but once they're addressed I think this should be good to merge. Would you mind confirming that all the slow tests pass with these changes?
>
> ```
> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py
> ```
All tests pass<|||||>@mht-sharma For the command
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx
```
It's throwing output value error
```
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[✓] (3, 1200, 1024) matches (3, 1200, 1024)
-[x] values not close enough (atol: 1e-05)
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 180, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 113, in main
args.atol if args.atol else encoder_onnx_config.atol_for_validation,
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 456, in validate_model_outputs
"Outputs values doesn't match between reference model and ONNX exported model: "
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0018157958984375 for [ 1.5980988 0.5988426 -14.8206215 ... -5.1114273 4.5024166
2.8833218] vs [ 1.5982218 0.59886694 -14.820812 ... -5.1115417 4.502474
2.883381 ]
```
Am I missing something? or any suggestions? Thank you.<|||||>> @mht-sharma For the command
>
> ```
> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx
> ```
>
> It's throwing output value error
>
> ```
> Validating ONNX model...
> -[✓] ONNX model output names match reference model ({'last_hidden_state'})
> - Validating ONNX Model output "last_hidden_state":
> -[✓] (3, 1200, 1024) matches (3, 1200, 1024)
> -[x] values not close enough (atol: 1e-05)
> Traceback (most recent call last):
> File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
> File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 180, in <module>
> main()
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 113, in main
> args.atol if args.atol else encoder_onnx_config.atol_for_validation,
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 456, in validate_model_outputs
> "Outputs values doesn't match between reference model and ONNX exported model: "
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0018157958984375 for [ 1.5980988 0.5988426 -14.8206215 ... -5.1114273 4.5024166
> 2.8833218] vs [ 1.5982218 0.59886694 -14.820812 ... -5.1115417 4.502474
> 2.883381 ]
> ```
>
> Am I missing something? or any suggestions? Thank you.
Hi @BakingBrains , you need to increase the atol for the model. Please pass `--atol <value>` in the above command to export the model.<|||||>Also @mht-sharma is there any reference for the usage of ```ORTModelForConditionalGeneration``` (for encoder-decoder models) for inferencing.
Thanks and Regards<|||||>> Also @mht-sharma is there any reference for the usage of `ORTModelForConditionalGeneration` (for encoder-decoder models) for inferencing.
>
> Thanks and Regards
Hi @BakingBrains, currently only text seq2seq encoder-decoder models are supported in optimum. Please find the reference here: [ORTModelForSeq2SeqLM](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM)
I will be adding support for other modalities in optimum soon. Stay tuned!
<|||||>@NielsRogge @mht-sharma I referred to some of issues in Model Conversion , I have transformed my Tr-OCR base printed model into Two File encoder.onnx and decoder.onnx. Now as per this (https://github.com/huggingface/transformers/issues/19811) which i have followed thoroughly,I was trying with ORTEncoder and ORTDecoder but there seems to be issues in model.generate and also gives backhooks issue.Can you help here?<|||||>Hi @umanniyaz , could you share what is the error message you are getting? Probably would be best to open a new issue or comment on existing issue with sample snippet and error message.
I will be adding the support for the above model in `optimum` onnxruntime soon, which would enable you to run inference with the model directly. I could update the PR here in some days to keep you in loop. <|||||>Hi @mht-sharma, I'm getting this error message
```
!python -m transformers.onnx --model="microsoft/trocr-large-printed" --feature=vision2seq-lm onnx/ --atol 1e-3
Framework not requested. Using torch to export to ONNX.
Some weights of VisionEncoderDecoderModel were not initialized from the model checkpoint at microsoft/trocr-large-printed and are newly initialized: ['encoder.pooler.dense.bias', 'encoder.pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using framework PyTorch: 1.13.0+cu116
/usr/local/lib/python3.8/dist-packages/transformers/models/vit/modeling_vit.py:176: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/usr/local/lib/python3.8/dist-packages/transformers/models/vit/modeling_vit.py:181: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if height != self.image_size[0] or width != self.image_size[1]:
tcmalloc: large alloc 1219141632 bytes == 0x12f19c000 @ 0x7fb5e825d887 0x7fb5e6b53c29 0x7fb5e6b54afb 0x7fb5e6b54bb4 0x7fb5e6b54f9c 0x7fb520720a74 0x7fb520720fa5 0x7fb50ff9bced 0x7fb5355c16b4 0x7fb53507e6af 0x5d80be 0x5d8d8c 0x4fedd4 0x4997c7 0x55cd91 0x5d8941 0x49abe4 0x55cd91 0x5d8941 0x49abe4 0x55d078 0x5d8941 0x49abe4 0x55cd91 0x5d8941 0x4990ca 0x5d8868 0x4990ca 0x55cd91 0x55d743 0x627376
tcmalloc: large alloc 1219141632 bytes == 0x177c46000 @ 0x7fb5e825b1e7 0x4d30a0 0x5dede2 0x7fb5355c16eb 0x7fb53507e6af 0x5d80be 0x5d8d8c 0x4fedd4 0x4997c7 0x55cd91 0x5d8941 0x49abe4 0x55cd91 0x5d8941 0x49abe4 0x55d078 0x5d8941 0x49abe4 0x55cd91 0x5d8941 0x4990ca 0x5d8868 0x4990ca 0x55cd91 0x55d743 0x627376 0x5aaeb9 0x4990ca 0x55cd91 0x5d8941 0x4990ca
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[✓] (3, 577, 1024) matches (3, 577, 1024)
-[x] values not close enough (atol: 0.001)
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 180, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 107, in main
validate_model_outputs(
File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/convert.py", line 472, in validate_model_outputs
raise ValueError(
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.002560138702392578 for [ -6.9223547 -1.2663026 -6.01969 -4.5837965 6.1642013
2.0628624 -3.5507686 1.9246662 -0.676687 -1.4317726
-2.3281486 -1.0843381 -8.45131 -5.4161043 6.8614235
4.190215 -5.2153773 3.0814483 0.63340956 -2.0609605
-8.46502 0.8696586 -6.97839 2.6996267 2.5350282
-8.500374 -4.0548806 1.1920781 -3.4029136 -5.475586
0.7265783 1.5491551 -11.724315 -10.578344 0.1786897
-0.5544502 -0.03817908 -1.506731 -6.7059665 2.4884484
-10.02477 3.6103365 4.648042 ] vs [ -6.921267 -1.2674816 -6.02225 -4.585174 6.1653543
2.0647495 -3.551866 1.9266967 -0.67827606 -1.4307456
-2.3269713 -1.0858293 -8.453766 -5.4173226 6.862612
4.1921353 -5.216509 3.0834184 0.6319726 -2.0598402
-8.4638815 0.8707002 -6.9765515 2.6985872 2.533873
-8.501539 -4.0534215 1.1903901 -3.401852 -5.477874
0.72761816 1.5506129 -11.725906 -10.577168 0.1799282
-0.55576885 -0.0398552 -1.5055205 -6.7073493 2.489525
-10.022912 3.6091 4.646897 ]
```
https://colab.research.google.com/drive/1CxngHndMjLmpRkDreOS2GJSHbtcH1Olr#scrollTo=fc5SIz6uzs6p
in colab when switching to gpu the result it similar.
<|||||>@matthewchung74 it works with ```--atol 1e-2```
```!python -m transformers.onnx --model="microsoft/trocr-large-printed" --feature=vision2seq-lm onnx/ --atol 1e-2```

<|||||>@BakingBrains thank you very much. do you see any improvement in performance with the onnx? I don't for both gpu and cpu. I'm not sure I really understand. if you have any general thoughts, it'd be appreciated. I have sample code below.
https://colab.research.google.com/drive/1CxngHndMjLmpRkDreOS2GJSHbtcH1Olr#scrollTo=I0xqkudSoxZw<|||||>@matthewchung74 There won't be that much difference you can see in terms of OCR performance. But there will be difference in inference speed as well as the consumption of resources.<|||||>odd, I must be doing something wrong, since my inference using onnx is 75% slower. thanks for the response. I'll have to work on it some more.<|||||>Inference of ONNX on GPU or CPU? because in one of my case the ONNX pipeline on GPU was taking 4.7 sec and same on CPU it was 6.2 sec
Regarding the original model the inference speed on CPU for the pipeline was 7.4 sec whereas for the ONNX was 4.1 sec<|||||>After getting ONNX -encoder.onnx and decoder.onnx , on running in Seq2seq ONNX, model inference improves but Accuracy of OCR gets worse<|||||>use this for inference @matthewchung74 @BakingBrains https://github.com/huggingface/transformers/issues/20644<|||||>@umanniyaz I tried a script which is pretty much the same as @mht-sharma has here. https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39. the inference script in #20644 also looks pretty much the same as ant-sharmas. I'm not really sure what I am missing. do you have your experiment with the better performance in a colab or something sharable?<|||||>Hi @matthewchung74 , I am working on the inference of such models on [optimum@588](https://github.com/huggingface/optimum/pull/588). This implementation would use the iobinding to make the inference faster on GPU.
As per the thread above, the model was exported using an `--atol` of 1e-2, which is quite high and may result in accuracy drop on inference. Would separately check this once the above implementation is completed.<|||||>Updated!
Hi @matthewchung74 , probably try this :
https://github.com/umanniyaz/TR-OCR-ONNX-Optimization
From this original script https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39 as shared by @mht-sharma
It gives good inference and models accuracy remains preserved.
Try to keep model initialisations at compile time <|||||>@umanniyaz thank you for sharing . I'm still seeing some performance issues. I'm running your code almost as is and getting the following as output.
```
Model Output Original is : TICKET
Original time: 1.5399658679962158 TICKET
Model Ouput ORT is : TICKET
ORT time: 3.5313572883605957 TICKET
```
here is the code. perhaps the difference is the test image. is your test image something you can share?
https://colab.research.google.com/drive/1ojsslQPxUO67_dGzI4ok4rRk6CSlcwf4?usp=sharing<|||||>hello, someone gives me an onnx model for TrOcr please
<|||||>> hello, someone gives me an onnx model for TrOcr please
Hi @Kamilya2020 pls use the following guide to export the model to onnx. https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model <|||||>Hi @mht-sharma ,
can u help me please
I have this code
`using Microsoft.AspNetCore.Mvc;
using Microsoft.ML;
using Microsoft.ML.Data;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.PixelFormats;
namespace OcrSolution.API.Controllers;
public class ModelInput
{
[ColumnName("input1")]
[VectorType(1,1, 64, 64)]
public float[,,,] Input { get; set; }
}
public class ModelOutput
{
[ColumnName("output")]
[VectorType(1,1, 97)]
public float[,,] Output { get; set; }
}
public class OcrController : ControllerBase
{
private readonly MLContext _mlContext;
private PredictionEngine<ModelInput, ModelOutput> _predictionEngine;
public OcrController()
{
_mlContext = new MLContext();
try
{
// Load the ONNX model
var modelPath =
"C:\\Users\\k.mimouni\\Desktop\\ocr web app\\sw-kamilia-2023\\components\\Server\\OcrSolution.API\\assets\\Model\\1_recognition_model.onnx";
var pipeline = _mlContext.Transforms.ApplyOnnxModel(modelPath);
var dataView = _mlContext.Data.LoadFromEnumerable(new[] { new ModelInput() });
var transformer = pipeline.Fit(dataView);
// Verify transformer is not null after fitting
if (transformer == null)
{
throw new Exception("Transformer is null after fitting the pipeline.");
}
_predictionEngine = _mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(transformer);
// Verify predictionEngine is not null after creation
if (_predictionEngine == null)
{
throw new Exception("Prediction engine is null after creation.");
}
Console.WriteLine("ONNX model loaded successfully.");
}
catch (Exception ex)
{
Console.WriteLine($"Error loading ONNX model: {ex.Message}");
_predictionEngine = null; // Set _predictionEngine to null in case of an error
}
}
[HttpPost("ocr")]
public IActionResult PerformOCR(IFormFile imageFile)
{
if (_predictionEngine != null)
{
try
{
// Check if a file is uploaded
if (imageFile != null && imageFile.Length > 0)
{
// Resize and load the image data
var image = ResizeImage(imageFile);
if (image == null)
{
Console.WriteLine("Failed to resize the image.");
return StatusCode(500, "Failed to resize the image.");
}
var imageData = LoadImageData(image);
// Create the model input
var input = new ModelInput { Input = imageData };
if (input == null)
{
Console.WriteLine("Failed to create the model input.");
return StatusCode(500, "Failed to create the model input.");
}
// Make a prediction
var prediction = _predictionEngine.Predict(input);
if (prediction == null || prediction.Output == null)
{
Console.WriteLine("Failed to make a prediction or prediction output is null.");
return StatusCode(500, "Failed to make a prediction or prediction output is null.");
}
// Process the output and extract the text
var extractedText = ExtractText(prediction.Output);
// Return the extracted text
return Ok(new { text = extractedText });
}
// No image file uploaded
Console.WriteLine("No image file uploaded.");
return BadRequest("No image file uploaded.");
}
catch (Exception ex)
{
// Error occurred while performing OCR
Console.WriteLine($"An error occurred while performing OCR: {ex}");
return StatusCode(500, $"An error occurred while performing OCR: {ex}");
}
}
// Error loading ONNX model or prediction engine
return StatusCode(500, "Error loading ONNX model or prediction engine.");
}
private Image<Rgba32> ResizeImage(IFormFile imageFile)
{
using (var memoryStream = new MemoryStream())
{
// Copy the file content to a memory stream
imageFile.CopyTo(memoryStream);
// Load the image using SixLabors.ImageSharp
memoryStream.Seek(0, SeekOrigin.Begin);
var image = Image.Load<Rgba32>(memoryStream, out var format);
if (image == null)
{
throw new Exception("Failed to load the image.");
}
// Resize the image
var resizedImage = image.Clone(x => x.Resize(new ResizeOptions
{
Size = new Size(754, 64),
Mode = ResizeMode.Stretch
}));
if (resizedImage == null)
{
throw new Exception("Failed to resize the image.");
}
return resizedImage;
}
}
private float[,,,] LoadImageData(Image<Rgba32> image)
{
var imageData = new float[1, 1, image.Height, image.Width];
// Iterate over the pixels and convert them to float values
for (int y = 0; y < image.Height; y++)
{
for (int x = 0; x < image.Width; x++)
{
var pixel = image[x, y];
var pixelValue = GetPixelValue(pixel);
imageData[0, 0, y, x] = pixelValue;
}
}
return imageData;
}
private float GetPixelValue(Rgba32 pixel)
{
// Normalize the pixel value to the range [0, 1]
return pixel.R / 255f;
}
private string ExtractText(float[,,] output)
{
string extractedText = "";
for (int b = 0; b < output.GetLength(0); b++) // batch size
{
for (int h = 0; h < output.GetLength(1); h++) // output height
{
for (int w = 0; w < output.GetLength(2); w++) // output width (number of characters)
{
int maxIndex = 0;
float maxValue = 0;
for (int c = 0; c < output.GetLength(2); c++)
{
if (output[b, h, c] > maxValue)
{
maxIndex = c;
maxValue = output[b, h, c];
}
}
char predictedChar = (char)maxIndex;
extractedText += predictedChar;
}
}
}
return extractedText;
}
}`
here's the error:
`An error occurred while performing OCR: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.ML.Data.TypedCursorable`1.TypedRowBase.<>c__DisplayClass8_0`1.<CreateDirectVBufferSetter>b__0(TRow row)
at Microsoft.ML.Data.TypedCursorable`1.TypedRowBase.FillValues(TRow row)
at Microsoft.ML.Data.TypedCursorable`1.RowImplementation.FillValues(TRow row)
at Microsoft.ML.PredictionEngineBase`2.FillValues(TDst prediction)
at Microsoft.ML.PredictionEngine`2.Predict(TSrc example, TDst& prediction)
at Microsoft.ML.PredictionEngineBase`2.Predict(TSrc example)
at OcrSolution.API.Controllers.OcrController.PerformOCR(IFormFile imageFile) in C:\Users\k.mimouni\Desktop\ocr web app\sw-kamilia-2023\components\Server\OcrSolution.API\Controllers\OcrController.cs:line 95`<|||||>my onnx model from this link https://github.com/JaidedAI/EasyOCR/issues/786 |
transformers | 19,253 | closed | Add `beautifulsoup4` to the dependency list | # What does this PR do?
Add `beautifulsoup4` to the dependency list - to enable the tests `MarkupLMFeatureExtractionTest` and `MarkupLMProcessorIntegrationTests`
On CircleCI, we can now have the following tested (these have `@require_bs4`):
```bash
PASSED tests/models/markuplm/test_feature_extraction_markuplm.py::MarkupLMFeatureExtractionTest::test_call
PASSED tests/models/markuplm/test_feature_extraction_markuplm.py::MarkupLMFeatureExtractionTest::test_feat_extract_from_and_save_pretrained
PASSED tests/models/markuplm/test_feature_extraction_markuplm.py::MarkupLMFeatureExtractionTest::test_feat_extract_to_json_file
PASSED tests/models/markuplm/test_feature_extraction_markuplm.py::MarkupLMFeatureExtractionTest::test_feat_extract_to_json_string
PASSED tests/models/markuplm/test_feature_extraction_markuplm.py::MarkupLMFeatureExtractionTest::test_init_without_params
``` | 09-30-2022 08:10:21 | 09-30-2022 08:10:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger I think I misread your message, and end up putting the installation in the CircleCI `config.yml` and GitHub workflow files. Should I instead put `beautifulsoup4` in `extras["testing"]`? That way might be much easier.<|||||>Yes I meant to add it in `extras["testing"]`.<|||||>test failure irrelevant, merge now. |
transformers | 19,252 | closed | Catch `HFValidationError` in `TrainingSummary` | # What does this PR do?
`transformers` now uses `huggingface_hub 0.10` for testing. We have a few failing `deepspeed` tests ([see here](https://github.com/huggingface/transformers/actions/runs/3148346207/jobs/5118805294) or see the error below).
According to @Wauplin, we should catch `HFValidationError` in `TrainingSummary.__post_init__()` too in order to keep the tests passing. I ran those tests with this PR and confirmed they pass now.
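A minimal sketch of the kind of change this implies in `TrainingSummary.__post_init__` (illustrative; it assumes `HFValidationError` is importable from `huggingface_hub.utils`, as the traceback below suggests, and the actual diff may differ):
```python
from huggingface_hub import model_info
from huggingface_hub.utils import HFValidationError
from requests.exceptions import HTTPError


def get_base_model_info(finetuned_from):
    """Return Hub metadata for the base model, or None if it is not a valid repo id."""
    try:
        return model_info(finetuned_from)
    except (HTTPError, HFValidationError):
        # e.g. `finetuned_from` is a local path such as /tmp/tmph2e0bt7a
        return None
```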
#### errors without this PR
```bash
E File "/workspace/transformers/examples/pytorch/translation/run_translation.py", line 645, in main
E trainer.create_model_card(**kwargs)
E File "/workspace/transformers/src/transformers/trainer.py", line 3332, in create_model_card
E training_summary = TrainingSummary.from_trainer(
E File "/workspace/transformers/src/transformers/modelcard.py", line 600, in from_trainer
E return cls(
E File "<string>", line 17, in __init__
E File "/workspace/transformers/src/transformers/modelcard.py", line 377, in __post_init__
E info = model_info(self.finetuned_from)
E File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 92, in _inner_fn
E validate_repo_id(arg_value)
E File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 136, in validate_repo_id
E raise HFValidationError(
E huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/tmp/tmph2e0bt7a'. Use `repo_type` argument if needed.
```
| 09-30-2022 06:59:39 | 09-30-2022 06:59:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For the record, below is the discussion on Slack
> If seems that [here](https://github.com/huggingface/transformers/blob/main/src/transformers/modelcard.py#L376), there is a try to load info from the Hub with a repo id that is a path. Previously it was caught with a HTTPError but now it should be with a HFValidationError . In fact this is actually the goal of this validation step, to avoid having a downstream library (transformers here) trying to send a repo_id that is not a repo_id at all => we are catching the error explicitly "hey you passed a wrong repo_id" rather than having an HTTP 404 that is less informative (the problem is not that the repo doesn't exist it's that it couldn't even be a repo in the first place).<|||||>@Wauplin Flagging this is a breaking change, so should probable be spelled out in the [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.10.0).<|||||>@sgugger I would not really consider this a breaking change. Sending a path as `repo_id` has never been a supported use case, the only difference is that we now raise a specific exception for it.
Anyway, I still updating the release notes, just in case someone reads them.<|||||>@Wauplin I believe Sylvain is talking about the `HFValidationError` which is now returned instead of previously an `HTTPError`. I agree this should also be mentioned in the "Breaking changes" part of the release notes as workflows that previously caught `HTTPErros` will now fail. |
transformers | 19,251 | closed | "local_files_only" mode fails on Windows due to path format inconsistency | ### System Info
When loading models in "local_files_only" mode, Hugging Face Transformers tries to extract the commit hash from a resolved file path. This usually works fine on POSIX-compliant OSes, but it fails on Windows due to a hardcoded regex search pattern.
This issue can be fixed in one line (add one line in utils/hub.py), see this screenshot:

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce,
1. Install huggingface transformers on Windows (anaconda)
2. run the following code without "local_files_only" mode and let huggingface cache all the needed files
3. run it again with local_files_only=True
code:
```
from transformers import AutoTokenizer, AutoConfig
model_name = "google/t5-efficient-small" # any correct model name can used to reproduce the error here.
config = AutoConfig.from_pretrained(model_name )
tokenizer = AutoTokenizer.from_pretrained(model_name, config=config, local_files_only=True) # TypeError
```
### Expected behavior
Huggingface transformers should work correctly across all supported OSes, using unified file path standard.
One quick fix is provided above, no extended test was conducted on the fix though.
Environment:
Windows 11
Huggingface transformers: 4.22.1
python 3.9 | 09-30-2022 06:09:44 | 09-30-2022 06:09:44 | Fixed here https://github.com/huggingface/transformers/pull/19178<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,250 | closed | Fix Encoder-Decoder testing issue about repo. names | # What does this PR do?
Some tests for encoder-decoder models use the repo name `../gpt2`. This worked before `huggingface-hub==0.10`, but fails with that version.
This PR changes `../gpt2` to `gpt2`. In any case, it is less confusing even if `../gpt2` worked before. | 09-30-2022 05:54:15 | 09-30-2022 05:54:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also flagging to @Wauplin that this is another breaking change in the last release that is not spelled out in the [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.10.0).<|||||>Same answer as [here](https://github.com/huggingface/transformers/pull/19252#issuecomment-1263651027). |
transformers | 19,249 | closed | Enabling custom TF signature draft | # What does this PR do?
This PR is related to the discussion on issue #19094 , which is related to `transformers` enabling users to provide custom signatures for models while saving them with [save_pretrained](https://github.com/huggingface/transformers/blob/ca485e562b675341409e3e27724072fb11e10af7/src/transformers/modeling_tf_utils.py#L2085).
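For context, a hedged sketch of the underlying TensorFlow mechanism this builds on, namely attaching a custom serving signature when a model is written out as a SavedModel; it uses plain `tf.saved_model.save`, and whatever argument `save_pretrained` eventually exposes for this may look different:
```python
import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-cased")


# A custom serving function with a fixed sequence length of 384 instead of the default.
@tf.function(
    input_signature=[
        {
            "input_ids": tf.TensorSpec((None, 384), tf.int32, name="input_ids"),
            "attention_mask": tf.TensorSpec((None, 384), tf.int32, name="attention_mask"),
        }
    ]
)
def serving_fn(inputs):
    return model(inputs)


tf.saved_model.save(model, "saved_model/custom", signatures={"serving_default": serving_fn})
```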
Fixes #19094
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Did you write any new necessary tests? | 09-30-2022 01:07:17 | 09-30-2022 01:07:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Adding a test (perhaps inside [this test class](https://github.com/huggingface/transformers/blob/f3d2f7a6e08efe18debf59512325f02128394b43/tests/test_modeling_tf_common.py#L1913)) would be great!<|||||>Hey there, I think that this code is finished and ready to be merged, I fixed the code and added the tests, let me know if the tests look too hardcoded or if I could check something else.
It seems that the failed tests are not related to my changes.
cc @gante @Rocketknight1 @sgugger <|||||>@dimitreOliveira it seems ready to merge.
There is an apparently unrelated failure, but let's rule it out. Can I request you to rebase and commit? i.e. ensure your fork is up to date with the latest main and then run
```bash
git checkout main
git pull
git checkout custom_tf_signature
git rebase origin/main
git push origin custom_tf_signature -f
```<|||||>@gante should be good now, thanks! |
transformers | 19,248 | closed | Skip pipeline tests | # What does this PR do?
Skip tests failing for now (cc @alaradirik ). | 09-29-2022 16:16:22 | 09-29-2022 16:16:22 | |
transformers | 19,247 | closed | Update Protobuf dependency version to fix known vulnerability | # What does this PR do?
Protobuf v3.20.1 has a known security vulnerability, [described here on snyk](https://security.snyk.io/package/pip/protobuf/3.20.1). Bumping the protobuf dependency version fixes this vulnerability.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Library:
- pipelines: @LysandreJik | 09-29-2022 15:07:25 | 09-29-2022 15:07:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,246 | closed | Enable numpy behaviour across the TF codebase | This PR enables TF Numpy behaviour for all of our models, which will hopefully reduce the number of PRs we have to make to fix dtype incompatibilities. This flag mainly relates to type promotion (TensorFlow is strict by default, and won't automatically promote if you e.g. add an `int32` tensor to an `int64`), but it also enables some Numpy-like indexing that the standard TF indexing doesn't allow.
It shouldn't break anything, and despite being marked as experimental it's been in TF since 2.4 and actively promoted by Chollet on his Twitter, so I think it's stable and here to stay.
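For a concrete picture of what the flag changes, here is a small self-contained example of the type promotion described above (behaviour per the documentation linked below):
```python
import tensorflow as tf

tf.experimental.numpy.experimental_enable_numpy_behavior()

# Without the flag, adding tensors of different integer dtypes raises an error;
# with NumPy behaviour enabled, the result is promoted to the wider dtype.
x = tf.constant([1, 2, 3], dtype=tf.int32)
y = tf.constant([1, 2, 3], dtype=tf.int64)
print((x + y).dtype)  # -> int64
```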
Relevant TF documentation is [here](https://www.tensorflow.org/api_docs/python/tf/experimental/numpy/experimental_enable_numpy_behavior) | 09-29-2022 13:47:41 | 09-29-2022 13:47:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19246). All of your documentation changes will be reflected on that endpoint.<|||||>Update: We realized some issues related to float promotion that would make this very messy, closing this PR for now! |
transformers | 19,245 | closed | Trainer very slow to start training | ### System Info
- `transformers` version: 4.19.2
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@sgugger , this is related to the trainer
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have adapted the run_summarisation.py script to perform dialogue state tracking. I use T5 base model, starting from Google's v1_1 pretrained model. My dataset contains 9,608,136 examples, which are split into 6 json files.
My dataset is loaded using:
```
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir, field='data')
```
Above `data_files` is a list of `6` json files, each containing 1/6 of the data.
Then it is processed using:
```
train_dataset = train_dataset.map(
preprocess_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
desc="Running tokenizer on train dataset",
)
```
After this, I initialize the trainer and call train:
``` train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
In the past, I noticed that there is about a half an hour delay between the method call and when the training actually starts. However, without any change to the code the trainer seems to start very slowly. My logs look as follows:
```
09/29/2022 03:30:18 - INFO - __main__ - Initialising trainer...
09/29/2022 03:30:18 - INFO - __main__ - Starting training...
[INFO|trainer.py:470] 2022-09-29 03:30:19,248 >> max_steps is given, it will override any value given in num_train_epochs
09/29/2022 03:30:19 - INFO - __main__ - Starting training...
[INFO|trainer.py:1419] 2022-09-29 04:41:11,893 >> ***** Running training *****
[INFO|trainer.py:1420] 2022-09-29 04:41:11,893 >> Num examples = 9608136
[INFO|trainer.py:1421] 2022-09-29 04:41:11,893 >> Num Epochs = 2
[INFO|trainer.py:1419] 2022-09-29 04:41:11,894 >> ***** Running training *****
[INFO|trainer.py:1422] 2022-09-29 04:41:11,893 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1420] 2022-09-29 04:41:11,894 >> Num examples = 9608136
[INFO|trainer.py:1421] 2022-09-29 04:41:11,894 >> Num Epochs = 2
[INFO|trainer.py:1423] 2022-09-29 04:41:11,893 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1422] 2022-09-29 04:41:11,894 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1424] 2022-09-29 04:41:11,893 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1423] 2022-09-29 04:41:11,894 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1425] 2022-09-29 04:41:11,893 >> Total optimization steps = 40000
[INFO|trainer.py:1424] 2022-09-29 04:41:11,894 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1425] 2022-09-29 04:41:11,894 >> Total optimization steps = 40000
```
Notice it takes over 1 hour to actually start running the training loop. The expected training time is 12 hrs on 8 A100 gpus across two nodes. However, 4h and 20 minutes later I had trained 342 steps (usually there should be more than one step per second).
I have no idea what is actually going on, why is the trainer so slow to start? The processing of the data seems to be happening rather quickly. I would expect there is a delay at startup because presumably the large dataset has to be put in a table for all the workers to access. Why is this so slow?
@sgugger needless to say I can make the code and data available to you for a deep investigation.
### Expected behavior
A lag between the `.train` call and the training running is expected because the arrow table has to be built. However, I would expect that following the initial lag, the training happens rather quickly, but this does not happen. | 09-29-2022 13:28:30 | 09-29-2022 13:28:30 | Hi there. You should probably try the [forums](https://discuss.huggingface.co/) to have the community help you debug the code. Without a clear and concise reproducer, there is nothing I can do to fix it on my side<|||||>@sgugger this issue has been flagged before but there is no answer. I'm 100% sure the code is correct (I wrote an entire paper based on experiments I did on this code). Hence why I was keen to make the code available to you so that we can figure out why the trainer gets slowed down so much.
Some posts I checked:
https://discuss.huggingface.co/t/it-takes-so-long-before-the-model-start-training-wav2vec2-fine-tuning/5384
This is not helpful at all.
I also checked this post (https://github.com/huggingface/datasets/issues/4394) ... My data is already split into six shards of about 1.7GB each, so I am already following best practice?<|||||>For posterity, crazy slow training and a long wait for training to start can also be due to slow I/O, so people struggling should check that their storage works fine.
transformers | 19,244 | closed | Use `hf_raise_for_status` instead of deprecated `_raise_for_status` | # What does this PR do?
Replace `huggingface_hub.utils._errors._raise_for_status` by the officially supported `huggingface_hub.utils.hf_raise_for_status`.
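For reference, a minimal usage sketch of the new helper (the URL below is just an illustration):
```python
import requests
from huggingface_hub.utils import hf_raise_for_status

response = requests.get("https://huggingface.co/api/models/bert-base-uncased")
# Raises huggingface_hub's richer HTTP errors (instead of a bare requests.HTTPError)
# when the response status code indicates a failure
hf_raise_for_status(response)
```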
Related to https://github.com/huggingface/huggingface_hub/pull/1019#issuecomment-1233864547.
Requires `huggingface_hub>=0.10.0`. | 09-29-2022 08:40:42 | 09-29-2022 08:40:42 | Yep true. Thanks for letting me know. Is it also required to run `make deps_table_update` ? I did it in [my last commit ](https://github.com/huggingface/transformers/pull/19244/commits/f0dcac9fdf512105f01a5dd55bc1229a63cb6e73) but I can remove it.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, I should be good now. @sgugger I let you merge it when it's green :) |
transformers | 19,243 | closed | Fix opt softmax small nit | # What does this PR do?
- Following PR #18057 we found out that the argument `dtype_attn_weights` is not defined outside the condition `if attention_mask is not None:`, which makes the modeling code prone to potential bugs. Also, the solution provided in #18057 is clearer, so this PR applies the same change to make the implementations consistent across models that suffer from the same Softmax issue (a sketch of the pattern is shown below).
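For illustration, a small self-contained sketch of the #18057 pattern (illustrative only, not this PR's exact diff): the softmax dtype is chosen from the attention weights themselves, so it is well defined whether or not an `attention_mask` was added beforehand.
```python
import torch
import torch.nn as nn

attn_weights = torch.randn(2, 4, 4).to(torch.float16)
if attn_weights.dtype == torch.float16:
    # upcast to fp32 for a numerically stable softmax, then cast back
    attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(torch.float16)
else:
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
```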
cc @ydshieh @sgugger
Can also confirm slow tests pass! | 09-29-2022 08:03:24 | 09-29-2022 08:03:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,242 | closed | Memory Retention Is fibonacci : (n-1) + fibonacci (n-2)? | Understanding wrap around.
HTML
Hierarchical
Multi Task Learning?
Tokens Access Inclusion | 09-29-2022 02:30:54 | 09-29-2022 02:30:54 | (this is not a `transformers` issue) |
transformers | 19,241 | closed | load_best_model_at_end | ### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.3
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: <yes
### Who can help?
@sgugger, Hi, when I finish training a t5 model with the arg ```--load_best_model_at_end```, I find that there are still four checkpoint dirs in output_dir (I set max_epoch=4), and there is also a ```pytorch_model.bin``` in the output dir. They are organized like this:
```
output_dir
--pytorch_model_1.bin
--ckpt1_dir
----pytorch_model_2.bin
--ckpt2_dir
----pytorch_model_3.bin
--ckpt3_dir
----pytorch_model_4.bin
--ckpt4_dir
----pytorch_model_5.bin
```
Does it mean that pytorch_model_1.bin is the best model, i.e. can I say that AutoModel.from_pretrained('./output_dir') would load the best model?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
None | 09-29-2022 02:11:00 | 09-29-2022 02:11:00 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. As indicated by the name of the arg, the best model is the one loaded inside the Trainer. You should save it wherever you want for further use.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,240 | closed | Current BART Position Embeddings Implementation Seems Wrong | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.13.0a0+08820cb (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I understand that both BART and RoBERTa came from Facebook with the original implementation in FairSeq, and BART's offset in the position embedding is copied from RoBERTa's implementation. When Facebook first implemented RoBERTa in fairseq they had the offset, and HF also copied that in their implementation. Based on this [issue](https://github.com/huggingface/transformers/issues/15292#issuecomment-1019116008) and this other [issue](https://github.com/huggingface/transformers/issues/10736#issuecomment-800175342) I understand that the motivation for this was to use `nn.Embedding`'s `padding_idx` to make sure we don't learn position vectors for padding tokens. Since both RoBERTa and BART have `padding_id` as 1, we pass in `padding_idx=1` in `nn.Embedding` for the position embedding table. And therefore, the non-padding tokens get offset by `padding_idx + 1` and `num_embeddings += padding_idx + 1`. And going through the old transformers BART code [here](https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L740) and [here](https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_utils.py#L2067) the code makes sense. And on an example input the behavior of `create_position_ids_from_input_ids` makes sense: we offset the position ids of non-padding tokens and padding tokens get assigned position 1:
```
import torch
from transformers import AutoTokenizer, AutoModel
def create_position_ids_from_input_ids(input_ids, padding_idx):
""" Replace non-padding symbols with their position numbers. Position numbers begin at
padding_idx+1. Padding symbols are ignored. This is modified from fairseq's
`utils.make_positions`.
:param torch.Tensor x:
:return torch.Tensor:
"""
# The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() + padding_idx
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
bart = AutoModel.from_pretrained("facebook/bart-base")
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
inputs = tokenizer(batch_sentences, padding=True, return_tensors="pt")
input_ids = tokenizer(batch_sentences, padding=True, return_tensors="pt").input_ids
create_position_ids_from_input_ids(input_ids, padding_idx=1)
tensor([[ 2, 3, 4, 5, 6, 7, 8, 9, 1, 1, 1, 1, 1],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 2, 3, 4, 5, 6, 7, 8, 9, 1, 1, 1, 1, 1]])
```
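As a quick illustration (not part of the code above) of what `padding_idx` does in `nn.Embedding`: the padded row is zero-initialized and never receives gradient updates.
```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=1)
print(emb.weight[1])          # the padding row is initialized to zeros
emb(torch.tensor([1, 2])).sum().backward()
print(emb.weight.grad[1])     # and it never gets gradient updates
```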
But starting with some transformers version (perhaps 4.8.0), the BART code was changed and the `LearnedPositionEmbedding` and `create_position_ids_from_input_ids` functions were replaced with just this: [BartLearnedPositionalEmbedding](https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/models/bart/modeling_bart.py#L120), which gets used in both the encoder and decoder. The hard-coded offset of 2 here makes sense because BART's pad token is 1, so `padding_idx (=1) + 1 = 2`, and we offset the non-padding tokens by 2. But what is not clear is why [we are not even passing](https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/models/bart/modeling_bart.py#L129) `padding_idx` to the `nn.Embedding` constructor anymore, unlike the [old implementation](https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/src/transformers/modeling_bart.py#L755), because I thought the whole point of the offset was to use `padding_idx` in `nn.Embedding` for the padding token.
Also, in the current implementation we add the offset to all tokens (including pad tokens), which means padding positions are also learned in the current version of BART; doesn't that defeat the whole point of having the offset? Is the current implementation of BART position embeddings wrong?
Here is an example of a modified BartLearnedPositionalEmbedding which returns positions + offset
```
class BartLearnedPositionalEmbedding(nn.Embedding):
"""
This module learns positional embeddings up to a fixed maximum size.
"""
def __init__(self, num_embeddings: int, embedding_dim: int):
# Bart is set up so that if padding_idx is specified then offset the embedding ids by 2
# and adjust num_embeddings appropriately. Other models don't have this hack
self.offset = 2
super().__init__(num_embeddings + self.offset, embedding_dim)
def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0):
"""`input_ids' shape is expected to be [bsz x seqlen]."""
bsz, seq_len = input_ids.shape[:2]
positions = torch.arange(
past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
).expand(bsz, -1)
return positions + self.offset
pos_embed = BartLearnedPositionalEmbedding(num_embeddings=1024, embedding_dim=512)
pos_embed(input_ids)
tensor([[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]])
```
### Expected behavior
Expected the new `BartLearnedPositionalEmbedding` to pass `padding_idx=1` to the `nn.Embedding` constructor and add the offset only to non-pad tokens to prevent learning padding positions, like the old implementation did. | 09-28-2022 23:53:53 | 09-28-2022 23:53:53 | This comment was posted because the issue had been automatically marked as stale due to lack of recent activity, but it was flagged by a user as an issue that still needs to be addressed.<|||||>Might be of interest to @ArthurZucker <|||||>Hey! So this was answered in #10200, where the `padding_idx` was removed. It explains that adding `padding_idx` prevents the model from ever learning the first position (and other positional tokens) that can be set to `0`.
Tell me if this answers your question! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Longer answer:
The reason why we are not using
```python
>>> embed_tokens = nn.embedding(vocab_dim, hidden_dim, padding_idx)
```
Is that this makes the positions at index `padding_idx` un-learnable, and it zeros them out.
What if you change the padding index to something bigger? Let's say `4`: then the embedding at index `4` will be zeroed out (basically erased), but for the model, that means it will never receive the embedding that should be at position 4.
→ Potential usage: Imagine if you need a new starting token in your BartModel. The padding token will no longer be 2 but 4. This means you just want to shift the inputs' learned positions by 2, not that you want to zero-out the learned position embedding at position 4.
Snippet:
```python
# during training
>>> input_ids = [ 3, 13, 25, 1, 1 ,1 ,1]
>>> pad_token_id = 1
>>> positions = [ 0, 1, 2, 3, 4, 5, 6]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8]
>>> embedding = [ X2, X3, X4, X5, X6, X7, X8]
# finetuning with one more token
>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2
>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]
>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9]
>>> embedding = [ X2, X3, 0, X5, X6, X7, X8, X9]
# With the code fix:
# finetuning with one more token
>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2
>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]
>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]
>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9]
>>> embedding = [ X2, X3, X4, X5, X6, X7, X8, X9]
```
If you zero-out the embeddings corresponding to the index of the padding token, changing the ID of the padding token will result in a change of the inputs that are positioned at this index.
The subtle difference is that it does not matter if your padding token has index 0, 1, or 999.
The tokens that are at the position of the index (let's say the 999th token) should not have a zeroed-out embedding. But, if the token at that position is a padding token, then the loss will not make it contribute.
If we zero out at index 4, the 4th token will never have a learned positional embedding. |
transformers | 19,239 | closed | Fix TrainingArgs argument serialization | # What does this PR do?
Filter out arguments declared with `field(init=False)` in TrainingArguments.to_dict() so arguments such as `_n_gpu` won't be serialized.
Fixes #19236
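For illustration, a minimal self-contained sketch of the filtering idea (not the exact `Trainer` code):
```python
import dataclasses
from dataclasses import dataclass, field

@dataclass
class Args:
    output_dir: str = "out"
    _n_gpu: int = field(default=-1, init=False)

def to_dict(args):
    # keep only fields declared with init=True, so field(init=False) attributes are dropped
    return {f.name: getattr(args, f.name) for f in dataclasses.fields(args) if f.init}

print(to_dict(Args()))  # {'output_dir': 'out'}  (no `_n_gpu`)
```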
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 09-28-2022 23:48:08 | 09-28-2022 23:48:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,238 | closed | Exporting TensorFlow models to ONNX exports with a static batch size of 2 and sequence length of 8 | ### System Info
- `transformers` version: 4.22.0
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.12
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
With `tensorflow` and `onnx` installed, run
`python -m transformers.onnx --model=bert-base-uncased --feature=sequence-classification onnx/`
along with a simple script to test the exported model
```py
import numpy as np
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
session = InferenceSession("./onnx/model.onnx")
inputs = tokenizer(["A string", "A longer string", "An even longer string"], return_tensors="np", padding=True)
inputs = {k: v.astype(np.int32) for k, v in inputs.items()}
print(session.run(output_names=["logits"], input_feed=dict(inputs)))
```
This raises the error:
```
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: token_type_ids for the following indices
index: 0 Got: 3 Expected: 2
index: 1 Got: 6 Expected: 8
Please fix either the inputs or the model.
```
because we use the TensorSpec from the dummy inputs here https://github.com/huggingface/transformers/blob/0fc68a7e14b1e6450829e7be76f74abbc84f051e/src/transformers/onnx/convert.py#L265
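For comparison, a hedged sketch of what a dynamic signature could look like (illustrative only, not the library's actual export code): using `None` dimensions keeps the batch size and sequence length dynamic in the exported graph.
```py
import tensorflow as tf

input_signature = [
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),
    tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
    tf.TensorSpec([None, None], tf.int32, name="token_type_ids"),
]
```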
### Expected behavior
This issue is related to https://github.com/huggingface/transformers/issues/16885 which was a feature request for setting the batch size and sequence size, but I'm not sure we want this to be configurable. Arguably, it should be at parity with the PyTorch -> onnx capability which configures dynamic batch size and sequence length set by the model. Therefore I'm marking this as a bug instead of a feature request.
Expected behavior is that the exported onnx model runs with any batch size and up to the model's max sequence length | 09-28-2022 22:24:04 | 09-28-2022 22:24:04 | @dwyatte Can you please look into #19231? Somehow the sequence length is capped to 5 tokens in the generated tflite. tflite files are used to run models on mobile devices.
Since your issue also has a fixed sequence length, the cause may be related. I tried converting BERT to tflite and the input size was 1x5, as with other models.
Please look into this if you can. Thanks for your time! :) |
transformers | 19,237 | closed | Fix test fetching for examples | # What does this PR do?
When re-working the circleCI config to execute test fetching first, I didn't do anything special for the examples, which means that if only an example file is modified, no tests are run (by default only tests found in the tests subfolder are kept; for the examples we need to add `--filters tests examples`).
You can check on [this report](https://github.com/huggingface/transformers/runs/8604472112) that when only example files are modified, the tests for all examples are still run (next step would be to properly filter between TF/Flax and PyTorch but this is going to take slightly longer). | 09-28-2022 19:56:26 | 09-28-2022 19:56:26 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 19,236 | closed | TrainingArguments._n_gpus serialization | I'm not sure, but I think that `TrainingArguments._n_gpu` attribute is not supposed to be serialized in `TrainingArguments.to_dict()` .
Or even basically all the attributes that are defined as `dataclass.field(init=False...)` should be filtered out while doing `to_dict()`.
What do you think?
@sgugger | 09-28-2022 19:36:48 | 09-28-2022 19:36:48 | Indeed, this argument shouldn't be serialized.<|||||>I'm on it |
transformers | 19,235 | closed | Add and save job names in Past CI artifacts | # What does this PR do?
In order to ease the fix process for Past CI, it would be very helpful to have links to jobs for the failed tests.
The GitHub API can give a list of jobs and a list of artifacts, but there is no guaranteed and easy way to get the correspondence between each artifact/job, see [jobs here](https://api.github.com/repos/huggingface/transformers/actions/runs/3145556680/jobs) and [artifacts here](https://api.github.com/repos/huggingface/transformers/actions/runs/3145556680/artifacts).
Moreover, when the jobs are running, there is no context env. variable that gives the job run link (or anything helpful to get this information).
However, the list of jobs given by the API contains the **job names** and the **job run page** link (given by `html_url`).
This PR therefore saves the job names in the corresponding test artifacts. Once the past CI is done:
- we extract the error information together with the job names
- we fetch the list of jobs
- For each artifact, we find the corresponding job in the job list using the job name saved in the artifact
- We get the job run link by looking at `html_url`
- We add the job run link to the error info (the lookup is sketched below).
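A minimal sketch of the lookup (the endpoint is the public GitHub API linked above; storing the job name in each artifact is what this PR adds):
```python
import requests

run_id = 3145556680
url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{run_id}/jobs"
jobs = requests.get(url, params={"per_page": 100}).json()["jobs"]
# map job name -> job run page, then look up the job name saved in each artifact
job_links = {job["name"]: job["html_url"] for job in jobs}
```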
To check the effect of this PR, see the contents in the artifacts in [this run](https://github.com/huggingface/transformers/actions/runs/3145556680) | 09-28-2022 17:44:18 | 09-28-2022 17:44:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,234 | closed | Fix confusing working directory in Push CI | # What does this PR do?
⚠️ ~~Let me launch a tiny run to make sure before merge~~ 🙏
Previously, the job `setup` (to prepare the tests to run) in push CI ran on `ubuntu-latest` runners, and had to clone `transformers` with `actions/checkout@v2`.
In #19054, it was changed to use the docker image (and run on our runner). There is no particular reason for it, just to make the sequence of jobs look more linear.
But I forgot to change the working directory in #19054. The job still ran without failure (as the 2 `transformers` directories exist at the same time), but it looks somewhat confusing. This PR cleans this up.
transformers | 19,233 | closed | Fixes HFTracer getattr for PyTorch 1.13+ | # What does this PR do?
There was an API change where `torch.fx.Tracer._module_getattr` became `torch.fx.Tracer.getattr`, causing issues when using the `HFTracer` with `torch-nightly`. This PR fixes that.
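A hedged sketch of the idea (not the actual diff): route the new public hook name used by PyTorch 1.13+ to the older private one so both versions share the same logic.
```python
from torch.fx import Tracer

class MyTracer(Tracer):
    def _module_getattr(self, attr, attr_val, parameter_proxy_cache):
        # shared attribute-handling logic would live here
        return attr_val

    # PyTorch >= 1.13 calls `getattr` instead of `_module_getattr`
    def getattr(self, attr, attr_val, parameter_proxy_cache):
        return self._module_getattr(attr, attr_val, parameter_proxy_cache)
```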
Fixes: https://github.com/pytorch/pytorch/issues/84840 (@pbelevich) | 09-28-2022 15:26:42 | 09-28-2022 15:26:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Chillee |
transformers | 19,232 | closed | Cast TF generate() inputs | Enable TF `generate()` to support any integer dtype by adding casts at the beginning of the main method. | 09-28-2022 13:26:54 | 09-28-2022 13:26:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante Let me know if you're okay with the slightly cheeky `-1`, and other than that I'm happy to merge whenever!<|||||>haha the cheeky `-1` is fine for me.
I do have one additional nit: can it be moved to after the code in step `0`? The idea would be to run a quick validation of arguments and classes before doing any actual work. (My bad, I should have been more precise in my prior comment 🤦 )<|||||>@gante Done! The failing tests are hub server-side issues.<|||||>Thanks 🙏 |
transformers | 19,231 | closed | Input shape fixed at 1x5 when converting transformers to tflite | ### System Info
Transformers version: 2.8.2
Python version: 3.7.14 on linux
Platform: Linux (Google Colab)
### Who can help?
@patil-suraj, @patrickvonplaten, @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
I am following this tutorial written by the Hugging Face team to convert GPT2 to tflite: [https://towardsdatascience.com/on-device-machine-learning-text-generation-on-android-6ad940c00911](https://towardsdatascience.com/on-device-machine-learning-text-generation-on-android-6ad940c00911)
As per the tutorial, the generated tflite file should have an input shape of 1x64. However, the input shape turns out as 1x5. There is a Google Colab notebook linked in the tutorial that you can refer to: [https://colab.research.google.com/drive/18JPzizAH995pd0iFWx4Xdf-sqjmZwHUD](https://colab.research.google.com/drive/18JPzizAH995pd0iFWx4Xdf-sqjmZwHUD)
This is the script that was used in the tutorial for conversion (this script is also in the notebook):
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
print(model.inputs)
print(model.outputs)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("gpt2-fp16.tflite", "wb").write(tflite_model)
```
Notice in the script that the input shape of 1x64 is defined at the beginning
> input_spec = tf.TensorSpec([1, 64], tf.int32)
> model._set_inputs(input_spec, training=False)
However, the input shape of the generated tflite is 1x5.
The input shape can be checked using a website like [Netron](https://netron.app) or by running the following code:
```
import numpy as np
import tensorflow as tf
from PIL import Image
from os import listdir
from os.path import isfile, join
from random import choice, random
interpreter = tf.lite.Interpreter(model_path="gpt2-fp16.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
print(f"Required input shape: {input_shape}")
output_shape = output_details[0]['shape']
print(f"Required output shape: {output_shape}")
```
It's not just GPT2 that produces an input shape of 1x5. I also tried converting t5-small to tflite and got the same input shape of 1x5. The tflite files for [GPT2 on Hugging Face](https://huggingface.co/gpt2/tree/main) have an input shape of 1x64, though.
The input to GPT-2 can be up to 1024 tokens, and yet the token context length is somehow fixed at 5. A similar issue is present on [StackOverflow](https://stackoverflow.com/questions/67252208/tf-savedmodel-has-fixed-input-size-after-conversion-of-gpt-2-to-onnx-and-tf-js) where the user exported GPT2 as a TF SavedModel and then further to ONNX and TF.js. In both cases, the input shape was 1x5.
I also tried performing the conversion with TFLite Converter v1 API as suggested [here](https://github.com/tensorflow/tensorflow/issues/42873#issuecomment-685190449):
```
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
saved_model_dir, input_arrays=['inputA'], input_shapes={'inputA': [1, 640, 640, 1]})
```
However, using the v1 API still produced the 1x5 input shape. There was another suggestion, [here](https://github.com/tensorflow/tensorflow/issues/30180#issuecomment-505959220), to convert the model to SavedModel first and then set the input shape, followed by calling concrete_functions
```
model = tf.saved_model.load(export_dir)
concrete_func = model.signatures[
tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([None, 1280, 720, 3])
converter = TFLiteConverter.from_concrete_functions([concrete_func])
...
```
When using the concreate_functions solution to set the input shape, I got the following error:
> _InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 5 and 64. Shapes are [?,5] and [1,64]._
The error is resolved if I use:
`concrete_func.inputs[0].set_shape([1,5])`
I could not check the input shape accepted by the saved model but from the error above we can get an idea that the SavedModel also uses the 1x5 shape.
I used this code to save the model:
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
model.save('gpt2')
```
Can someone suggest how I can set the input shape to 1x64? Thanks for your time, I appreciate it! :)
### Expected behavior
The input shape of the generated tflite file should be 1x64 because that's what we are explicitly defining it as. However, both in the cases of T5 and GPT2, the input shape does not change from 1x5 | 09-28-2022 12:51:30 | 09-28-2022 12:51:30 | I wonder if @gante @Rocketknight1 have some insights<|||||>Yes! There are some workarounds you could use @androbada525, such as saving the model as `SavedModel` and making your TFLite model with the from saved model constructor instead, which will give you the correct signature.
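For anyone following along, a hedged sketch of that SavedModel route (paths are illustrative, and whether the resulting signature is dynamic may depend on the TF/transformers versions):
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("gpt2")
tf.saved_model.save(model, "gpt2_saved_model")
converter = tf.lite.TFLiteConverter.from_saved_model("gpt2_saved_model")
tflite_model = converter.convert()
open("gpt2.tflite", "wb").write(tflite_model)
```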
However, `from_keras_model` should also work, and we have a solution for that exact issue in development right now. I'll ping you in this issue when it's ready!<|||||>Hi @androbada525, this should now be fixed. However, to use the fix you will need to install `transformers` from `main` until we release the next version. To do that, replace `pip install transformers` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. After our next release you can change your code back to just `pip install transformers`.
If this doesn't resolve your problem, feel free to comment and re-open this issue! |
transformers | 19,230 | closed | Wav2Vec2CTCTokenizer does not add blank token between repeated letters | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.19.11-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.0
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten, @anton-l
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When encoding/decoding with `Wav2Vec2CTCTokenizer`, I expect blank tokens between repeated letters. If they are left out, a model will never be able to learn to correctly spell any word with repeated letters.
```python3
import transformers
tokenizer = transformers.Wav2Vec2CTCTokenizer.from_pretrained(
"facebook/wav2vec2-base-960h"
)
transcription = "ee".upper()
input_ids = tokenizer(text=transcription)['input_ids']
print(input_ids)
print(tokenizer.decode(input_ids))
```
Output:
```
> [5, 5]
> E
```
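For context, a small illustration of the CTC-style collapse that produces this output (not the tokenizer's actual code): consecutive repeats are merged first, then the blank id (0 here) is dropped.
```python3
import itertools

def ctc_collapse(ids, blank_id=0):
    deduped = [key for key, _ in itertools.groupby(ids)]
    return [i for i in deduped if i != blank_id]

print(ctc_collapse([5, 5]))     # [5]    -> "E"
print(ctc_collapse([5, 0, 5]))  # [5, 5] -> "EE"
```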
### Expected behavior
Output should be
```
> [5, 0, 5]
> EE
``` | 09-28-2022 12:33:54 | 09-28-2022 12:33:54 | |
transformers | 19,229 | closed | Clamping hidden state values to allow FP16 | # What does this PR do?
Following the discussion in [#9295](https://github.com/huggingface/transformers/issues/9295) and the solution proposed by [#9487](https://github.com/huggingface/transformers/pull/9487).
Implement the solution that enables FP16 for LongT5 models.
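For illustration, a hedged sketch of the clamping pattern from the T5 fix (#9487), not necessarily this PR's exact diff:
```python
import torch

hidden_states = torch.full((2, 4), 70000.0).to(torch.float16)  # overflows to inf in fp16
if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
    clamp_value = torch.finfo(hidden_states.dtype).max - 1000
    hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
print(hidden_states)  # finite values close to the fp16 maximum
```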
Who can review:
@patrickvonplaten, @patil-suraj | 09-28-2022 11:41:46 | 09-28-2022 11:41:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, it seems that I have a test error; however, I didn't change the code that is failing.
Does anyone know how I can pass these tests?<|||||>Thanks @SSamDav.
And sorry, I forgot to mention in order to avoid the test failure you saw yesterday, you can rebase the working branch on `main`. (If it still appears in the new run)<|||||>Ok for me if we know that it helps for inference. In T5 it didn't really help for training at all in the end, so if this was implemented to enable training in fp16, I'm not sure it's a good idea (I think @patil-suraj won't have time to look into it btw).
Also cc @ArthurZucker here just FYI<|||||>Since @patrickvonplaten prefers to have an issue where this PR will solve, I think we are not going to merge this PR at this moment. Let's see if there will be such issues reported for `LongT5` in the future. We can make an investigation and decide if to re-open/merge this PR (or with a different fix). WDYT? cc @younesbelkada @ArthurZucker .
----
And regarding `nan`: My though is that it's most likely coming from the sequences with all `-inf` after adding the mask to the attention scores, and `nan` after `softmax`, like what we observed in OPT or Bloom recently, where we provided a fix `as close as to` where `nan` happens. (However, in the case of `T5`, a clamp is done `T5LayerFF` which is not attention-related).<|||||>I my tests when I run a finetuned version of the `google/long-t5-tglobal-base` in FP16 I got `Nan` in the forward step, I could check if the values com from the `LongT5LayerFF`.
<|||||>> I my tests when I run a finetuned version of the `google/long-t5-tglobal-base` in FP16 I got `Nan` in the forward step, I could check if the values com from the `LongT5LayerFF`.
Hi, so it is running the inference, right? Is that finetuned checkpoint uploaded to Hub?<|||||>> Hi, so it is running the inference, right?
Yes
> Is that finetuned checkpoint uploaded to Hub?
No, it was trained in confidential data.<|||||>> > Is that finetuned checkpoint uploaded to Hub?
>
> No, it was trained in confidential data.
Got it. However, it would be really nice if we have a public available checkpoint (on another dataset) that can show the issue and the effect of the fix. I understand that it may not easy to obtain another such checkpoint - and potentially time consuming.
@patrickvonplaten Any further comment?<|||||>Hi!
I second what @ydshieh said, probably the root cause of this is happening inside the attention score computation as it has been observed for BLOOM and OPT - maybe it's worth investigating a bit before merging!
As a simple test, we could try to reproduce what has been done in #18057 & #17437 and see if this fixes the initial issue<|||||>Hey @SSamDav,
If the PR as is now solves your problem for **inference** - it's good for me to merge! I don't think it'll fix problems with fine-tuning though<|||||>Hey @patrickvonplaten, good thanks for the help!
|
transformers | 19,228 | closed | Update Past CI report script | # What does this PR do?
To make the fix process easier:
- A single file `errors.json` (instead of 2 files) that contains `line of error`, `error`, `failed test` and `job_link` together for each element. See below.
- The table for reporting on the GitHub page gets a new `status` column, in order to track the progress.
Elements in `errors.json` now look like this:
```python
[
"/transformers/src/transformers/configuration_utils.py:642",
"OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for this model name. Check the model page at 'https://huggingface.co/bigscience/bloom-350m' for available revisions.",
"tests/models/bloom/test_modeling_bloom.py::BloomModelTest::test_batch_generation",
"https://github.com/huggingface/transformers/actions/runs/3145556680/jobs/5112996392"
],
```
Having a job link attached to each element means we can click or copy/paste to go to the job run page for the full traceback. However, this information is not available yet. ~~I will try to obtain them~~ #19235 is for this goal. | 09-28-2022 11:40:01 | 09-28-2022 11:40:01 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 19,227 | closed | Adding links to pipelines parameters documentation |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Improving documentation: adding links from the text generation pipelines documentation to a more complete list of available parameters, based on the suggestion in this issue: https://github.com/huggingface/transformers/issues/19038#issuecomment-1259592359
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil or any other suitable reviewer
| 09-28-2022 11:14:35 | 09-28-2022 11:14:35 | Hi, in order to pass the checks you might want to run
```python
pip install transformers[dev]
make fixup
```
Which should fix the docs for you.<|||||>I get the following error when calling `make fixup`
```
fatal: Not a valid object name main
Traceback (most recent call last):
File "/Users/projects/transformers/utils/get_modified_files.py", line 27, in <module>
fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8")
File "/Users/.pyenv/versions/3.10.4/lib/python3.10/subprocess.py", line 420, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/Users/.pyenv/versions/3.10.4/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'merge-base', 'main', 'HEAD']' returned non-zero exit status 128.
No library .py files were modified
python utils/custom_init_isort.py
python utils/sort_auto_mappings.py
doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
make: doc-builder: No such file or directory
make: *** [extra_style_checks] Error 1
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Hi, in order to pass the checks you might want to run
>
> ```python
> pip install transformers[dev]
> make fixup
> ```
>
> Which should fix the docs for you.
Hi @Narsil it's done. Feel free to approve the PR if you're happy, thanks<|||||>@sgugger any idea why the tests are not passing ?<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>> It seems there is an issue with your CircleCI permissions, the tests won't run.
> Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?
I'm unable to follow the guidance as when I click on the last link
`5.` Refresh permissions at https://app.circleci.com/settings/user
I get the following message
> **Something Unexpected Happened**
> This is certainly annoying, our apologies. Maybe a refresh would help. If that doesn't work, you may not have permissions to view this page or you may have the incorrect URL.
Can these checks be run on your end somehow or must they be run from my GitHub account?<|||||>@sgugger
You can see the error I get here https://app.circleci.com/pipelines/github/AndreaSottana/transformers/4/workflows/49a7c689-e6b8-430b-b324-32968f9a0962/jobs/42
```
#!/bin/bash -eo pipefail
python utils/tests_fetcher.py | tee tests_fetched_summary.txt
Traceback (most recent call last):
File "utils/tests_fetcher.py", line 689, in <module>
if not diff_with_last_commit and not repo.head.is_detached and repo.head.ref == repo.refs.main:
File "/home/circleci/.pyenv/versions/3.7.12/lib/python3.7/site-packages/git/util.py", line 1083, in __getattr__
return list.__getattribute__(self, attr)
AttributeError: 'IterableList' object has no attribute 'main'
Exited with code exit status 1
CircleCI received exit code 1
```
I have no idea where this comes from as my PR only edited some documentation. This whole process doesn't really make life easier for those who just want to make quick contributions to the repository<|||||>I have never seen that error before, will report this to CircleCI to see if they have a better idea of what went wrong. In any case, the tests are not needed here, the quality checks are the ones I wanted to make sure were passing before merging.
Just waiting for the doc build job and will merge after! |
transformers | 19,226 | closed | Replace instances of tf.int32 with tf.int64 in generation code | null | 09-28-2022 11:05:28 | 09-28-2022 11:05:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,225 | closed | BART model generates drastically worse output when labels are passed to forward method | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.10.4
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (but same issue appears even when running on CPU)
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
According to [BART documentation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1341) adding the labels to the forward pass should result in the loss also being returned in the output `Seq2SeqLMOutput` object. However, this also seems to have the very bad side effect of drastically degrading output's performance and I cannot understand why this would be the case. I've used an example below taken from the CNN/DailyMail dataset.
```python3
data = {"input": """March 14 is my favorite day to be a nerd. Across the country, math geeks in museums, schools, private groups and elsewhere gather to celebrate the number pi, approximately 3.14. That's why March 14 -- 3-14 -- is Pi Day. What's more, Albert Einstein was born on this day. A quick refresher: Pi is defined as the distance around a perfect circle, or the circumference, divided by the distance across it, or the diameter. It is also involved in calculating the area of a circle, the volume of a sphere, and many other mathematical formulas you might need in the sciences. Throughout history, people have been captivated by this number because there is no way to calculate it exactly by a simple division on your calculator. What's more, its digits go on infinitely, without any pattern in the numbers. 3.1415926535897932 ... etc. Even that many digits are more than most people would need for everyday use, but some folks have been inspired to memorize thousands of digits of pi, or even use the digits to create poetry or music. On Pi Day, one number 'reeks of mystery'. Math may be scary, but pi is not -- as evidenced by the widespread revelry on Pi Day. One might even say -- gasp! -- it's cool to like pi these days. Even the House of Representatives supported the designation of March 14 as National Pi Day in 2009. In countries where the day is written before the month, Friday is 14-3, which looks less like pi. "And so Pi Day is an acquired taste," mathematician Jonathan Borwein, at the University of Newcastle in Australia, said in an e-mail. Conveniently, "pi" sounds like "pie," and pies are round. You could celebrate Pi Day in a casual way by grabbing a slice of pastry, or pizza. If you're in enrolled in school, your math class or math department might be doing something special already. But if you happen to live in a particularly pi-happy place, you might be able to take part in some larger-scale, pi-inspired activities. Where Pi Day began. If you want to go where the day is said to be "invented," look no further than San Francisco's Exploratorium. Larry Shaw, who worked in the electronics group at the museum, began the tradition in 1988. Last year was Pi Day's 25th anniversary there. Pi Day began as a small gathering with mostly museum staff. Now it's a public pi extravaganza featuring a "Pi procession," whose attendees get a number -- 0 to 9 -- and line up in the order of pi's digits: 3.14159265 ... you get the idea. The parade ends at the "pi shrine" -- a pi symbol with digits spiraling around it embedded in the sidewalk, which was unveiled last year. For those who can't attend in person, the Exploratorium has a Second Life Pi Day event that includes "irrational exhibits, fireworks, cheerleaders, music, and dancing." The museum also lists a bunch of educational activities to teach about the concept of pi. On Pi Day, is 'pi' under attack? Where Einstein lived. On the opposite coast, the leafy university town where Albert Einstein spent the last 22 years of his life is showing community-wide exuberance for pi. Princeton, New Jersey, kicks off Pi Day weekend on Thursday night with a reading by physicist Charles Adler, then heads into a full day of activities on Friday, including a walking tour of Einstein's neighborhood and a pizza pie-making contest. The pie-eating contest takes place at McCaffrey's supermarket, while an Einstein look-alike competition will match mustaches and wild gray hair at the Princeton Public Library. 
Pi fans who have been spending the last year memorizing digits can show off and compete at the library, where the winner among 7- to 13-year-olds can take home a cool pi-hundred (That is, $314.15). The Historical Society of Princeton will have an Einstein birthday party. Tetsuya Miyamoto, inventor of the KENKEN puzzle, will speak at the library as well. """,
"output": "March 14 is my favorite day to be a nerd, because in museums, schools, private groups and elsewhere peoplegather to celebrate the number pi, approximately 3.14"}
```
Using the example above, run the following code
```python3
import pandas as pd
import torch
from datasets import Dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from torch.utils.data import DataLoader
train_dataset = Dataset.from_pandas(pd.DataFrame(data=[data]))
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-cnn')
def preprocess_function(dataset):
model_inputs = tokenizer(
dataset["input"], max_length=1024, truncation=True, padding='do_not_pad'
)
# Set up the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(
dataset["output"], max_length=1024, truncation=True, padding='do_not_pad'
)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_all_data = train_dataset.map(preprocess_function, batched=True)
tokenized_all_data = tokenized_all_data.remove_columns(['input', 'output'])
train_dataloader = DataLoader(tokenized_all_data)
for sample in train_dataloader:
input_ids = torch.tensor(sample['input_ids'])
attention_mask = torch.tensor(sample['attention_mask'])
labels = torch.tensor(sample['labels'])
labels = torch.cat((labels, (-100 * torch.ones(1024 - labels.shape[0], dtype=int))), axis=0)
model = AutoModelForSeq2SeqLM.from_pretrained('facebook/bart-large-cnn')
model.eval()
with torch.no_grad():
output_1 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0), labels=labels.unsqueeze(0))
output_2 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0))
```
Now check the output text (assume greedy decoding for now just for quick visualisation, although better options could be used with beam search of course)
- Output 1
```python3
print(tokenizer.decode(output_1['logits'].squeeze().argmax(axis = 1)))
```
```MarchMarch 14 is Pi favorite day to be a nerd. says it math, schools, private groups and elsewhere, celebrateather to celebrate pi number pi. approximately 3.14.......gngngngggneralgngngngngngnggggggngngngngngnigeng.igenigenigengi.igenigenigenigenigenigenigenigenigenbergbergigenigenigengiangiangiangiangiangiangiangiangiangiangian688giangianigengiangiangiangiangiangian.giangiangiangianindaindaivalentindagiangianindaivalentificent,,,,indainda,..........,..,.,..........igenangan...angananganigen..angangigiigengiigengigigigiigengianigengiangiangiangiangiangiangiangiangiangiangiangianigiigigianigigiangigiangiangianigiigiigiigiivalentivalentivalentgiangiangiangiangiangiangiangiangianivalentivalentivalentivalentgiangiangiangiangiangiangiangiangianivalentgianivalentivalentivalentivalentivalentivalentgiangiangiangiangiangiangianivalentuperuperivalentgiangiangiangiangiangiangiangian,giangiangiangianuperiberigenigen , igenuperuper uperuperuperuperuperuperuperuperuperuperuperuperuperuper uper uper uperuperuperuperuperiberiberiberiberuperiberiberiberiberiberiberiberiberiberiberiberiberiber..gigi.........iber.giiberiberiberiberiberivalentiberiberiberiberiberbergiberiberbergbergbergbergbergbergbergbergbergbergbergbergiberiberuperuperuperiberiberiberuperuperuperuperiberuperuperiberindauperindaindaindainda,,,,,,,,,,,,.,,,,,,,,,,,,,,,,,,,umberumber,umberumber,,,,,gigiumber,umberuperuper,,,,,..,,giigg,............................,,,,,,,,,,.,,,,,.,,,,,,,,,,,,,,,,,,,gigigigigi.</s>.......................................................itableitable..........itableitable...gi.....trusttrusttrusttrust.........trust.....squsqu..................squ.................................................................gi...............................,......................................................................................................................................................................................................................................................... and...............</s> and. and and and and and and and and and..- and and and and and and and and and and and and and...```
- Output 2
```python3
print(tokenizer.decode(output_2['logits'].squeeze().argmax(axis = 1)))
```
```MarchMarch 14 is Pi favorite day to be a nerd. Pi the country, math geeks gather museums, schools, private groups and elsewhere gather to celebrate the number pi. approximately 3.14. In's why March 14 -- 3-14 -- is Pi Day.</s>'s more, Albert Einstein was born on this day.</s> quick refresher: Pi is defined as the distance around a perfect circle, or the circumference. divided by the distance across it. or the diameter.</s> is also involved in calculating the area of a circle, the volume of a sphere, and many other mathematical formulas. might need in the sciences.</s> history, people have been captivated by this number because there is no way to calculate it exactly by a simple division on your calculator.</s>'s more, its digits go on infinitely, without any pattern in the numbers.</s>.1415926535897932... etc.</s> that many digits are more than most people would need for everyday use. but some folks have been inspired to memorize thousands of digits of pi. or even use the digits to create poetry or music.</s> Pi Day, " might'reeks of mystery', Math may be scary, but pi is not -- as evidenced by the widespread revelry on Pi Day.</s> might even say -- gasp! -- it's cool to like pi these days.</s> the House of Representatives supported the designation of March 14 as National Pi Day in 2009.</s> countries where the day is written before the month, Friday is 14-3, which looks less like pi.</s>pi so Pi Day is an acquired taste," mathematician Jonathan Borwein, at the University of Newcastle in Australia, said in an e-mail.</s>veniently, "pi" sounds like "pie," and pies are round.</s> could celebrate Pi Day in a casual way by grabbing a slice of pastry, or pizza.</s> you're in enrolled in school, your math class or math department might be doing something special already.</s> if you happen to live in a particularly pi-happy place, you might be able to take part in some larger-scale, pi-inspired activities.</s> Pi Day is.</s> you want to go where the day is said to be "invented," look no further than San Francisco's Exploratorium.</s> Shaw, who worked in the electronics group at the museum, began the tradition in 1988.</s> year was Pi Day's 25th anniversary there.</s> Day began as a small gathering with mostly museum staff.</s> it's a public pi extravaganza featuring a "Pi procession," whose attendees get a number -- 0 to 9 -- and line up in the order of pi's digits: 3.14159265... you get the idea.</s> parade ends at the "pi shrine" -- a pi symbol with digits spiraling around it embedded in the sidewalk, which was unveiled last year.</s> those who can't attend in person, the Exploratorium has a Second Life Pi Day event that includes "irrational exhibits, fireworks, cheerleaders, music, and dancing"</s> winner also lists a bunch of educational activities to teach about the concept of pi.</s> Pi Day, the 'pi' under attack?</s> Einstein spent.</s> the opposite coast, the leafy university town where Albert Einstein spent the last 22 years of his life is showing community-wide exuberance for pi.</s>, New Jersey, kicks off Pi Day weekend on Thursday night with a reading by physicist Charles Adler. then heads into a full day of activities on Friday. including a pizza tour of Einstein's neighborhood. a pizza pie-making contest.</s> winner-eating contest takes place at McCaffrey's supermarket. 
while an Einstein look-alike competition will match mustaches and wild gray hair at the Princeton Public Library.</s> fans who have been spending the last year memorizing digits can show off and compete at the library, where the winner among 7- to 13-year-olds can take home a cool pi-hundred (That is, $314.15).</s> Historical Society of Princeton will have an Einstein birthday party.</s>etsuya Miyamoto, inventor of the KENKEN puzzle, will speak at the library, well.</s> ```
### Expected behavior
Output 1 and Output 2 above are completely different, and I would expect them to be the same. Output 1 is far worse than Output 2.
Why does passing or not passing the labels (see snippet below taken from code above)
```python3
with torch.no_grad():
    output_1 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0), labels=labels.unsqueeze(0))
    output_2 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0))
```
cause such a huge difference in the generated logits, when I would expect the only difference to be that the loss is returned when the labels are passed?
On a side note, Output 2 also contains a large number of end-of-sequence tokens `</s>`. Are we supposed to ignore everything after the first `</s>` has been generated? | 09-28-2022 10:32:53 | 09-28-2022 10:32:53 | @ArthurZucker could you take a look here? <|||||>Hi @ArthurZucker did you manage to take a look by any chance?
I tried to do some debugging for the issue above and found that this _may_ be a contributing cause:
As mentioned in this [code comment](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1206), unlike other seq2seq models, Bart automatically creates `decoder_input_ids` from `input_ids` if no `decoder_input_ids` are provided. However, if the labels are provided, then the labels become the `decoder_input_ids` instead (see [these lines of code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1350)).
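In rough pseudo-form, the behaviour I'm describing looks like this (a paraphrase for illustration only, not the exact library code):
```python
# Illustrative paraphrase of the label-handling described above:
if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
    decoder_input_ids = shift_tokens_right(
        labels, config.pad_token_id, config.decoder_start_token_id
    )
```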
As far as I can see, this seems to also happen regardless of whether the model is in train or eval mode, whereas in eval mode the labels should not be fed into the decoder as only the trained model and the input should be used. This would explain why we get different outputs depending on whether I pass the labels to the forward pass or not, whereas my expectation would be that feeding the labels while in eval mode should only have the additional effect that the loss is included in the `Seq2SeqLMOutput` object, but it should not affect the logits.<|||||>Hey! Sorry gonna have a look tomorrow, very nice debugging 🤗 pretty sure that we would want to have the correct outputs along with the loss.
Computing the loss at evaluation is not really important, but it is informative. Is that why you are trying to do that? <|||||>Hi @ArthurZucker
The reason for me calculating the evaluation loss is because I'm implementing a loss-based curriculum learning training strategy (something re-adapted from an idea introduced in [this paper](https://aclanthology.org/2021.emnlp-main.281.pdf)).
Essentially I need to order the training samples (not the validation samples) by their loss, used as a proxy for difficulty (on the assumption that samples with higher loss are harder for the model), so as to build an ordered curriculum. However, even though I'm calculating the loss on the training dataset, I need to do it in eval mode, because I don't want the model to do any learning or backprop during this process; otherwise the order in which I feed the samples would affect the loss calculation, since the model would start learning from earlier samples and skew the difficulty scoring, which should reflect the loss of each training sample at a fixed snapshot of the model in time. This is not part of the model training; it is done beforehand (and potentially at regular intervals during training) to rank the samples in order of difficulty. Hence why I need to do it in eval mode and stumbled upon the issue mentioned above.<|||||>Ok! I got it.
You were also wondering about the `</s>` tokens: you can skip them when decoding by using `tokenizer.decode(..., skip_special_tokens=True)`.
About your issue, I checked, and I am pretty sure we want to change the current behavior, but it might not be backward compatible. A possible update would be to remove the overriding of the `decoder_input_ids` based on the `labels`. I don't really know where that comes from, but if the results are clearly better (as they are now) we might want to do this. Let me check that and come back to you!
In the meantime, I found a quick fix:
```python
from transformers.models.bart.modeling_bart import shift_tokens_right  # import needed for shift_tokens_right used below

train_dataloader = DataLoader(tokenized_all_data)
for sample in train_dataloader:
    input_ids = torch.tensor(sample['input_ids'])
    attention_mask = torch.tensor(sample['attention_mask'])
    labels = torch.tensor(sample['labels'])
    labels = torch.cat((labels, (-100 * torch.ones(input_ids.shape[0] - labels.shape[0], dtype=int))), axis=0)

model = AutoModelForSeq2SeqLM.from_pretrained('facebook/bart-large-cnn')
model.eval()
with torch.no_grad():
    output_1 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0), labels=labels.unsqueeze(0), decoder_input_ids=shift_tokens_right(input_ids.unsqueeze(0), model.config.pad_token_id, model.config.decoder_start_token_id))
    output_2 = model(input_ids=input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0))
```
Tell me if that works for you?!
<|||||>Hi @ArthurZucker
Thanks so much for providing an interim solution, I can confirm your fix works fine and returns the correct loss I was looking for. Glad that we uncovered this. Thanks also for confirming how to skip `</s>` tokens. Feel free to close the issue or keep it open for future work at your discretion; in the meantime I will use the solution you provided.<|||||>Closing this as it makes more sense to keep the current behavior |
transformers | 19,224 | closed | [Don't merge] check cache in CircleCI | # What does this PR do?
[Don't merge] check cache in CircleCI | 09-28-2022 07:42:29 | 09-28-2022 07:42:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,223 | closed | Fix cache names in CircleCI jobs | # What does this PR do?
Fix cache names in CircleCI jobs.
According to [this doc](https://circleci.com/docs/caching#restoring-cache), the cache names in `CircleCI` jobs should follow the patterns:
```yaml
- restore_cache:
    keys:
      - v0.5-torch-{{ checksum "setup.py" }}
      - v0.5-torch-
- save_cache:
    key: v0.5-torch-{{ checksum "setup.py" }}
    paths:
      - '~/.cache/pip'
```
- The 2nd key in `restore_cache` should be the 1st key without `{{ checksum "setup.py" }}`
- The key in `save_cache` should be the 1st key in `restore_cache`
We need the trailing `-` in the 2nd key in `restore_cache` to avoid collisions between `v0.5-torch-` and `v0.5-torch_and_tf-`, etc.
(Thanks @stas00 for finding this issue, and @gante for the comment on Slack) | 09-28-2022 07:34:44 | 09-28-2022 07:34:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,222 | closed | ALIBI position embedding support for other models ( BERT, ELECTRA ) | ### Feature request
Since BLOOM uses ALiBi for its position embeddings, are there any plans to add ALiBi to other existing models such as BERT, ELECTRA, etc.?
ALiBi has so far been shown to [extrapolate to sequence lengths beyond the training sequence length](https://arxiv.org/pdf/2108.12409.pdf) (figure 4). I think this idea should be added to existing architectures, since existing models such as BERT and ELECTRA already support more than one type of position embedding strategy.
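For reference, a minimal sketch of how the ALiBi attention bias could be built (this assumes the geometric slope schedule from the paper with a power-of-two number of heads, and uses a symmetric `-|i-j|` penalty as one possible choice for a bidirectional encoder):
```python
import torch

def get_alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric slope schedule from the ALiBi paper, assuming num_heads is a power of 2.
    start = 2 ** (-8 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def build_alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # One slope per head; the penalty grows linearly with the query-key distance.
    slopes = get_alibi_slopes(num_heads)                        # (num_heads,)
    positions = torch.arange(seq_len)
    distance = (positions[None, :] - positions[:, None]).abs()  # (seq_len, seq_len)
    return slopes[:, None, None] * -distance                    # (num_heads, seq_len, seq_len)

# The bias is added to the attention scores before the softmax,
# instead of adding learned/sinusoidal position embeddings to the inputs.
bias = build_alibi_bias(num_heads=8, seq_len=16)
```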
### Motivation
If existing models such as BERT, RoBERTa or ELECTRA supported an ALiBi position strategy, this would encourage the community to pretrain BERT-style models that handle sequences longer than 512 tokens. One of the largest impacts would be on long-form input sequences in summarization tasks.
### Your contribution
I have implemented ALIBI mechanism on my own private transformer fork and could submit a PR if this idea is accepted. | 09-28-2022 01:24:09 | 09-28-2022 01:24:09 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@theblackcat102 I would be quite interested in that. Could you also link your fork?
I would be quite interested to train BERT et al for longer sequences. |
transformers | 19,221 | closed | Updated README and docs with ESM-1b and ESM-2 separately | # Improve ESM documentation
We split out the papers for ESM-1b and ESM-2 separately, and improve the documentation of esm model_doc.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@Rocketknight1
| 09-27-2022 21:45:23 | 09-27-2022 21:45:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @tomsercu! I checked with other team members and we prefer to have only one entry in the list per model class (which would include ESM-1b as well as ESM-2), but it's totally okay to have a longer entry that includes the citations for ESM-1b and ESM-2, and optionally ESM-1v as well.<|||||>Gotcha that makes sense @Rocketknight1 , updated to a single entry in all 5 places. Lmk if this is too much :D <|||||>@tomsercu Don't worry, we have some scripts to handle the copies, so you only have to worry about the English one! I'll run them now and see if I can get these tests to pass.<|||||>Remaining issues seem to just be from the specific commit this was branched from - will merge into my PR and hopefully they'll go away. Thanks for this! |
transformers | 19,220 | closed | Setting tokenizer's return_offsets_mapping to True runs ~10x slower than when set to False | ### System Info
- `transformers` version: 4.19.0
- Platform: macOS-12.6-x86_64-i386-64bit
- Python version: 3.9.7
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
from datasets import Dataset, load_dataset
import time
# checkpoint = 'bert-base-uncased' # same issue despite different checkpoints
checkpoint = 'roberta-base'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
data = load_dataset("rotten_tomatoes", split='train')
def tokenize_fn(ex):
    return tokenizer(ex['text'])

def tokenize_fn_w_offsets(ex):
    return tokenizer(ex['text'],
                     return_offsets_mapping=True
                     )

def tokenize_fn_w_other(ex):
    return tokenizer(ex['text'],
                     return_attention_mask=True,
                     return_special_tokens_mask=True,
                     return_token_type_ids=True,
                     return_offsets_mapping=False)
start = time.time()
tokenized_data = data.map(tokenize_fn, batched=True, load_from_cache_file=False)
end = time.time()
wall = end - start
print(f'Wall time simple: {wall:.4f} seconds') # took ~0.22 seconds
start = time.time()
tokenized_data_w_offsets = data.map(tokenize_fn_w_other, batched=True, load_from_cache_file=False)
end = time.time()
wall = end - start
print(f'Wall time with other items returned: {wall:.4f} seconds') # took ~0.28 seconds
start = time.time()
tokenized_data_w_offsets = data.map(tokenize_fn_w_offsets, batched=True, load_from_cache_file=False)
end = time.time()
wall = end - start
print(f'Wall time with offsets: {wall:.4f} seconds') # took ~2.5 seconds
```
### Expected behavior
I wouldn't expect the wall time to increase by so much when setting return_offsets_mapping to True. | 09-27-2022 19:14:00 | 09-27-2022 19:14:00 | cc @Narsil as well<|||||>Hi @tristinb !
Thank you for reporting this issue. I just tested your code snippet on a [google colab ](https://colab.research.google.com/drive/1OV1ICDidCPKFQ9O6XygTzKQKDCMEK8-m?usp=sharing) (with the latest available versions of the libraries) and I observe much smaller time differences, respectively the code takes: 0.9231 seconds, 0.8868 seconds and 1.1825 seconds.
Eventually, it might be worth trying to rerun your code with the latest versions of the libraries (`transformers==4.22.2 datasets==2.5.1 tokenizers==0.12.1`) to see if this reduces the time difference for you too.<|||||>Hi @SaulLu!
Thanks for your response. It looks like bumping `datasets==2.5.1` from 2.4.0 fixed the issue even with an older version of the transformers library. Thanks again!<|||||>So glad this solves your problem! Let me close this issue |
transformers | 19,219 | closed | Added tests for yaml and json parser | # What does this PR do?
Fixes #19116
## Before submitting
- [x] add a `parse_yaml_file` method to `HfArgumentParser` with the code above
- [x] refactor the duplicated code between `parse_json_file` and `parse_dict`, similar to the code above
- [x] add a small test of parse_yaml_file
- [x] add a small test of parse_json_file
## Who can review?
@sgugger
| 09-27-2022 18:49:45 | 09-27-2022 18:49:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Where can one find examples of yaml or json configs ? |
transformers | 19,218 | closed | [Wav2Vec2] Fix None loss in doc examples | # What does this PR do?
Doc examples in [Wav2Vec2ForPreTraining](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining.forward.example) and [Wav2Vec2ConformerForPreTraining](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForPreTraining.forward.example) produce None loss due to missing sampled_negative_indices parameter in the model. See #15232
* pass sampled_negative_indices parameter to the model to avoid getting a None loss
* The sequence length is a tensor when it should be an integer. Add .item() call to address this issue.
Fixes #15232
## Who can review?
@patrickvonplaten
| 09-27-2022 14:21:18 | 09-27-2022 14:21:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten Could you take a look at this PR for Wav2Vec2, thanks :-) |
transformers | 19,217 | closed | Fix deprecation warning for return_all_scores | # What does this PR do?
Fixes the deprecation warning for return_all_scores to reflect what the code is doing.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#19207
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-27-2022 14:09:24 | 09-27-2022 14:09:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For the quality test you can run
```
pip install transformers[dev] # or pip install -e .[dev] in the directory
make fixup
```
And push the changes.<|||||>Pinging @gante in turn, who will know better than me :-) |
transformers | 19,216 | closed | More tests for regression in cached non existence | # What does this PR do?
This PR adds more tests as a follow-up to #19206. None of those tests pass without the fixes in #19206 | 09-27-2022 12:44:31 | 09-27-2022 12:44:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,215 | closed | Translation italian: add new pipeline | ## What does this PR do?
Italian translation of doc related to the add_new_pipeline of :hugs: Transformers.
* updated _toctree.yml
* added add_new_pipeline.mdx
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
See issue: [#17459](https://github.com/huggingface/transformers/issues/17459)
@omarespejel
@sgugger
@mfumanelli | 09-27-2022 12:41:10 | 09-27-2022 12:41:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,214 | closed | ERROR: MPNetTokenizerFast' object has no attribute '_in_target_context_manager | HI Team,
Can anyone help me with the above error? I was using SentenceTransformer('paraphrase-mpnet-base-v2'). When deploying the model and running tests I got this error. | 09-27-2022 12:03:52 | 09-27-2022 12:03:52 | Also, I get this error:
HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/paraphrase-MiniLM-L6-v2 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f2cc8a5e8d0>: Failed to establish a new connection: [Errno 111] Connection refused'))<|||||>i'm getting same issue too
python : 3.8.12
keybert==0.5.0
transformers==4.22.2
full Traceback
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.8/site-packages/keybert/_model.py", line 113, in extract_keywords
keywords = self._extract_keywords_single_doc(doc=docs,
File "/usr/local/lib/python3.8/site-packages/keybert/_model.py", line 182, in _extract_keywords_single_doc
doc_embedding = self.model.embed([doc])
File "/usr/local/lib/python3.8/site-packages/keybert/backend/_sentencetransformers.py", line 53, in embed
embeddings = self.embedding_model.encode(documents, show_progress_bar=verbose)
File "/usr/local/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 161, in encode
features = self.tokenize(sentences_batch)
File "/usr/local/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 319, in tokenize
return self._first_module().tokenize(texts)
File "/usr/local/lib/python3.8/site-packages/sentence_transformers/models/Transformer.py", line 113, in tokenize
output.update(self.tokenizer(*to_tokenize, padding=True, truncation='longest_first', return_tensors="pt", max_length=self.max_seq_length))
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2482, in __call__
if not self._in_target_context_manager:
AttributeError: 'DistilBertTokenizerFast' object has no attribute '_in_target_context_manager'
```
<|||||>@Eeshvardhanshet , please share if you reached a solution for this <|||||>@Eeshvardhanshet , I think I figured it out: I was using a third-party library, [keybert](https://github.com/MaartenGr/KeyBERT), that was trying to install the latest version of transformers (4.22.2). I checked the last running deployment and found that it was using `transformers==4.21.1`. <br>
So you need to downgrade your transformers version to either 4.21.1 or the last version that ran successfully.
<|||||>@msaoudallah . You are right, I have downgraded transformers version to 4.21.1. It did work. Anyways, thanks @msaoudallah <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>>need downsizing to transformers==4.21.1
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
might be more beneficial to fix the issue
|
transformers | 19,212 | closed | T5 model predict <UNK> | config = T5Config(num_layers=12,
num_decoder_layers=12,
pad_token_id=0,
es_token_id=VOCAB_SIZE + 2,
model_parallel=True,
vocab_size=VOCAB_SIZE,
num_heads=12,
d_model=1024,
d_kv=64,
d_ff=3072,
decoder_start_token_id= VOCAB_SIZE + 1,
)
model = T5ForConditionalGeneration(config)
This is my T5 model config.
I train for 200 epochs and use it to train a translation model,
but the prediction results are `<UNK>`.
How can I use T5ForConditionalGeneration to train my custom model?
Is my config right?
| 09-27-2022 11:50:39 | 09-27-2022 11:50:39 | Hi @moseshu 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 |
transformers | 19,211 | closed | T5model result get <UNK> | ### Model description
config = T5Config(num_layers=12,
num_decoder_layers=12,
pad_token_id=0,
es_token_id=VOCAB_SIZE + 2,
model_parallel=True,
vocab_size=VOCAB_SIZE,
num_heads=12,
d_model=1024,
d_kv=64,
d_ff=3072,
decoder_start_token_id= VOCAB_SIZE_ZH + 1,
)
model = T5ForConditionalGeneration(config)
This is my T5 model config.
I train for 200 epochs and use it to train a translation model, but the result is `<UNK>`.
How can I use T5ForConditionalGeneration to train my custom model?
Is my config right?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 09-27-2022 11:27:11 | 09-27-2022 11:27:11 | |
transformers | 19,210 | closed | Fix torch.fx supports (ViT Model) | # What does this PR do?
Fixes #19209
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@michaelbenayoun
## Contents
Originally, torch.fx was not working on the ViT model, because the torch.fx proxy object does not work in if-statements.
For example,
```if num_channels != self.num_channels:```
can raise an error.
1. I replaced if-statements with torch._assert function.
2. The code below should be split into two lines.
```
if height != self.image_size[0] or width != self.image_size[1]:
```
because `torch._assert(statement1 and statement2, "error message")` will not work.
so, I wrote as below.
```
err_message = f"Input image size ({height}*{width}) doesn't match model({self.image_size[0]}*{self.image_size[1]})."
torch._assert(height == expected_height, err_message)
torch._assert(width == expected_width, err_message)
```
| 09-27-2022 07:35:35 | 09-27-2022 07:35:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I replied on the issue thread. It seems like we do not need that.<|||||>Yes, I agree with that
I didn't know that
thank you for your comment
I'll close this PR :) |
transformers | 19,209 | closed | torch.fx not working on ViT model | ### System Info
transformer version: 4.23.0.dev0
platform: windows 11, AMD64
python version: 3.7.9
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. write below code and execute
```
import torch
import numpy as np
from transformers import ViTFeatureExtractor, ViTModel
from datasets import load_dataset
from torch.fx import symbolic_trace
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
img = torch.Tensor(np.random.randn(1, 3, 224, 224))
tracing_info = {"head_mask": None, "interpolate_pos_encoding": None, "bool_masked_pos": None, "output_hidden_states": None, "output_attentions": None, "return_dict": None}
traced = symbolic_trace(model, tracing_info) # bug here
with torch.no_grad():
    outputs = model(img)
    outputs2 = traced(img)
assert torch.allclose(dict(outputs)["last_hidden_state"], outputs2["last_hidden_state"])
```
2. error traceback
```
symbolically traced variables cannot be used as inputs to control flow
File "C:\Users\NOTA2001\Desktop\abab\transformers\src\transformers\models\vit\modeling_vit.py", line 166, in forward
if num_channels != self.num_channels:
File "C:\Users\NOTA2001\Desktop\abab\transformers\src\transformers\models\vit\modeling_vit.py", line 118, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "C:\Users\NOTA2001\Desktop\abab\transformers\src\transformers\models\vit\modeling_vit.py", line 558, in forward
pixel_values, bool_masked_pos=bool_masked_pos, interpolate_pos_encoding=interpolate_pos_encoding
File "C:\Users\NOTA2001\Desktop\abab\transformers\do_something.py", line 19, in <module>
traced = symbolic_trace(model, inputs2)
```
### Expected behavior
when this issue was fixed then below code will work
```
import torch
import numpy as np
from transformers import ViTFeatureExtractor, ViTModel
from datasets import load_dataset
from torch.fx import symbolic_trace
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
img = torch.Tensor(np.random.randn(1, 3, 224, 224))
tracing_info = {"head_mask": None, "interpolate_pos_encoding": None, "bool_masked_pos": None, "output_hidden_states": None, "output_attentions": None, "return_dict": None}
traced = symbolic_trace(model, tracing_info) # bug here
with torch.no_grad():
    outputs = model(img)
    outputs2 = traced(img)
assert torch.allclose(dict(outputs)["last_hidden_state"], outputs2["last_hidden_state"])
``` | 09-27-2022 07:26:42 | 09-27-2022 07:26:42 | I am a bit surprised with this situation, as `ViTModel` is tested against `test_torch_fx_xxx` methods, which go through
https://github.com/huggingface/transformers/blob/2d956958252617a178a68a06582c99b133fe7d3d/tests/test_modeling_common.py#L772
However, I realized that `symbolic_trace` is `from transformers.utils.fx import symbolic_trace`
https://github.com/huggingface/transformers/blob/2d956958252617a178a68a06582c99b133fe7d3d/src/transformers/utils/fx.py#L1107
and not `from torch.fx import symbolic_trace`.<|||||>Just a remark: It looks like, (if we want) to fully support `from torch.fx import symbolic_trace` in the library, there are more places to be changed.<|||||>Thank you for your comment @ydshieh
I didn't know that `from transformers.utils.fx import symbolic_trace` exists.
Which do you think is better, `from torch.fx import symbolic_trace` or `from transformers.utils.fx import symbolic_trace`?
Since I am a new contributor to transformers, I would like to hear your opinion.
If you think it is better to use `from transformers.utils.fx import symbolic_trace`, then I want to close this issue.<|||||>Hi @dwlim-nota,
We actually do support `torch.fx` symbolic tracing though our custom tracer, which is supposed to handle the ViT case and what you do in your PR.
```python
from transformers.utils.fx import symbolic_trace
traced = symbolic_trace(vit_model, input_names=[ "pixel_values"])
```
Compared to the original `torch.fx.symbolic_trace` function, you need to specify which inputs the traced model must have (because our models usually support different inputs, which is not possible in fx), so that is why I specified `pixel_values` here.<|||||>thank you for your comment @michaelbenayoun
I will close this issue :) |
transformers | 19,208 | closed | Fix trainer seq2seq qa.py evaluate log and ft script | # What does this PR do?
<!--
This PR fixes the following: if evaluation tries to log eval results with `prediction_loss_only` and `logging_dir` set, they are not logged, so I changed it so that the logs are saved.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @patil-suraj | 09-27-2022 06:20:46 | 09-27-2022 06:20:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Synchronized data from the main branch. |
transformers | 19,207 | closed | Pipeline return_all_scores deprecation warning might be incorrect | ### System Info
Transformers version: 4.21.3
Python version: 3.8.10
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Load a pipeline
2. Call it using `return_all_scores=True`
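For concreteness, a minimal sketch of those two steps (the checkpoint name here is just an assumed example):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

# Deprecated argument that triggers the warning discussed here:
outputs = classifier("I love this movie!", return_all_scores=True)

# Suggested replacements: top_k=None returns all scores, top_k=1 only the best label.
all_scores = classifier("I love this movie!", top_k=None)
top_only = classifier("I love this movie!", top_k=1)
```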
### Expected behavior
Expected behaviour would be for the warning to advise using `top_k=None`, which, from what I've seen in the code, gives a similar result.
The warning states to use `top_k=1`, which does not return all scores. | 09-26-2022 21:30:03 | 09-26-2022 21:30:03 | @ogabrielluiz ,
You are correct, we could adapt the error message based on the value of `return_all_scores` (`True -> top_k=None` , `False -> top_k=1`) .
Would you be willing to open a PR on this ?<|||||>On it.
Thanks! |
transformers | 19,206 | closed | Fix cached_file in offline mode for cached non-existing files | # What does this PR do?
In offline mode, we sometimes returned the `_CACHED_NO_EXIST` constant instead of `None` for non-existing objects, since we now cache that non-existence. This PR fixes it.
Fixes #19186 | 09-26-2022 19:49:07 | 09-26-2022 19:49:07 | Thank you for such a quick fix!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,213 | closed | Non-descriptive error when initiating gpt2 tokenizer without internet | ```
File "C:\[path]\api\utilities\tokenizers.py", line 4, in <module>
gpt2 = GPT2TokenizerFast.from_pretrained("gpt2")
File "C:\[path]\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1750, in from_pretrained
commit_hash = extract_commit_hash(resolved_vocab_files[file_id], commit_hash)
File "C:\[path]\venv\lib\site-packages\transformers\utils\hub.py", line 225, in extract_commit_hash
search = re.search(r"snapshots/([^/]+)/", resolved_file)
File "C:\Program Files\Python310\lib\re.py", line 200, in search
return _compile(pattern, flags).search(string)
TypeError: expected string or bytes-like object
```
Just started getting this error today, any idea for a workaround or a good version to rollback to? | 09-26-2022 18:38:21 | 09-26-2022 18:38:21 | Update
======
This was caused by a poor internet connection. It is not obvious that this is being caused by a bad or non-existent internet connection. Could better error logging be added here?<|||||>I took the liberty of moving this issue to `transformers` since the error seemed to have occured here.<|||||>Tagging @sgugger <|||||>Looks like a duplicate of #19186 so should be fixed by #19206 .
@tayler6000 You shouldn't have the error if you do a source install, we will also do a patch release today once we confirm the bug is fully fixed. |
transformers | 19,205 | closed | Improve DETR post-processing methods | # What does this PR do?
- Adds `post_process_object_detection`, `post_process_semantic_segmentation`, `post_process_instance_segmentation` and `post_process_panoptic_segmentation` methods with optional resizing and thresholding to filter out low probability predictions.
This PR is part of a larger effort to ensure post-processing methods have consistent naming, input arguments and output. Deprecation warnings are added to existing post-processing methods.
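For illustration, a rough usage sketch of the kind of API this adds (the variable and argument names below follow the convention described above and are assumptions, not the final signature):
```python
import torch

# `inputs`, `model` and `feature_extractor` are the usual DETR objects.
with torch.no_grad():
    outputs = model(**inputs)

results = feature_extractor.post_process_object_detection(
    outputs,
    threshold=0.9,                   # drop low-probability predictions
    target_sizes=[(height, width)],  # optionally resize boxes back to the original image size
)
# results[0]["scores"], results[0]["labels"], results[0]["boxes"]
```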
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-26-2022 14:51:24 | 09-26-2022 14:51:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts @sgugger thank you both for the feedback! Could you take a final look at the PR? If everything looks fine, I'll apply the same changes to MaskFormer and update the model cards.<|||||>> Thanks for working on this. As mentioned internally, don't hesitate to separate PRs in focused bits: here there could have been:
>
> * one PR to quickly fix the comments
> * one PR to add the post processing to DETR
> * one PR for the deprecation warnings
Makes sense, I'll split my future PRs into smaller, issue-specific ones.
> Also wondering if we should target a lower version than v5 for the removal of the methods, since this is all a bit of an experimental API.
I agree, we can target a lower version for the removal of deprecated methods. |
transformers | 19,204 | closed | Implement multiple span support for DocumentQuestionAnswering | # What does this PR do?
Extends DocumentQuestionAnswering pipeline to include support for multiple spans.
Fixes #18414
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @sgugger @Narsil | 09-26-2022 14:40:01 | 09-26-2022 14:40:01 | I still need to extend the tests to set the `max_seq_len` to be low (e.g. 10) and make sure that they work.
@NielsRogge another thing we could do in this change, or a separate one if y'all prefer, is to support multiple pages. The [DocQuery pipeline](https://github.com/impira/docquery/blob/main/src/docquery/ext/pipeline_document_question_answering.py#L227) already does this, because it's common to have multi-page documents and quite useful to be able to search across them. It's important to have this in the pipeline itself, so that you can properly handle the "no answer" score and top-k across pages. The main consideration is how we change the input shape of the pipeline to accommodate it, since we currently accept a (single) image and (optional) list of word_boxes. The options I can think of are:
- Image can also be a list of images, and then `word_boxes` can be a list of list of word boxes
- Combine image and `word_boxes` into one list of `(image, [(word, box)])` tuples
- Add a new optional argument, called pages, and use that instead of the image/word_box arguments if it's set.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>gentle nudge @NielsRogge <|||||>Hey @ankrgyl ,
Thanks for this PR !
Do you mind adding a few tests showcasing the switch to `ChunkPipeline` ? (Ideally a test that wouldn't work before and would work after this PR).
No need to add tests for ALL supported models, but at least one would go a long way.<|||||>Absolutely. Apologies for the delay -- I will update the PR within the next few days.<|||||>@Narsil apologies again for the delay. I wanted to make sure I had some time to test it thoroughly. I added tests for the two flavors of LayoutLM models, but not for Donut, since there are no spans for Donut (i.e. there is no parameter or argument I can provide that hits two different code paths with the Chunk Pipeline, at least as far as I can tell).
I found several small bugs along the way and fixed them too.<|||||>Hi @ankrgyl Thank you for this PR. We need some help for CI failure regarding this new addition 🙏
After this PR, we have 5 CI failures
```bash
FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_chunk - AssertionError: Lists differ: [{'score': 0.9974, 'answer': '1110212019', 'start': 23, [69 chars] 16}] != [{'score': 0.9967, 'answer': '1102/2019', 'start': 22, '[67 chars] 15}]
FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_donut - TypeError: forward() got an unexpected keyword argument 'page'
FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_layoutlm_chunk - AssertionError: Lists differ: [{'sc[39 chars]t': 16, 'end': 16}, {'score': 0.9998, 'answer'[31 chars] 16}] != [{'sc[39 chars]t': 15, 'end': 15}, {'score': 0.9924, 'answer'[3...
FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_pt_LayoutLMConfig_LayoutLMForQuestionAnswering_LayoutLMTokenizerFast_nofeature_extractor - IndexError: tuple index out of range
FAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_pt_LayoutLMConfig_LayoutLMForQuestionAnswering_LayoutLMTokenizer_nofeature_extractor - IndexError: tuple index out of range
```
- (For `test_large_model_pt_layoutlm_chunk`, I got different error from our CI where it is just output not equal expected value)
- The 2 tests with `_chunk` at the end are new tests. The other 3 tests worked well in previous commits before this PR
- We need to install `tokenizers==0.13.1` (as it has some new fix on its own)
Could you take a look, please 🙏? You can find more detailed information [here](https://github.com/huggingface/transformers/actions/runs/3231701081/jobs/5291537526)
<|||||>I'm so sorry about that! I will take a look ASAP. |
transformers | 19,203 | closed | `MCTCTFeatureExtractor` requires `torchaudio >= 0.10` | # What does this PR do?
In past CI with PyTorch 1.9, we got for `MCTCTFeatureExtractor`
```bash
AttributeError: module 'torchaudio.functional' has no attribute 'melscale_fbanks'
```
This PR only adds a warning in `feature_extraction_mctct`, so we have this information tracked. Let's discuss if we should skip the corresponding tests. | 09-26-2022 14:35:44 | 09-26-2022 14:35:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,202 | closed | Use repo_type instead of deprecated datasets repo IDs | # What does this PR do?
In the next release of `huggingface_hub` trying to access any repo ID with the `"datasets/"` prefix will fail. `repo_type="dataset"` must be used instead.
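For example, a download that used to rely on the `"datasets/"` prefix would now look roughly like this (the filename is chosen only for illustration):
```python
from huggingface_hub import hf_hub_download

# Old, soon-to-fail style:
# hf_hub_download(repo_id="datasets/squad", filename="dataset_infos.json")

# New style with an explicit repo_type:
path = hf_hub_download(repo_id="squad", filename="dataset_infos.json", repo_type="dataset")
```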
This PR fixes all instances except for research projects. | 09-26-2022 13:26:01 | 09-26-2022 13:26:01 | Research projects are examples we do not actively maintain and pin all dependencies to a specific version.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,201 | closed | Use `math.pi` instead of `torch.pi` in `MaskFormer` | # What does this PR do?
`MaskFormer` uses `torch.pi` which is only available in `torch >= 1.10`. The Past CI with PyTorch 1.9 gives
```bash
AttributeError: module 'torch' has no attribute 'pi'
```
This PR adds a warning if PT <= 1.9 for this model, similar in `ViltModel` https://github.com/huggingface/transformers/blob/71fc33174664738d8c8d93025ebc810180e69c20/src/transformers/models/vilt/modeling_vilt.py#L44
**Question: Should we also skip the whole test suite for `MaskFormer` depending on the torch version? The objective is to make Past CI cleaner if we ever run it again with previous torch version.** | 09-26-2022 12:47:23 | 09-26-2022 12:47:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> If it's just for `torch.pi` can we change this to `math.pi` and have the model work with PyTorch 1.9?
I can change it to `math.pi`. |
transformers | 19,200 | closed | Use `assertAlmostEqual` in `BloomEmbeddingTest.test_logits` | # What does this PR do?
`BloomEmbeddingTest.test_logits` currently uses `assertEqual` to compare 2 floats. We should instead use `assertAlmostEqual`.
Currently we don't see this test failing. However, with different PyTorch versions, we might get a test failure - this happens for Past CI (PyTorch 1.10), where we got
```bash
AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05
```
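For illustration, a minimal standalone sketch of the kind of tolerant comparison meant here (the values are taken from the error above; this is not the actual test):
```python
import unittest

class FloatComparisonExample(unittest.TestCase):
    def test_logits_value(self):
        # assertEqual would fail on these two floats; assertAlmostEqual passes within the tolerance.
        self.assertAlmostEqual(1.9311904907226562e-05, 1.9431114196777344e-05, places=6)
```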
| 09-26-2022 12:23:28 | 09-26-2022 12:23:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,199 | closed | Can't load too big text file for dataset (RAM exhausts), HELP PLEASE !!!!! | ### System Info
MacBook Pro (13-inch, M1, 2020)
Chip Apple M1
Memory 16 GB
(Also tried with Google Colab)
### Who can help?
copy of #19161
Hello All,
A bit new to HuggingFace environment.
I am trying to load a big text file for pretraining a BERT model from scratch. The size of the txt file is about 11 GB, and trying to load it for pretraining exhausts all the RAM on the system.
Is it possible to load the data in batches and then perform training?
I am a bit new to the Hugging Face ecosystem, so I would appreciate any pointers if you have a clue about this.
I am using Google Colab for the purpose.
Please share code snippets, If possible.
Cheers !
```python
# construct dataset
from transformers import LineByLineTextDataset

file_path = "/content/drive/MyDrive/full_text_data.txt"
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=file_path,
    block_size=32)
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
try to load the data with a big text file
### Expected behavior
The data should be read in the batches to combat OOM. | 09-26-2022 11:19:57 | 09-26-2022 11:19:57 | try using iterable dataset and decrease the batch_size<|||||>@Kunlun-Zhu
I am a bit new to HF environment, can you please provide an example code snippet to do so ?
Thanks !!<|||||>> @Kunlun-Zhu
>
> I am a bit new to HF environment, can you please provide an example code snippet to do so ?
>
> Thanks !!
https://pytorch.org/docs/stable/data.html
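For instance, a minimal sketch using 🤗 Datasets in streaming mode (the file path is the one from the snippet above, and `tokenizer` is assumed to be defined already):
```python
from datasets import load_dataset

# Stream the 11GB text file instead of loading it into RAM all at once.
dataset = load_dataset(
    "text",
    data_files={"train": "/content/drive/MyDrive/full_text_data.txt"},
    streaming=True,
)["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=32)

tokenized = dataset.map(tokenize, batched=True)
# `tokenized` can then be iterated over batch by batch during training.
```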
more examples could be found on stackoverflow and many blogs<|||||>Hi @mv96 👋
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,198 | closed | Add MarkupLM | # What does this PR do?
This PR implements the [MarkupLM](https://arxiv.org/abs/2110.08518) model by Microsoft Research.
MarkupLM is very similar to LayoutLM(v1), namely a Transformer encoder pre-trained on another domain than just text (HTML web pages). Similar to how LayoutLM adds additional embeddings for the layout information, MarkupLM adds additional embeddings for the XPATH information of nodes within an HTML string.
To do:
- [x] add soft dependency check for `MarkupLMFeatureExtractor`. cc @sgugger, will need some help here. New dummy objects probably need to be created for bs4 (Beautiful Soup 4). These currently make the entire CI fail :(
- [x] fix fast tokenizer | 09-26-2022 11:16:46 | 09-26-2022 11:16:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, so I added requires_backends("bs4"), however the dependency check is not working for me:
```
(env) niels@brutasse:~/python_projects/transformers$ python
Python 3.9.10 (main, Jan 25 2022, 09:49:28)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import bs4
>>> from transformers.utils import is_bs4_available
>>> is_bs4_available()
False
```
The import check is defined as follows:
```
_bs4_available = importlib.util.find_spec("bs4") is not None
try:
_bs4_version = importlib_metadata.version("bs4")
logger.debug(f"Successfully imported bs4 version {_bs4_version}")
except importlib_metadata.PackageNotFoundError:
_bs4_available = False
```<|||||>@sgugger I've addressed all your comments, feel free to approve :) |
transformers | 19,197 | closed | Converting TFBartForConditionalGeneration to BartForConditionalGeneration does not work | ### System Info
Transformers version: 4.22.1
Platform: SMP Debian 4.19.249-2 (2022-06-30) x86_64 GNU/Linux
Python version: 3.7.12
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BartConfig, BartForConditionalGeneration, TFBartForConditionalGeneration

config = BartConfig(vocab_size=32001,
                    pad_token_id=0,
                    bos_token_id=2,
                    eos_token_id=3,
                    is_encoder_decoder=True,
                    decoder_start_token_id=2,
                    forced_eos_token_id=3,
                    max_position_embeddings=1024,
                    d_model=768,
                    encoder_layers=6,
                    encoder_ffn_dim=768 * 4,
                    encoder_attention_heads=12,
                    decoder_layers=6,
                    decoder_ffn_dim=768 * 4,
                    decoder_attention_heads=12)

#### Init and Build TFBart
tf_model = TFBartForConditionalGeneration(config)
tf_model.build(input_shape=(None, None))  # Without this, tf weights are not initialized

#### Save Tensorflow Model
tf_model.save_pretrained('tf_model')
weights = [(tf_model.weights[idx].name, tf_model.weights[idx].numpy()) for idx in range(len(tf_model.weights))]

#### Load Tensorflow Model into PyTorch Model
pt_model = BartForConditionalGeneration.from_pretrained('tf_model', from_tf=True)
params = [(name, param.detach().numpy()) for name, param in pt_model.named_parameters()]
```
### Expected behavior
I build a TFBart, save its weights, and then load its weights into PTBart.
When I debug and inspect weights (tf) and params (pt), only the first layer's (shared embedding) weights are correctly loaded.
However, when I initialize a PT model and load its weights into the TF one, all of the weights are correctly loaded.
I thought the issue could be the input_shape used when building the TF model, so I tried various values such as 1024 and 1026, with no result.
I also want to be able to set these weights manually (almost one-by-one) from a numpy matrix. The use case is to train a BART model from scratch using custom tensorflow model and load its weights into huggingface's BART model.
So what is the method to set weights manually for both TF and PT models? | 09-26-2022 09:47:32 | 09-26-2022 09:47:32 | Hi @meliksahturker 👋
Loading TF weights into a PT model is an uncommon task, so our codebase may have issues. I'd like to request two things:
1. I've recently touched TFBart code -- can you confirm that the problem persists with the current `main` (`pip install git+https://github.com/huggingface/transformers.git`)?
2. If running from `main` doesn't solve it, can you share the code you're using to compare the weights?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,196 | closed | Benchmarking text generations | ### Feature request
Currently, we can benchmark Bert or Roberta with `PyTorchBenchmark`. I am just wondering, can we do the same thing for text-generation models, e.g., T5?
```
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
config = AutoConfig.from_pretrained('t5-base')
benchmark = PyTorchBenchmark(args, configs=[config], batch_sizes=[1,2,4, 8], sequence_lengths=[128,256, 512])
```
The above code throws the following exception:
```
ValueError Traceback (most recent call last)
<ipython-input-6-df0caab4d791> in <module>
----> 1 results = benchmark.run()
2 print(results)
/usr/local/lib/python3.7/dist-packages/transformers/benchmark/benchmark_utils.py in run(self)
708 if self.args.inference:
709 if self.args.memory:
--> 710 memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
711 inference_result_memory[model_name]["result"][batch_size][sequence_length] = memory
712 if self.args.speed:
ValueError: too many values to unpack (expected 2)
```
### Motivation
It would complete the benchmark API.
### Your contribution
I can submit an initial PR for benchmarking the text generation models. | 09-26-2022 08:46:59 | 09-26-2022 08:46:59 | Already discussed here: https://github.com/huggingface/transformers/issues/15512 |
transformers | 19,195 | closed | [wip: testing doc-builder] | Testing https://github.com/huggingface/doc-builder/pull/301
| 09-26-2022 08:11:30 | 09-26-2022 08:11:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19195). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,194 | closed | After using transformers.Trainer, batch data in my dataset become empty | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.7
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. First, I define my own type of dataset:
```
class Seq2SeqDataset(data.Dataset):
def __init__(self, data_dir):
self.datas = read_json(data_dir)
def __len__(self):
return len(self.datas)
def __getitem__(self, index):
return self.datas[index]
```
2. Then I load the training data:
`trainset = Seq2SeqDataset(args.train_data_dir)`
I make sure the dataset is loaded successfully by printing its length and fetching items from it; everything works fine.
3. Then I define my own collator, such as the following one:
```
class DataCollatorForSeq2Seq:
def __init__(self, tokenizer, padding: bool = True, max_length: int = 512):
self.tokenizer = tokenizer
#self.model = model
self.padding = padding
self.max_length = max_length
def __call__(self, batch):
features = self.collator_fn(batch)
return features
def collator_fn(self, batch):
print(batch)
results = map(preprocess, batch)
inputs, targets, _ = zip(*results)
input_tensor = self.tokenizer(inputs,
truncation=True,
padding=True,
max_length=self.max_length,
return_tensors="pt",
)
................
```
4. This collator is used in the Trainer:
```
collator = DataCollatorForSeq2Seq(T5_tokenizer, max_length=args.max_seq_len)
trainer = Trainer(
tokenizer=T5_tokenizer,
model=T5_model,
args=training_args,
data_collator=collator,
train_dataset=trainset,
eval_dataset=devset
)
```
5. The strange thing is that all the data in the batch becomes empty; each item of the original dataset is a dict.
6.the trainer argument is defined as follow
```
training_args = TrainingArguments(
num_train_epochs=args.epochs,
per_device_train_batch_size=args.batch_size,
per_device_eval_batch_size=args.batch_size,
logging_steps=args.logging_steps,
weight_decay=args.weight_decay,
evaluation_strategy=args.evaluation_strategy,
eval_steps=args.eval_steps,
load_best_model_at_end=True,
learning_rate=args.learning_rate,
warmup_steps=args.warmup_steps,
warmup_ratio=args.warmup_ratio,
output_dir=args.output_dir,
save_total_limit=args.save_total_limit,
lr_scheduler_type=args.lr_scheduler_type,
gradient_accumulation_steps=args.gradient_accumulation_steps,
dataloader_num_workers=args.dataloader_num_workers)
```
### Expected behavior
The batch data is supposed to be the batch from our dataset; however, it is all empty | 09-26-2022 08:09:15 | 09-26-2022 08:09:15 | It's very hard to help without knowing what your dataset is. The `Trainer` will drop columns in your dataset that are not accepted by your model, so it might be that. Use `remove_unused_columns=False` in your `TrainingArguments` and see if it changes anything.<|||||>> It's very hard to help without knowing what your dataset is. The `Trainer` will drop columns in your dataset that are not accepted by your model, so it might be that. Use `remove_unused_columns=False` in your `TrainingArguments` and see if it changes anything.
Thanks for the hint. I changed the dict into a tuple in the dataset and it works fine now. It's not ideal, though. I'm not sure whether the Trainer supports the dict format in a torch Dataset.<|||||>>
```
class Seq2SeqDataset(data.Dataset):
def __init__(self, data_dir):
self.datas = read_json(data_dir)
def __len__(self):
return len(self.datas)
def __getitem__(self, index):
return self.datas[index]['passage'], self.datas[index]['question'], self.datas[index]['answer'], self.datas[index]['answer_start']
```
That's how I modified my dataset now, which works.
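For completeness, a sketch of the alternative suggested above (keeping the dict-returning `__getitem__` and disabling the Trainer's column filtering; all other arguments stay as in the original script):
```python
from transformers import TrainingArguments

# The Trainer drops keys that don't match the model's forward() signature;
# remove_unused_columns=False turns this filtering off, so the custom
# collator receives the raw dicts from the dataset.
training_args = TrainingArguments(
    output_dir=args.output_dir,
    remove_unused_columns=False,
    # ... the remaining arguments as before ...
)
```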
transformers | 19,193 | closed | [FIX] Document Question Answer warning with Donut model | # What does this PR do?
Currently, if you try to use the Document Question Answering pipeline with a Donut model, a warning is thrown saying that the model is not supported. However, the model appears as supported in the documentation.
Reviewers:
Models:
- donut: @NielsRogge
Library:
- pipelines: @ankrgyl
| 09-26-2022 07:40:58 | 09-26-2022 07:40:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there @WaterKnight1998, we have an open PR (https://github.com/huggingface/transformers/pull/19027) that aims to solve this problem. I believe @NielsRogge and team prefer not to include `donut-swin` in the list of `AutoModelForDocumentQuestionAnswering`. @NielsRogge would you mind clarifying?<|||||>Closing this as it's fixed by #19027 |
transformers | 19,192 | closed | add doc for hyperparameter search | # What does this PR do?
Add documentation for hyperparameter search (HPO).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger
| 09-26-2022 01:30:51 | 09-26-2022 01:30:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@yao-matrix |
transformers | 19,191 | closed | Fix small use_cache typo in the docs | # What does this PR do?
Fixes #19079
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante | 09-25-2022 23:34:16 | 09-25-2022 23:34:16 | @gante from what I can tell, generation_flax_utils.py does not have a `use_cache` parameter.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,190 | closed | LayoutLMV3 Training with more than 512 tokens | Hi, it is mentioned in the research paper and in the model docs that the model is trained with a maximum length of 512 tokens. But since our documents contain 1124 tokens on average, I thought of increasing the maximum length to 1024. Because the limit is fixed at 512, I have used `stride` instead and implemented a data class.

So when I have a long text sentence of length 900, as in this example, I got:

But when the same encoding is sent to the Trainer,

I got an error like the one below:

From the error, I understand that my `input_ids` have the shape [2, 512], and once they go into the Trainer they become [1, 2, 512] after the batch dimension is added.
But the Trainer expects my `input_ids` to be [512] after squeezing, so that it can add a batch size of 1 to make [1, 512].
Now, how can I send data samples longer than 512 tokens, using stride in the processor? I am able to load the data, but the Trainer does not accept sizes of more than 512. Any suggestions here?
Thank you
| 09-25-2022 20:00:43 | 09-25-2022 20:00:43 | cc @NielsRogge <|||||>>
@purnasai-cyient
Hi,
I have same issue with you. Have you solved it?
> Hi, It is mentioned in the Research paper and in Model docs that the model is trained with `maximum_length = 512` tokens. But when we have large tokens with an average of 1124 tokens, I thought to increase `maximum_length = 1024`. But since it is limited, I have used `Stride` and implemented a dataclass.
>
> 
>
> So when I have a long text sentence with 900 in length in this example. I got 
>
> But when the same sent to trainer 
>
> I got error like below: 
>
> From the Error, I understood that My `input_ids` are in the shape of [2,512] and once after going into the trainer, they become [1,2,512] after batch size is added. But the trainer is expecting My `input_ids` to be in [512] after squeezed, so the trainer adds batch size 1 to make [1,512].
>
> Now how to send my datasamples of length more than 512, with stride in processor, though i am able to load the data. But the trainer is not expecting sizes more than 512. Any suggestions here
>
> Thank you
<|||||>Hello @purnasai-cyient and everyone!
I had the same issue with LayoutLMv3, and because I think this problem is common in document information extraction tasks, I will describe how I dealt with it:
### 1. Training:
As you may know, first of all we have to change the processor configuration by using `stride`, `padding` and `offset_mapping`:
```
....
......
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
encoding = processor(images, words, boxes=boxes, word_labels=word_labels, truncation=True, stride =128,
padding="max_length", max_length=512, return_overflowing_tokens=True, return_offsets_mapping=True)
offset_mapping = encoding.pop('offset_mapping')
overflow_to_sample_mapping = encoding.pop('overflow_to_sample_mapping')
```
I'm not completely sure why we should set `return_offsets_mapping` to True when we have to pop it right after encoding, but I think it's necessary. To clarify, if you followed [NielsRogge](https://github.com/NielsRogge)'s notebook ([you can find the notebook here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb)), we have to change the `prepare_examples` method like this:
```
def prepare_examples(examples):
    images = [Image.open(path).convert("RGB") for path in examples['image_path']]
    words = examples[text_column_name]
    boxes = examples[boxes_column_name]
    word_labels = examples[label_column_name]
    encoding = processor(images, words, boxes=boxes, word_labels=word_labels, truncation=True, stride=128,
                         padding="max_length", max_length=512, return_overflowing_tokens=True,
                         return_offsets_mapping=True)
    offset_mapping = encoding.pop('offset_mapping')
    overflow_to_sample_mapping = encoding.pop('overflow_to_sample_mapping')
    return encoding
```
Next, you have to follow the remaining steps normally, without any changes, and train the model! (The dataset mapping step is sketched just below.)
Note: it's completely normal if the number of rows in your dataset after `map` doesn't match the number of your documents. This is because, if a document has more than 512 tokens, the following tokens are now stored in additional rows of data (from token 512 to 1024, then from 1025 to 1536, and so on).
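For completeness, a rough sketch of the mapping step that consumes `prepare_examples` (column names, shapes, and the `features` spec follow the referenced notebook and are assumptions here):
```python
from datasets import Array2D, Array3D, Features, Sequence, Value

features = Features({
    "pixel_values": Array3D(dtype="float32", shape=(3, 224, 224)),
    "input_ids": Sequence(feature=Value(dtype="int64")),
    "attention_mask": Sequence(Value(dtype="int64")),
    "bbox": Array2D(dtype="int64", shape=(512, 4)),
    "labels": Sequence(feature=Value(dtype="int64")),
})

# batched=True lets map() return more rows than it receives, which is what
# happens when long documents overflow into several 512-token windows.
train_dataset = dataset["train"].map(
    prepare_examples,
    batched=True,
    remove_columns=dataset["train"].column_names,
    features=features,
)
```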
### 2. Inference:
The inference part is a little bit tricky. Again, I will describe the setup based on the [mentioned notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb). Obviously, we have to configure our processor with `stride` and `padding` as in the training phase.
```
encoding = processor(images, words, boxes=boxes, word_labels=word_labels, truncation=True, stride =128,
padding="max_length", max_length=512, return_overflowing_tokens=True, return_offsets_mapping=True)
offset_mapping = encoding.pop('offset_mapping')
overflow_to_sample_mapping = encoding.pop('overflow_to_sample_mapping')
```
Next, we have to change the shape of the encoding to handle multiple chunks (as I said in part 1, we divided long token sequences into separate rows, so now the results are 2D). I have reshaped them in this way:
```
# change the shape of pixel values
x = []
for i in range(0, len(encoding['pixel_values'])):
x.append(encoding['pixel_values'][i])
x = torch.stack(x)
encoding['pixel_values'] = x
```
so if we print encoding items, we will have something like this:
```
for k,v in encoding.items():
print(k,v.shape)
```
results:
```
input_ids torch.Size([3, 512])
attention_mask torch.Size([3, 512])
bbox torch.Size([3, 512, 4])
pixel_values torch.Size([3, 3, 224, 224])
```
As we can see, in my case the document is divided into 3 parts (for example, the size of `input_ids` is [3, 512], i.e. 3x512, whereas with normal processing we would get just a single [1, 512] array in every case). So we're doing fine so far. We now have to pass the encoding to the model to get the predictions:
```
with torch.no_grad():
outputs = model(**encoding)
# The model outputs logits of shape (batch_size, seq_len, num_labels).
logits = outputs.logits
print(logits.shape)
# We take the highest score for each token, using argmax. This serves as the predicted label for each token.
predictions = logits.argmax(-1).squeeze().tolist()
token_boxes = encoding.bbox.squeeze().tolist()
if (len(token_boxes) == 512):
predictions = [predictions]
token_boxes = [token_boxes]
```
The last lines (the if clause) in the code above are there because, when the number of tokens is less than 512, we get a 1D array, so we have to put it in a list to prevent errors in the next step.
Finally, we now have `predictions` and `token_boxes` from the model. You can also get the text of each bbox by using
`processor.tokenizer.decode(encoding["input_ids"][i][j])`, where `i` and `j` correspond to the entity whose text you want to extract. Just as an example, we could collect the predictions by traversing `token_boxes` with a for loop (you could do whatever you want, because we needed predictions and bboxes and we have them now! processing them is up to you ;) )
```
# this is just an example, change this code for your project!
for i in range(0, len(token_boxes)):
    for j in range(0, len(token_boxes[i])):
        print("label is: {}, bbox is: {} and the text is: {}".format(
            predictions[i][j], token_boxes[i][j],
            processor.tokenizer.decode(encoding["input_ids"][i][j])))
```
<|||||>Hi @alitavanaali,
Thank you for sharing your solution - I will try it out soon.
One question, if you don't mind sharing your thoughts: if you break a document into 3 chunks, wouldn't the prediction of the last text (in chunk 3) in the sequence miss the context from the very first text (in chunk 1), thus, degrading the performance of the model for later chunks? In other words, since the self-attention is still of shape 512 x 512 and information is not shared across chunks, is chunking a document a good idea?<|||||>Hello,
Thank you for the solution, @alitavanaali. I have several questions:
- The inference solution only works for one image, right? Since after the encoding, `x` is stacked together, we lose the index of every image. My question is how to make it work for batch inference while still getting the predicted labels per image as output.
- In theory, LayoutLM is trained on the PubLayNet dataset, which has a lot of documents with a lot of text (most of them with more than 512 tokens). How come LayoutLMv3 is limited to 512 tokens, and thus to so little text per image?
```
true_predictions =[]
true_boxes = []
STRIDE_COUNT = 128
for i, (pred, box, mapped) in enumerate(zip(predictions, token_boxes, offset_mapping)):
is_subword = np.array(mapped.squeeze().tolist())[:,0] != 0
if i == 0:
true_predictions += [id2label[pred_] for idx, pred_ in enumerate(pred) if (not is_subword[idx])]
true_boxes += [unnormalize_box(box_, width, height) for idx, box_ in enumerate(box) if not is_subword[idx]]
else:
true_predictions += [id2label[pred_] for idx, pred_ in enumerate(pred) if (not is_subword[idx])][STRIDE_COUNT - sum(is_subword[:STRIDE_COUNT]):]
true_boxes += [unnormalize_box(box_, width, height) for idx, box_ in enumerate(box) if not is_subword[idx]][STRIDE_COUNT - sum(is_subword[:STRIDE_COUNT]):]
``` |
transformers | 19,189 | closed | Add decisiontransformer to onnx config | # What does this PR do?
Adds an onnx configuration for decision transformer
Fixes # (issue)
https://github.com/huggingface/transformers/issues/18191
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
In progress while PR is being reviewed
## Who can review?
@ChainYo @regisss Here's a first cut. I will be reading the documentation on testing this while the PR is being reviewed. Again, I appreciate any help you can provide, as this is my first time through this process.
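For reviewers, this is the rough shape I have in mind: a sketch only, with the input names and dynamic axes taken as assumptions from `DecisionTransformerModel.forward` rather than a final implementation (dummy-input generation and validation still need to be added):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DecisionTransformerOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes: batch and sequence length. Names mirror (a subset of)
        # the model's forward signature and are assumptions at this stage.
        return OrderedDict(
            [
                ("states", {0: "batch", 1: "sequence"}),
                ("actions", {0: "batch", 1: "sequence"}),
                ("returns_to_go", {0: "batch", 1: "sequence"}),
                ("timesteps", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```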
| 09-25-2022 17:24:24 | 09-25-2022 17:24:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19189). All of your documentation changes will be reflected on that endpoint.<|||||>@ChainYo @regisss bumping this PR<|||||>> @ChainYo @regisss bumping this PR
It looks good to me!
Thanks for iterating on this, @skanjila.
**EDIT**: Did you do ci tests before pushing? (linting and stuff?)<|||||>@ChainYo @regisss I reran make style and everything passed locally and also added an entry to DecisionTransformer in the test_onnx_v2.py file, please have a look now and lmk what else is missing<|||||>@skanjila Hmm too many files got reformatted, I think your version of Black is different from what we use. Could you run `pip install transformers["quality"]` and then `make style`?<|||||>@regisss done, and make style worked, please recheck<|||||>@skanjila You also need to run `make fix-copies`, I forgot this one my bad
And there are still many reformatted files, did you remove the files from the previous commit?<|||||>@regisss here is what I did
1) I reset git to an earlier commit before all my changes
2) I added back the entry for the decision_tranformer in test_onnx_v2.py
3) I reran make style after installing transformers["quality"]
4) I ran make fix-copies
5) steps 3/4 were successful
Should be good to go, please have a look , I look forward to finishing this work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,188 | closed | Updated hf_argparser.py | # What does this PR do?
Works on #19116
## Summary
- [x] adding as parse_yaml_file method to HfArgumentParser with the code above
- [x] refactor the dupe code between parse_json_file and parse_dict similar to the code above
- [ ] add a small test of parse_yaml_file
- [ ] add a small test of parse_json_file
| 09-25-2022 16:20:54 | 09-25-2022 16:20:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR! Can you just run `make style` on your branch to fix the code quality issue?<|||||>Yeah Sure!
<|||||>Done @sgugger!!<|||||>Thanks again! |
transformers | 19,187 | closed | `$from transformers import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP` | `$from transformers import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP`
this solved my issue. Thank you everyone for support. you are great.
https://hjlabs.in
_Originally posted by @hemangjoshi37a in https://github.com/huggingface/transformers/issues/5848#issuecomment-1242951220_
| 09-25-2022 13:28:02 | 09-25-2022 13:28:02 | Can you please mention the directory and the file name where you are changing this MAP to LIST.
Thank You.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,186 | closed | Unable to instantiate tokenizer with `TRANSFORMERS_OFFLINE=1` | Just some context, we use `TRANSFORMERS_OFFLINE=1` in the NeMo CI to ensure we load from the local cache. With the latest transformers version we noticed this bug in our CI!
### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Create this script `reprod.py`:
```python
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained(pretrained_model_name_or_path='gpt2')
```
run:
```
python reprod.py
TRANSFORMERS_OFFLINE=1 python reprod.py
```
First one runs successfully, second one fails:
```
Traceback (most recent call last):
File "/home/snarenthiran/NeMo/reprod.py", line 3, in <module>
AutoTokenizer.from_pretrained(pretrained_model_name_or_path='gpt2')
File "/home/snarenthiran/anaconda3/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 549, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/snarenthiran/anaconda3/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 418, in get_tokenizer_config
commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
File "/home/snarenthiran/anaconda3/lib/python3.9/site-packages/transformers/utils/hub.py", line 225, in extract_commit_hash
search = re.search(r"snapshots/([^/]+)/", resolved_file)
File "/home/snarenthiran/anaconda3/lib/python3.9/re.py", line 201, in search
return _compile(pattern, flags).search(string)
TypeError: expected string or bytes-like object
```
### Expected behavior
To create the tokenizer from the local files. | 09-25-2022 12:50:53 | 09-25-2022 12:50:53 | |
transformers | 19,185 | closed | Update hf_argparser.py | Added parse_yaml_file function as mentioned in issue #19116 | 09-25-2022 12:29:34 | 09-25-2022 12:29:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,184 | closed | Trainer().train() just idles, neccesitates kernel restart | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger because I think you're the original author of the example
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb, Section *Fine-tuning a language model*. (change `push_to_hub=False).
2. trainer.train() prints the epoch and step counts, then doesn't seem to do anything useful (the Python process's CPU usage stays near 0, as does GPU utilization), and interrupting the process doesn't work either
### Expected behavior
Either training starts or some kind of error is shown | 09-25-2022 07:38:40 | 09-25-2022 07:38:40 | How are you running the notebook? It works fine on my side.<|||||>I'm unable to upload it here, but it's literally the original example, sans the `notebook_login` and `push_to_hub=False`.
And it *works* on my local machine (installed the same libraries as listed above, except my kernel is `5.19.8-200.fc36.x86_64`)
As reported, it fails when I'm running on a Vertex AI instance. That's what the instance says about itself:
```
Environment
PyTorch 1.11 (with Intel® MKL-DNN/MKL)
Environment version
M94
Machine type
n1-standard-8 (8 vCPUs, 30 GB RAM)
GPU
NVIDIA Tesla V100 x 1
```
I also tried running it (on the instance) with `os.environ['CUDA_VISIBLE_DEVICES'] = '-1'`, but that didn't help.
<|||||>This Vertex AI instance type *works*:
```
Environment
Python 3 (with Intel® MKL and CUDA 11.0)
Environment version
M97
Machine type
n1-standard-4 (4 vCPUs, 15 GB RAM)
GPU
NVIDIA Tesla T4 x 1
```
So I can now get my work done, but I think the reported issue is still valid<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,183 | closed | chore: add expected output to the sample code. | Fixes the issue as described here: https://github.com/huggingface/transformers/pull/18815#issuecomment-1256338065
@ydshieh FYI. | 09-25-2022 06:28:25 | 09-25-2022 06:28:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh here's what I have done so far:
* Update the weight conversion script, so we have the ImageNet-1k label mapping in the model config.
* Update the `modeling_vit_msn.py` script with the expected output (which is basically an ImageNet-1k class).
I also pushed the model classes with the updated configs. I realized if I only updated the `config.json` in the respective Hub repositories, the member config associated with the model classes may not get appropriately updated. So, I pushed everything updated with the new config to my HF account.
So, if you go to https://huggingface.co/models?filter=vit_msn now, you'd notice two versions of the same model checkpoint: one is hosted from `facebook` and another one is hosted from `sayakpaul`. I need someone from the HF team to transfer the updated model repositories to `facebook`.
Let me know if anything here is unclear. <|||||>> I realized if I only updated the config.json in the respective Hub repositories, the member config associated with the model classes may not get appropriately updated.
Could you explain a bit more about `the member config associated with the model classes may not get appropriately updated`?<|||||>The current model classes living under the Facebook org don't have the updated configs that have the ImageNet-1k labels.
The model classes residing under `sayakpaul` do. They need to be transferred to the facebook org.
If you search with https://huggingface.co/models?filter=vit_msn then you will get the complete list.
Let me know if anything is unclear now. <|||||>My question is about why uploading the new config.json is not enough - this is what I understand from your comments.
I think simply updating the config.json on the facebook model Hub repo. would be enough.<|||||>Okay, let me verify a few things and get back to you. <|||||>@ydshieh @sgugger the PRs to the Hub repos are up:
* https://huggingface.co/facebook/vit-msn-small/discussions/4
* https://huggingface.co/facebook/vit-msn-base/discussions/2
* https://huggingface.co/facebook/vit-msn-large/discussions/2
* https://huggingface.co/facebook/vit-msn-base-4/discussions/2
* https://huggingface.co/facebook/vit-msn-large-7/discussions/2
Indeed, just updating the config.json made it work. I have verified it locally. <|||||>Hub PRs all merged, thank you @sayakpaul (also for checking) |
transformers | 19,182 | closed | Adding State-of-the-art Contrastive Search to the Codebase of model.generate() | ### Feature request
****
<span id='all_catelogue'/>
## Catalogue:
* <a href='#abstract'>1. Abstract</a>
* <a href='#introduction'>2. Introduction</a>
* <a href='#demonstration'>3. Demonstration of the Awesome Results from Contrastive Search</a>
* <a href='#opt_demonstration'>3.1. Demonstration with OPT</a>
* <a href='#gpt_demonstration'>3.2. Demonstration with GPT</a>
* <a href='#example_usage'>4. Example Usage</a>
* <a href='#installation'>4.1. Environment Setup</a>
* <a href='#reproduce_opt'>4.2. Reproduce Results of OPT</a>
* <a href='#reproduce_gpt'>4.3. Reproduce Results of GPT</a>
* <a href='#code_snippet'>5. Code Snippet</a>
* <a href='#inference_latency'>6. Inference Latency</a>
* <a href='#reference'>References</a>
****
<span id='abstract'/>
### 1. Abstract: <a href='#all_catelogue'>[Back to Top]</a>
In this issue, we try to integrate contrastive search into the codebase of `model.generate()` as an additional option for text generation. We believe it would greatly benefit the research community.
All related resources of our work have been open-sourced; please find them below.
* **(1) Paper:** ["A Contrastive Framework for Neural Text Generation"](https://arxiv.org/abs/2202.06417)
* **(2) Code:** [https://github.com/yxuansu/SimCTG](https://github.com/yxuansu/SimCTG)
* **(3) An Easy-to-use Pypi Package (SimCTG):** [https://github.com/yxuansu/SimCTG/tree/main/simctg](https://github.com/yxuansu/SimCTG/tree/main/simctg)
****
<span id='introduction'/>
### 2. Introduction: <a href='#all_catelogue'>[Back to Top]</a>
Open-ended text generation is one core task in NLP. However, the maximization-based decoding methods (e.g., greedy search and beam search) of neural language models often lead to degenerate problems, i.e., the generated text is unnatural and contains undesirable repetitions. Existing approaches address the text degeneration problem by introducing stochasticity via sampling (e.g. top-k sampling <a href='#reference'>[1]</a> and nucleus sampling <a href='#reference'>[2]</a>), but they often lead to solutions that lack coherence.
In our recent **NeurIPS 2022** paper <a href='#reference'>[3]</a>, "A Contrastive Framework for Neural Text Generation", we propose a new decoding method, i.e. `contrastive search`, which can be directly applied to **all** families of **off-the-shelf** language models (e.g. GPT and OPT). Specifically, during the decoding process, contrastive search selects from the most probable candidates predicted by the model while taking into account the degeneration penalty computed from the previous context. Formally, at each decoding step, given the context $\boldsymbol{x}_{< t}$, the selection of the output token $\boldsymbol{x}_t$ follows:

where $V^{(k)}$ is the set of top-k predictions from the model's probability distribution. The second term, the degeneration penalty, measures how discriminative the candidate $v$ is with respect to the tokens in the previous context, and $s(h_v,h_{x_j})$ is the cosine similarity between the representations of the candidate $v$ and the previous token $x_j$. (The core implementation of contrastive search can be found in <a href='#code_snippet'>Section 5</a>.)
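For readers who cannot render the image above, the selection rule can be restated (with the notation defined in the surrounding text) as

$$
\boldsymbol{x}_t = \underset{v \in V^{(k)}}{\arg\max} \Big\{ (1-\alpha) \cdot p_\theta\big(v \mid \boldsymbol{x}_{<t}\big) \;-\; \alpha \cdot \max_{1 \le j \le t-1} s\big(h_v, h_{x_j}\big) \Big\},
$$

where the first term is the model confidence, the second term is the degeneration penalty, and $\alpha \in [0, 1]$ balances the two (setting $\alpha = 0$ recovers greedy search).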
Our proposed contrastive search generates text that is coherent with the prefix while maintaining the diversity of the generated result. Through extensive experiments, we demonstrate the clear superiority of contrastive search over all existing methods on both automatic (e.g. MAUVE <a href='#reference'>[4]</a>) and human evaluations.
****
<span id='demonstration'/>
### 3. Demonstration of the Awesome Results from Contrastive Search: <a href='#all_catelogue'>[Back to Top]</a>
In this section, we provide direct demonstrations to show how awesome contrastive search is. To this end, we select two representative models to demonstrate the results, including: (i) OPT <a href='#reference'>[5]</a>, the recently released model by Meta which successfully replicates the results of GPT-3 <a href='#reference'>[6]</a>; (ii) GPT-2 <a href='#reference'>[7]</a>, (arguably) the most widely used model for text generation.
<span id='opt_demonstration'/>
#### 3.1. Demonstration with OPT: <a href='#all_catelogue'>[Back to Top]</a>
First, let's see the results of OPT models.
Take the **conversation generation** task from the original paper <a href='#reference'>[5]</a> (Figure 9 at the Appendix E) as an example. The prefix text is shown as below.
```
A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?
```
**(i)** The generated text with **OPT-175B** + nucleus sampling, as shown in Figure 9 at the Appendix E from the original paper, is as following:
```
A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?
Statue: I have been there since 1886.
Human: What is your job?
Statue: I was built to welcome immigrants to the United States.
Human: What is your favorite food?
Statue: I like ice cream and apples.
Human: What is your favorite color?
Statue: I like red, white, and blue.
Human: What is your favorite animal?
Statue: I like dogs.
Human: What is your favorite song?
```
Obviously, even with 175B parameters, the model gets stuck in some _simple_ and _linguistically repetitive_ generations with nucleus sampling.
**(ii)** Now, let's see what result we can get using contrastive search with OPT-6.7B [[model card]](https://huggingface.co/facebook/opt-6.7b) (a model more than 20 times **smaller** than OPT-175B).
**[Reproduce Results]** In <a href='#reproduce_opt'>Section 4.2</a>, we show how to reproduce our results with a few lines of code.
```
A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?
Statue: Since 1884.
Human: Why did you come to America?
Statue: I was given to the United States by France as a gift for helping the French during the Franco-Prussian War.
Human: What do you think of America?
Statue: I love it. It is the greatest country in the world.
Human: What’s the weather like in New York?
Statue: It is cold.
Human: Is it safe to walk around at night?
Statue: Yes. There are policemen everywhere.
Human: Do you have any children?
Statue: Not yet. My pedestal is empty.
Human: What would you like to say to people who want to immigrate to America?
Statue: Come on over. You will be happy here. We have everything you need.
```
We see that the generated text with contrastive search is **much more** _diverse_ and _interesting_.
**[Comparison]** For a more direct comparison, the generated results with the **same** OPT-6.7B using greedy search and nucleus sampling are:
<details>
<summary><b>(1) Text generated by greedy search: [click to expand]</b></summary>
```
----------------------------------------------------------------------------------------------------
A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?
Statue: I have lived here for over 100 years.
Human: What do you do?
Statue: I welcome people from all over the world to come to America.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human: What do you think of America?
Statue: I love America.
Human: What do you think of immigrants?
Statue: I love immigrants.
Human...
----------------------------------------------------------------------------------------------------
```
</details>
<details>
<summary><b>(2) Text generated by nucleus sampling: [click to expand]</b></summary>
```
----------------------------------------------------------------------------------------------------
A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?
Statue: Since 1876.
Human: Why is the Statue of Liberty guarded?
Statue: Because there are many people trying to steal her.
a comparison about an unexpressed thought
I would also share the story of “A Humble Fear.” At a conference in New York the Dalai Lama gave a
speech to the International Thinkers Congress in New York. The whole thing was recorded, and the
video is quite interesting. (on a side note, I love the fact that there were some people who laughed
when he described himself as a humble being… I think the video is hilarious, there is a reason why
I put up the video. Because if you cannot find the humor in this you’re sadly lacking…)
In the speech, the Dalai Lama compares the search for truth to searching for treasure. He says:
“However there is a huge difference between being a thief and a collector. A thief simply takes things,
whereas a collector looks for the beauty, even if it is just a single object.”
The above quote is perhaps the most cliched Buddhist philosophy of our times. However the comparison
between a collector and a thief is quite interesting. I like to think that the Buddha...
----------------------------------------------------------------------------------------------------
```
</details>
We see that (i) greedy search generates repetitive text; and (ii) nucleus sampling produces text that is incoherent.
<span id='gpt_demonstration'/>
#### 3.2. Demonstration with GPT: <a href='#all_catelogue'>[Back to Top]</a>
Next, let's see the results of GPT models.
We provide a simple prefix text (`DeepMind Company is`) with only three words and asks the model to generate a long text with **512** tokens. In this example, we use GPT-2-large [[model card]](https://huggingface.co/gpt2-large) for text generation.
**[Reproduce Results]** In <a href='#reproduce_gpt'>Section 4.3</a>, we show how to reproduce our results with a few lines of code.
(1) Generated result with contrastive search:
```
----------------------------------------------------------------------------------------------------
DeepMind Company is a leader in artificial intelligence (AI). We have a long history of working with
companies such as Google, Facebook, Amazon, and Microsoft to build products that improve people's lives,
and today we are excited to announce that DeepMind's AlphaGo program has won the game of Go, becoming
the first program to defeat a professional Go player.
The victory is a testament to the power of deep learning, and to the incredible work of our research team,
which has been at the forefront of AI research for the past five years. AlphaGo is one of the most advanced
Go programs ever created, and its performance is an important step towards the goal of human-level AI.
"This is the culmination of a decade of hard work," said Andy Ng, co-founder and CTO of DeepMind. "We are
thrilled to have achieved this milestone and look forward to continuing to develop AI that can be used in
a wide range of applications and to help people live better lives."
DeepMind's work on Go began in 2010, when it began to train a neural network to play Go using millions of
games played by top Go players around the world. Since then, the team has refined the algorithm, adding
more and more layers of reinforcement learning to make it better at recognizing patterns and making decisions
based on those patterns. In the past year and a half, the team has made significant progress in the game,
winning a record-tying 13 games in a row to move into the top four of the world rankings.
"The game of Go is a complex game in which players have to be very careful not to overextend their territory,
and this is something that we have been able to improve over and over again," said Dr. Demis Hassabis, co-founder
and Chief Scientific Officer of DeepMind. "We are very proud of our team's work, and we hope that it will inspire
others to take the next step in their research and apply the same techniques to other problems."
In addition to the win in Go, DeepMind has also developed an AI system that can learn to play a number of different
games, including poker, Go, and chess. This AI system, called Tarsier, was developed in partnership with Carnegie
Mellon University and the University of California, Berkeley, and is being used to teach computer vision and machine
learning to identify objects in images and recognize speech in natural language. Tarsier has been trained to play
the game of Go and other games on a number of different platforms...
----------------------------------------------------------------------------------------------------
```
From the results, we can see that the entire generated document is very **high-quality** and **human-like**.
**[Comparison]** For a more direct comparison, the generated results with the **same** model using greedy search and nucleus sampling are:
<details>
<summary><b>(2) Text generated by greedy search: [click to expand]</b></summary>
```
----------------------------------------------------------------------------------------------------
DeepMind Company is a leading AI research company, with a focus on deep learning and deep learning-based systems.
The company's research is focused on the development of deep learning-based systems that can learn from large
amounts of data, and that can be used to solve real-world problems.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's research is also used by the UK government to develop new technologies for the UK's National Health Service.
DeepMind's
----------------------------------------------------------------------------------------------------
```
</details>
<details>
<summary><b>(3) Text generated by nucleus sampling: [click to expand]</b></summary>
```
----------------------------------------------------------------------------------------------------
DeepMind Company is a Cardiff-based start-up with an exclusive mission to build the world's largest
ever deep-learning system to analyse the world's digital content and in particular, super-sized image
content.
The system, the largest in the world with no previous expertise in image or digital content detection,
will have previously relied on a mixture of machine learning, artificial neural networks, and storage,
processing and retrieval techniques.
The AI system, called ImageNet, will take new approach to our challenge of data science and machine
learning, significantly improving efficiency, natural language processing and full understanding of
complex, high-dimensional images, with an Eye of the Tiger framework for extracting techniques to
ensure correct detection of particular images in complex scenes.
Dr. Mark Ward, Dr. Alex Kudle, Dr. Ralph Pinchbeck and CTO, DeepMind Dr. Alex Kudle
Case Study: Derpy's Most Wanted: Fighting Cybersecurity, building a robot-aided smuggling network
InfoSec News, 06/07/2017
Dimitrios Papadimitriou (left) and Chris Bardy (right) at G+ XE, July 2017
How to model an industrial malware botnet
In this case study, we show how to build a deep-learning environment to model a new, massive ransomware
botnet. Our model computes the distribution of user credentials stored on infected machines and produces
a toolkit for open-source "modeling-as-code" (MATC) simulation. We elaborate on the resource management
aspect of the toolkit, and how it can be adapted to working offline on embedded or cloud-based networks.
Hacking Networked: The industrial botnets of the future
InfoSec News, 04/11/2017
Intensive analysis of state sponsored malicious cyber activity, published by KBB Strategic
The major single source of IoT malware networks in 2017
The global commercial botnet equivalent count grew to 31.5% in 2017, up from 21.1% the year before,
according to a comprehensive report from the Government Accountability Office (GAO). According to the
report, various malware operators continued to convert massive amounts of wasted data into profits as
well as enable sophisticated cyber operations targeting critical infrastructure.
Industrial malware blasts up to 31\% of malware within the IP space over 2017...
----------------------------------------------------------------------------------------------------
```
</details>
Obviously, greedy search generates repetitive text while nucleus sampling produces text that is incoherent and quickly goes off-topic.
****
<span id='example_usage'/>
### 4. Example Usage: <a href='#all_catelogue'>[Back to Top]</a>
In our [[main repo]](https://github.com/yxuansu/SimCTG), we have provided detailed huggingface-style tutorials ([[tutorial 1]](https://github.com/yxuansu/SimCTG#2-contrastive-search-with-gpt-2-and-opt-back-to-top), [[tutorial 2]](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top)) on how to apply contrastive search on different models across different languages.
In the following, we show how to easily reproduce our results in <a href='#demonstration'>Section 3</a> with a few lines of code.
<span id='installation'/>
#### 4.1. Environment Setup:
For easy usage, we have provided a PyPI package that can be installed as below. More details of our package can be found [[here]](https://github.com/yxuansu/SimCTG/tree/main/simctg).
```yaml
pip install simctg --upgrade
```
<span id='reproduce_opt'/>
#### 4.2. Reproduce Results of OPT:
To reproduce our results in <a href='#opt_demonstration'>Section 3.1</a> using OPT,
(i) We first load the OPT model as
```python
import torch
from simctg.simctgopt import SimCTGOPT
model_name = 'facebook/opt-6.7b'
model = SimCTGOPT(model_name)
tokenizer = model.tokenizer
model.eval()
bos_token_id = tokenizer.bos_token_id
eos_token_id = tokenizer.eos_token_id
```
(ii) Then, we provide the prefix text as
```python
prefix_text = r"""A chat between a curious human and the Statue of Liberty.
Human: What is your name?
Statue: I am the Statue of Liberty.
Human: Where do you live?
Statue: New York City.
Human: How long have you lived there?"""
```
(iii) Thirdly, we prepare the input ids as
**[Important Tip]** As the authors suggested in their [[tutorial]](https://huggingface.co/docs/transformers/model_doc/opt), OPT adds the EOS token </s> to the beginning of every prompt. So make sure the special token is added at the front of the prompt.
```python
tokens = tokenizer.tokenize(prefix_text)
input_ids = [bos_token_id] + tokenizer.convert_tokens_to_ids(tokens) # adds </s> to the beginning of every prompt
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
(iv) Lastly, we generate the text with contrastive search as
```python
beam_width, alpha, decoding_len = 5, 0.6, 256
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len,
end_of_sequence_token_id = eos_token_id, early_stop = True)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output[1:]))
print("" + 100 * '-')
```
<span id='reproduce_gpt'/>
#### 4.3. Reproduce Results of GPT:
To reproduce our results in <a href='#gpt_demonstration'>Section 3.2</a> using GPT,
(i) We first load the GPT-2 model as
```python
import torch
from simctg.simctggpt import SimCTGGPT
model_name = r'gpt2-large'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
eos_token_id = tokenizer.eos_token_id
```
(ii) Then, we prepare the prefix text as
```python
prefix_text = r"DeepMind Company is"
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
(iii) Last, we generate the text with contrastive search as
```python
beam_width, alpha, decoding_len = 4, 0.6, 512
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len,
end_of_sequence_token_id = eos_token_id, early_stop = True)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output))
print("" + 100 * '-')
```
****
<span id='code_snippet'/>
### 5. Code Snippet: <a href='#all_catelogue'>[Back to Top]</a>
The main implementation of contrastive search involves two parts: (i) candidate collection; and (ii) candidate re-ranking.
For more details, please find our open-sourced implementations for [[GPT-2 models]](https://github.com/yxuansu/SimCTG/blob/main/simctg/utlisgpt.py) and [[OPT models]](https://github.com/yxuansu/SimCTG/blob/main/simctg/utlisopt.py).
(i) The collection of candidates can be implemented as below:
```python
import random

import torch
import torch.nn.functional as F


def ContrastiveSearchOneStep(model, input_ids, beam_width, alpha):
'''
model: the generation model, e.g., gpt2
input_ids: 1 x seqlen
'''
prev_hidden_states, logits = model.compute_logits_and_hidden_states(input_ids)
_, seqlen, embed_dim = prev_hidden_states.size()
_, _, vocab_size = logits.size()
p = random.uniform(0, 1)
logit_for_next_step = logits[:,-1,:]
assert logit_for_next_step.size() == torch.Size([1, vocab_size])
next_probs = F.softmax(logit_for_next_step, dim = -1)
assert next_probs.size() == logit_for_next_step.size()
_, top_k_ids = torch.topk(logit_for_next_step, dim = -1, k = beam_width)
assert top_k_ids.size() == torch.Size([1, beam_width])
top_k_probs = torch.gather(next_probs, dim = 1, index=top_k_ids)
assert top_k_probs.size() == top_k_ids.size()
# compute new hidden
expanded_context = [input_ids for _ in range(beam_width)]
expanded_context = torch.cat(expanded_context, dim = 0)
assert expanded_context.size() == torch.Size([beam_width, seqlen])
top_k_ids = top_k_ids.view(beam_width, 1)
next_input_ids = torch.cat([expanded_context, top_k_ids], dim = -1)
assert next_input_ids.size() == torch.Size([beam_width, seqlen+1])
new_hidden_states, next_logits = model.compute_logits_and_hidden_states(next_input_ids)
assert new_hidden_states.size() == torch.Size([beam_width, seqlen+1, embed_dim])
context_hidden = new_hidden_states[:,:seqlen,:]
assert context_hidden.size() == torch.Size([beam_width, seqlen, embed_dim])
next_hidden = new_hidden_states[:,seqlen:,:]
assert next_hidden.size() == torch.Size([beam_width, 1, embed_dim])
next_id = ranking(context_hidden, next_hidden, top_k_ids, top_k_probs, alpha)
next_input_ids = torch.cat([input_ids, next_id], dim = -1)
assert next_input_ids.size() == torch.Size([1, seqlen+1])
return next_input_ids
```
(ii) The re-ranking of candidates can be implemented as below:
```python
def ranking(context_hidden, next_hidden, next_top_k_ids, next_top_k_probs, alpha):
'''
context_hidden: beam_width x context_len x embed_dim
next_hidden: beam_width x 1 x embed_dim
next_top_k_ids: beam_width x 1
'''
beam_width, context_len, embed_dim = context_hidden.size()
assert next_hidden.size() == torch.Size([beam_width, 1, embed_dim])
norm_context_hidden = context_hidden / context_hidden.norm(dim=2, keepdim=True)
norm_next_hidden = next_hidden / next_hidden.norm(dim=2, keepdim=True)
cosine_matrix = torch.matmul(norm_context_hidden, norm_next_hidden.transpose(1,2)).squeeze(-1)
assert cosine_matrix.size() == torch.Size([beam_width, context_len])
scores, _ = torch.max(cosine_matrix, dim = -1)
assert scores.size() == torch.Size([beam_width])
next_top_k_probs = next_top_k_probs.view(-1)
scores = (1.0 - alpha) * next_top_k_probs - alpha * scores
_, selected_idx = torch.topk(scores, k = 1)
assert selected_idx.size() == torch.Size([1])
selected_idx = selected_idx.unsqueeze(0)
assert selected_idx.size() == torch.Size([1,1])
next_id = torch.gather(next_top_k_ids, dim = 0, index=selected_idx)
assert next_id.size() == torch.Size([1,1])
return next_id
```
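For completeness, a minimal sketch of how these two helpers chain into a full decoding loop (it reuses the imports above and assumes the same SimCTG-style `model` exposing `compute_logits_and_hidden_states`, plus a GPT-2-style `tokenizer`):
```python
@torch.no_grad()
def contrastive_decode(model, tokenizer, prefix_text, beam_width=4, alpha=0.6, decoding_len=64):
    # Encode the prefix into a 1 x seqlen tensor of token ids.
    input_ids = torch.LongTensor(tokenizer.encode(prefix_text)).view(1, -1)
    for _ in range(decoding_len):
        # Each step scores beam_width candidates and appends the selected token.
        input_ids = ContrastiveSearchOneStep(model, input_ids, beam_width, alpha)
    return tokenizer.decode(input_ids.view(-1).tolist())
```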
****
<span id='inference_latency'/>
### 6. Inference Latency: <a href='#all_catelogue'>[Back to Top]</a>
Lastly, we compare the inference latency of contrastive search with other widely used decoding methods. The results are shown in the Figure below.

We see that the inference latency of contrastive search is comparable to that of other widely used methods, which further demonstrates that our proposed approach is practical to use.
****
<span id='reference'/>
### References:
> [1] Fan et al., 2018, ["Hierarchical Neural Story Generation"](https://arxiv.org/abs/1805.04833), ACL 2018
> [2] Holtzman et al., 2020, ["The Curious Case of Neural Text Degeneration"](https://arxiv.org/abs/1904.09751), ICLR 2020
> [3] Su et al., 2022, ["A Contrastive Framework for Neural Text Generation"](https://arxiv.org/abs/2202.06417), NeurIPS 2022
> [4] Pillutla et al., 2021, ["MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers"](https://arxiv.org/abs/2102.01454), NeurIPS 2021
> [5] Zhang et al., 2022, ["OPT: Open Pre-trained Transformer Language Models"](https://arxiv.org/abs/2205.01068), Arxiv 2022
> [6] Brown et al., 2020, ["Language Models are Few-Shot Learners"](https://arxiv.org/abs/2005.14165), NeurIPS 2020
> [7] Radford et al., 2018, ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
### Motivation
Given the exceptional performance of contrastive search, we certainly believe that it would greatly benefit a wide range of NLP researchers and practitioners in the text generation community.
### Your contribution
I can submit a PR for this request feature ASAP. | 09-25-2022 02:07:14 | 09-25-2022 02:07:14 | @patrickvonplaten @sgugger @stas00 Thank you very much for your contributions to the codebase of `generation_utils`. Could you please take a moment to review our request, which we believe could significantly facilitate the development of the text generation community?<|||||>Wow this is very cool! @gante do you have time to take a look here by any chance? Otherwise @ArthurZucker maybe? <|||||>Hi @patrickvonplaten, @gante, and @ArthurZucker,
We believe our contrastive search would greatly benefit the research community! We are really looking forward to seeing it added to the `transformers` library!
Please do let us know if you need any assistance from our end. Many thanks for your kind help!
Best,
Yixuan
<|||||>Hi @yxuansu -- this is really cool! The method is clear and makes total sense, and the results seem to back it up. Also, this is probably the clearest feature request I've seen here <3
I'd be happy to support you throughout the process, including adding it to the three frameworks (PT/TF/JAX), creating demos, and communicating it. You mentioned that you were willing to open a PR -- how may I be of help? 🤗
Looking at the resources that you shared, it seems like the workflow can be coded within a [`LogitsProcessor`](https://github.com/huggingface/transformers/blob/4a0b958d61f2c99a1cfb3b0d146596efafa9aa58/src/transformers/generation_logits_process.py#L51), which would automatically make your method compatible with other logit manipulation strategies (e.g. forbidding certain words) and with all generation strategies (sampling, beam search, ...). In essence, the processor would apply the top k filtering, compute the cosine similarities, compute the logits according to your method, and return them. The caveat is the need for the hidden states, to compute the coside similarities.
Model-specific details like forcing the EOS token in OPT are handled inside `generate()`, so no further changes should be needed. I'm curious to see the performance in other model types and types of text (like generating code)!<|||||>Hi @gante -- thank you so much for your reply! I wonder if could you advise us (me and @gmftbyGMFTBY ) on what should be our next step? It is our first time trying to commit to `huggingface` :-)
Many thanks!<|||||>This looks fantastic. I'm looking forward to having this new feature in `transformers`.
Also IMHO you actually would probably get an even more impressive improvement using BLOOM-176B which by default with greedy search suffers from getting stuck in repetition a lot.<|||||>Hi @stas00 -- Thank you very much for your interest! We will work with @gante and try to add this new feature to `transformers` ASAP!<|||||>Hi @gante,
For your convenience, you can find our key implementations of contrastive search for **GPT** models below:
> 1. [Candidate Ranking](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/utlisgpt.py#L14)
> 2. [One Step Decoding](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/utlisgpt.py#L31)
> 3. [Contrastive Search Interface](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/simctggpt.py#L81)
For **OPT** models, the resources are referred as below:
> 1. [Candidate Ranking](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/utlisopt.py#L15)
> 2. [One Step Decoding](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/utlisopt.py#L35)
> 3. [Contrastive Search Interface](https://github.com/yxuansu/SimCTG/blob/bad59066dc5874567c3dde77ad9aaafe21abd4a4/simctg/simctgopt.py#L81)
Hope these pointers are useful!
Best,
Yixuan
<|||||>@yxuansu @gmftbyGMFTBY fantastic! The first step is to discuss the design before jumping to the implementation itself. Since it will be your first commit, I'll be assuming that you are not very familiar with the code base, so I'll give extra pointers 🙌
I thought deeper about the design, and I realized that my suggestion above, to use a [`LogitsProcessor`](https://github.com/huggingface/transformers/blob/4a0b958d61f2c99a1cfb3b0d146596efafa9aa58/src/transformers/generation_logits_process.py#L51), would require needlessly complicated code. Obtaining $h_v$, according to your implementation, requires running an additional forward pass, and `LogitsProcessor` isn't the place to do it for a myriad of reasons.
The points above lead to the following proposal of implementation: a dedicated generation method, like `sample` or `beam_search`. It will be much easier to implement and test -- you can simply:
1. make a copy of [`greedy search`](https://github.com/huggingface/transformers/blob/9d732fd2dd99cd5c353a6e50c2fc5059d99e1172/src/transformers/generation_utils.py#L1602)
2. rewrite some of its parts so as to implement your new method
3. add a new argument to [`generate`](https://github.com/huggingface/transformers/blob/9d732fd2dd99cd5c353a6e50c2fc5059d99e1172/src/transformers/generation_utils.py#L893), `alpha` (I'm assuming we'll repurpose the existing `top_k` argument into your method)
4. add the needed piping in `generate` so as to call your method when `alpha` is set (follow the example [here](https://github.com/huggingface/transformers/blob/9d732fd2dd99cd5c353a6e50c2fc5059d99e1172/src/transformers/generation_utils.py#L1298), which triggers `greedy_search` when certain conditions are met)
5. Play around with it and confirm that it is working as expected. Then we can design some tests for the codebase.
The only drawback of this design is that we won't be able to mix and match your method with other generation methods that are not coded as a `LogitsProcessor`, like [`contrained_beam_search`](https://github.com/huggingface/transformers/blob/9d732fd2dd99cd5c353a6e50c2fc5059d99e1172/src/transformers/generation_utils.py#L3069). But that would only be icing on the cake, not the cake itself 🤗
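To make steps 3 and 4 a bit more concrete, the piping could be roughly shaped like the sketch below -- every name here (`alpha`, `contrastive_search`, the mode flag) is a placeholder for the discussion, not a final API:
```python
def _maybe_contrastive_search(self, input_ids, top_k, alpha, logits_processor, stopping_criteria, **model_kwargs):
    # Illustrative only: `generate()` would run a check along these lines and, when the
    # (proposed) `alpha` argument is set together with `top_k`, dispatch to the new method.
    is_contrastive_search_gen_mode = alpha is not None and alpha > 0 and top_k is not None and top_k > 1
    if is_contrastive_search_gen_mode:
        return self.contrastive_search(
            input_ids,
            top_k=top_k,
            alpha=alpha,
            logits_processor=logits_processor,
            stopping_criteria=stopping_criteria,
            **model_kwargs,
        )
    return None
```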
What do you think?<|||||>Hi @gante -- Thank you for your super cool advice! We will start right away on adapting the greedy search method and get back to you ASAP. Many thanks for your suggestion!
P.S. Would it be more convenient that we add you to our private repo in which we test our implementations? This way, we might be able to test the demos together.<|||||>@yxuansu Yeah, that's a good idea -- that way I'm also able to push ad hoc changes if needed 👍 After we're all happy with the state of the method, we can open a PR from the fork<|||||>@gante -- Cool! @gmftbyGMFTBY will send you an invitation after we create the repo, it would not take long :-)
Many thanks! <|||||>@gante Hi, thank you so much for your suggestions, we've almost prepared the PyTorch version codebase of `contrastive_search` in our fork. I have sent you an invitation to our repo.
All the changes are in `src/transformers/generation_utils.py`, so you can review them there.
Furthermore, we have also prepared a test script so you can try `contrastive_search` easily. To run this test script, please use the following commands:
```bash
cd tests/generation
CUDA_VISIBLE_DEVICES=0 python test_generation_contrastive_search.py
```
Looking forward to your valuable questions and suggestions.
Best,
TianLan<|||||>Hi @gmftbyGMFTBY 👋 Thank you for adding me to your fork!
I have looked at the code and at the results of the script you mentioned. It's great that you were able to massage `past_key_values` to fit your method 💪 From the test script we can see that we are getting the same output for GPT-2 as in your paper, which is a great starting point 🚀
From here, my recommendation would be to open a draft PR. There are a few points that I'd like to sort together with you before opening the review to others:
1. There is separate logic for decoder-only and encoder-decoder models -- it would be great if we could unify it, even if at expense of a few if/elses
2. We don't host test scripts, only unit tests, so `test_generation_contrastive_search.py` has to be removed. We could use a few of its examples for integration tests, though
3. Readability is very important to us 🤗 A random user reading the code should be able to understand the basics of what is going on (and why) without going to the paper. A few more docstrings, comments, and potentially more informative variable names would go a long way
(there are a few more nits, but I want to focus on the important parts first)
Let me know if you'd like a hand tackling any of the points above!<|||||>Okay, thank you so much for your suggestions!
I'd like to solve points [1] and [3] first. If there is any progress, I will continue to discuss it with you.
Best,
TianLan<|||||>Hi @gante -- Many thanks for your kind help!
We'd like to ask if you would like to join a slack channel with me and @gmftbyGMFTBY? In this way, we can more timely and easily discuss on our PR. If you'd like to do so, could share us with your slack account then we can add you to our private channel? Many thanks! <|||||>@yxuansu surely, you can add the email of my GH account ([email protected])<|||||>Hi @gante -- We have created a private channel and sent an invitation to you. Let's communicate in our channel!
Many thanks for your help!
Best,
Yixuan
<|||||>> This looks fantastic. I'm looking forward to having this new feature in `transformers`.
>
> Also IMHO you actually would probably get an even more impressive improvement using BLOOM-176B which by default with greedy search suffers from getting stuck in repetition a lot.
Hi @stas00 -- Many thanks for your interest in our work! Contrastive search now has been merged to `transformers`. [Here](https://github.com/yxuansu/Contrastive_Search_Is_What_You_Need#21-using-huggingface-transformers-back-to-top), we provide a short tutorial on how to apply contrastive search within `transformers`. Please feel free to try it out :-)<|||||>@yxuansu @stas00 actually it works for nearly all models, except for Bloom (which has a different shape for the past key values output) -- working on it :) |
transformers | 19,181 | closed | RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward | ### System Info
Hi,
I got the following error, `RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward`, when finetuning on STS-B starting from an already finetuned RoBERTa. I use [this code](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py) to finetune on the STS-B task.

Does anyone have an idea how to fix this bug? The labels for STS-B are floats, so the script uses a regression objective to train the model.
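My current guess is that the checkpoint keeps its 2-label classification head, so the model computes `CrossEntropyLoss` (which expects Long targets) instead of `MSELoss`. A sketch of the kind of fix I have in mind (untested, and `path_to_finetuned_roberta` is just a placeholder for my checkpoint directory):
```python
from transformers import AutoConfig, AutoModelForSequenceClassification

path_to_finetuned_roberta = "path/to/my-finetuned-roberta"  # placeholder

config = AutoConfig.from_pretrained(path_to_finetuned_roberta, num_labels=1, problem_type="regression")
model = AutoModelForSequenceClassification.from_pretrained(
    path_to_finetuned_roberta,
    config=config,
    ignore_mismatched_sizes=True,  # re-initialize the old 2-label classification head
)
```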
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download the finetuned roberta model from [here](https://drive.google.com/drive/folders/1S03HHFKDDQA9mZfS-jMxtsmaJy6-M3A9?usp=sharing). The finetuned roberta was finetuned on a sentence binary classification task. (The issue may caused by the first stage finetuning which I was using a binary classification task, but the downstream task stsb is a regression task.)
2. Load the finetuned roberta model and use [this code](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py) to finetune the stsb task and I got the above issue.
### Expected behavior
Please give me any advice to solve the issue. Many Thanks. | 09-24-2022 13:43:52 | 09-24-2022 13:43:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,180 | closed | Connection error even though files are already downloaded. | ### System Info
transformers version: 4.16.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run demo.py (from https://github.com/bes-dev/stable_diffusion.openvino) without having an internet connection, but the models are already downloaded.
### Expected behavior
When TRANSFORMERS_OFFLINE is not used, I get the following error:
```
Stacktrace:
File "stable_diffusion_engine.py", line 26, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(tokenizer)
File "transformers/tokenization_utils_base.py", line 1707, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "transformers/file_utils.py", line 1846, in cached_path
output_path = get_from_cache(
File "transformers/file_utils.py", line 2102, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
```
When TRANSFORMERS_OFFLINE is set to 'true', it works as expected.
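The programmatic equivalent also works for me, e.g. (sketch; the checkpoint name stands in for whatever `tokenizer` points to in demo.py):
```python
from transformers import CLIPTokenizer

# Force the already-downloaded cache to be used without hitting the network.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14", local_files_only=True)
```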
I expect it to work even if I don't use this environment variable.
Not sure if related, but I have also set the TRANSFORMERS_CACHE environment variable. | 09-24-2022 11:33:22 | 09-24-2022 11:33:22 | This should be fixed by https://github.com/huggingface/transformers/pull/19206, which we just released in a patch. Could you upgrade your transformers version and let me know if the issue is fixed? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,179 | closed | Small nit on log output | # What does this PR do?
Changes you -> your in a log output.
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 09-24-2022 03:08:56 | 09-24-2022 03:08:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19179). All of your documentation changes will be reflected on that endpoint.<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>done @LysandreJik , any easy way to retrigger CircleCI? The workflow page on CircleCI has the rerun button grayed out for me. <|||||>Is it possible to refresh your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,178 | closed | Fix cached lookup filepath on windows for hub | # What does this PR do?
Add small safety mechanism to hub.py for cached lookups of model files on Windows systems.
Windows systems resolve files as `C:\\Path\\To\\File.ext` under the hood in Python, which makes the `"snapshots/([^/]+)/"` regex fail to find the commit hash. So this simply adds a safety mechanism in `extract_commit_hash` to unify the resolved file paths (replacing `\\` with `/`, which works just fine in all Windows APIs).
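Concretely, the guard amounts to something like the sketch below (illustrative of the intent, not the literal diff):
```python
import re

def extract_commit_hash(resolved_file, commit_hash):
    # Nothing to do if the hash is already known or there is no resolved file.
    if resolved_file is None or commit_hash is not None:
        return commit_hash
    # Normalize Windows-style separators so the snapshots/<hash>/ pattern matches.
    resolved_file = str(resolved_file).replace("\\", "/")
    search = re.search(r"snapshots/([^/]+)/", resolved_file)
    return search.group(1) if search is not None else None
```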
## Code at issue
https://github.com/huggingface/transformers/blob/fa4eeb4fd342cdbad50d1eeacdd7d7d7bc23b080/src/transformers/utils/hub.py#L222-L226
## Found while trying
```python
CLIPTokenizer.from_pretrained(version)
```
## Before Change

### Error

## After Change

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (None pertinent found)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
## Tests Run
* tests/utils/*.py

| 09-24-2022 01:54:28 | 09-24-2022 01:54:28 | Re-ran black on hub.py to fix a spacing inconsistency 💀<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Mmm, some of the failure on the CI are due to a bug now fixed in the return of `cached_file`. Could you do a quick rebase on the main branch so I can see clearer in the failed tests?<|||||>Thanks for the rebase, we now have a clearer idea of the failing tests! This seems to break the functionality somehow (basically the commit hash is not found anymore). Maybe the regex need a small adaptation.
I'll dive into this tomorrow!<|||||>All green, thanks for iterating with us!<|||||>> All green, thanks for iterating with us!
No problem, thanks for all the suggestions :)
These cross platform fixes are always comically annoying, fix for Windows, break on *nix, of course! |
transformers | 19,177 | closed | ValueError: Task image-classification is not compatible with this dataset! Available tasks: [] | ### System Info
Dear all,
Thank you for your great work. I tried to run the ```image-classification``` example on my simple dataset and got the error below. The version of transformers I used is the newest one. Do you have any idea what happened?
Thanks very much!
```
WARNING:datasets.builder:Using custom data configuration default-9107176dcf18ce11
WARNING:datasets.builder:Found cached dataset imagefolder (/root/.cache/huggingface/datasets/imagefolder/default-9107176dcf18ce11/0.0.0/e872d3ec27c6c200a8881a4af52930df7eca3372b19aa4d0f5db74a2fded8141)
100% 1/1 [00:00<00:00, 45.80it/s]
Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 240, in main
task="image-classification",
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1713, in load_dataset
ds = ds.prepare_for_task(task)
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 1272, in prepare_for_task
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 1272, in <dictcomp>
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2172, in prepare_for_task
f"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}"
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!CUDA_VISIBLE_DEVICES=0 python3 run_image_classification.py \
--model_name_or_path facebook/convnext-tiny-224 \
--train_dir $TRAIN_DIR \
--output_dir $OUTPUT_DIR \
--do_train \
--do_eval \
--learning_rate 1e-5 \
--num_train_epochs 10 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337 \
--overwrite_output_dir
### Expected behavior
ValueError: Task image-classification is not compatible with this dataset! Available tasks: [] | 09-24-2022 01:52:59 | 09-24-2022 01:52:59 | I have a similar situation. I had image classification working a couple of weeks ago with a local (disk) dataset, using run_image_classification.py, but now that code no longer works.
I run the script within a Jupyter Notebook with
```
%run Experimento_5/run_image_classification.py\
--train_dir {ruta_dataset}\
--output_dir Experimento_5/modelos/\
--remove_unused_columns False\
--do_train\
--do_eval
```
And after it loads the dataset I get:
```
Dataset imagefolder downloaded and prepared to /root/.cache/huggingface/datasets/imagefolder/default-b8c6bbfc7a1635cf/0.0.0/e872d3ec27c6c200a8881a4af52930df7eca3372b19aa4d0f5db74a2fded8141. Subsequent calls will reuse this data.
100%
1/1 [00:00<00:00, 14.38it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /drive/Experimento_5/run_image_classification.py:388, in <module>
384 trainer.create_model_card(**kwargs)
387 if __name__ == "__main__":
--> 388 main()
File /drive/Experimento_5/run_image_classification.py:236, in main()
234 if data_args.validation_dir is not None:
235 data_files["validation"] = os.path.join(data_args.validation_dir, "**")
--> 236 dataset = load_dataset(
237 "imagefolder",
238 data_files=data_files,
239 cache_dir=model_args.cache_dir,
240 task="image-classification",
241 )
243 # If we don't have a validation split, split off a percentage of train as validation.
244 data_args.train_val_split = None if "validation" in dataset.keys() else data_args.train_val_split
File /usr/local/lib/python3.8/dist-packages/datasets/load.py:1713, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1711 # Rename and cast features to match task schema
1712 if task is not None:
-> 1713 ds = ds.prepare_for_task(task)
1714 if save_infos:
1715 builder_instance._save_infos()
File /usr/local/lib/python3.8/dist-packages/datasets/dataset_dict.py:1272, in DatasetDict.prepare_for_task(self, task, id)
1269 @is_documented_by(Dataset.prepare_for_task)
1270 def prepare_for_task(self, task: Union[str, TaskTemplate], id: int = 0) -> "DatasetDict":
1271 self._check_values_type()
-> 1272 return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File /usr/local/lib/python3.8/dist-packages/datasets/dataset_dict.py:1272, in <dictcomp>(.0)
1269 @is_documented_by(Dataset.prepare_for_task)
1270 def prepare_for_task(self, task: Union[str, TaskTemplate], id: int = 0) -> "DatasetDict":
1271 self._check_values_type()
-> 1272 return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File /usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py:2171, in Dataset.prepare_for_task(self, task, id)
2169 compatible_templates = [template for template in (self.info.task_templates or []) if template.task == task]
2170 if not compatible_templates:
-> 2171 raise ValueError(
2172 f"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}"
2173 )
2175 if not 0 <= id < len(compatible_templates):
2176 templates_list_str = "\n".join(
2177 f"- `{idx}` for task {template}" for idx, template in enumerate(compatible_templates)
2178 )
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
```<|||||>Solved it by downgrading to datasets==2.4.0<|||||>Cc @mariosasko<|||||>> Solved it by downgrading to datasets==2.4.0
Hi,
After downgrading ```datasets```, I needed to install ```evaluate``` again. I did it with ```pip install evaluate``` and ran the experiment. After doing so, I got this error:
```
[INFO|trainer.py:1628] 2022-09-29 17:35:38,675 >> ***** Running training *****
[INFO|trainer.py:1629] 2022-09-29 17:35:38,676 >> Num examples = 5394
[INFO|trainer.py:1630] 2022-09-29 17:35:38,676 >> Num Epochs = 10
[INFO|trainer.py:1631] 2022-09-29 17:35:38,676 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1632] 2022-09-29 17:35:38,676 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1633] 2022-09-29 17:35:38,676 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1634] 2022-09-29 17:35:38,676 >> Total optimization steps = 3380
0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 362, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1525, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1737, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2166, in __getitem__
key,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2151, in _getitem
pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 387, in format_row
formatted_batch = self.format_batch(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 418, in format_batch
return self.transform(batch)
File "run_image_classification.py", line 315, in train_transforms
_train_transforms(pil_img.convert("RGB")) for pil_img in example_batch["image"]
KeyError: 'image'
0% 0/3380 [00:00<?, ?it/s]
```
Do you have any idea?
Thanks!<|||||>@NielsRogge @mariosasko <|||||>Hi @NielsRogge,
I have tested this issue and it seems that the problem was still there:
```
0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 362, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1504, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1716, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2166, in __getitem__
key,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2151, in _getitem
pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 387, in format_row
formatted_batch = self.format_batch(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 418, in format_batch
return self.transform(batch)
File "run_image_classification.py", line 315, in train_transforms
_train_transforms(pil_img.convert("RGB")) for pil_img in example_batch["image"]
KeyError: 'image'
0% 0/3380 [00:00<?, ?it/s]
```
Do you have any suggestion?
Thanks!<|||||>Hi! We still need to make a patch release on the `datasets` side for my fix to take effect. In the meantime, you can install `datasets` directly from `main`:
```
pip install git+https://github.com/huggingface/datasets.git
```<|||||>When installing dataset directly from main, I got this error:
```
Traceback (most recent call last):
File "run_image_classification.py", line 36, in <module>
import evaluate
File "/usr/local/lib/python3.7/dist-packages/evaluate/__init__.py", line 37, in <module>
from .hub import push_to_hub
File "/usr/local/lib/python3.7/dist-packages/evaluate/hub.py", line 4, in <module>
from datasets.utils.metadata import known_task_ids
ImportError: cannot import name 'known_task_ids' from 'datasets.utils.metadata' (/usr/local/lib/python3.7/dist-packages/datasets/utils/metadata.py)
```<|||||>cc @lvwerra do we need to re-add known_task_ids to not break evaluate ? We moved those lists to the Hub, `datasets` doesn't contain any task list anymore<|||||>Is there a way we can get them from the Hub? Then we can replace this dependancy on `datasets`.<|||||>I don't think so, but the list is available here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
Anyway for the next release I think we need to still have the known_task_ids variable, otherwise it makes evaluate crash.
Do you think you could fix this on the evaluate side and do a release ?
Otherwise we can also re-add this list temporarily<|||||>@dxlong2000 we just released `datasets` 2.5.2 to fix this issue ;)<|||||>Hi @lvwerra @mariosasko,
I have installed
```
!pip install git+https://github.com/huggingface/transformers
!pip install -r requirements.txt
!pip install datasets==2.5.2
!pip install evaluate
```
and it seems that the problem is still there:
```
[INFO|trainer.py:1613] 2022-10-10 04:21:42,170 >> Total optimization steps = 3380
0% 0/3380 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 362, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1504, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1716, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2229, in __getitem__
key,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2214, in _getitem
pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 387, in format_row
formatted_batch = self.format_batch(pa_table)
File "/usr/local/lib/python3.7/dist-packages/datasets/formatting/formatting.py", line 418, in format_batch
return self.transform(batch)
File "run_image_classification.py", line 315, in train_transforms
_train_transforms(pil_img.convert("RGB")) for pil_img in example_batch["image"]
KeyError: 'image'
0% 0/3380 [00:00<?, ?it/s]
```<|||||>Hi,
This is another error I think, not related to datasets or evaluate. Do you have an "image" column in your dataset? Did you specify `--remove_unused_columns False` when running the script?<|||||>This issue seems to persist in datasets 2.6.1
After loading my dataset from the Hub (which was created locally using split-folders and then uploaded to the hub), the dataset looks ok:
```
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 4797
})
test: Dataset({
features: ['image', 'label'],
num_rows: 1200
})
})
```
, but then I get `ValueError: Task image-classification is not compatible with this dataset! Available tasks: []` when running `run_image_classification.py`<|||||>You can remove the `.prepare_for_task()` call in run_image_classification.py for now.
This is an issue with `datasets` not recognizing an existing task, feel free to open an issue on the `datasets` repo<|||||>The issue seems to persist. Is there any solution for this? `prepare_for_task()` is not called from run_image_classification.py (at least in the current version) @lhoestq <|||||>```
File "/workdir/transformer-sparsity/examples/pytorch/image-classification/run_image_classification_no_check.py", line 431, in <module>
task="image-classification",
File "/opt/conda/lib/python3.7/site-packages/datasets/load.py", line 1757, in load_dataset
main()
File "/workdir/transformer-sparsity/examples/pytorch/image-classification/run_image_classification_no_check.py", line 267, in main
task="image-classification",
File "/opt/conda/lib/python3.7/site-packages/datasets/load.py", line 1757, in load_dataset
ds = ds.prepare_for_task(task)
File "/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py", line 1278, in prepare_for_task
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py", line 1278, in <dictcomp>
ds = ds.prepare_for_task(task)
File "/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py", line 1278, in prepare_for_task
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2301, in prepare_for_task
return DatasetDict({k: dataset.prepare_for_task(task=task, id=id) for k, dataset in self.items()})
File "/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py", line 1278, in <dictcomp>
f"Task {task} is not compatible with this dataset! Available tasks: {list(unique_values(tasks))}"
ValueError: Task image-classification is not compatible with this dataset! Available tasks: []
```<|||||>@yazdanbakhsh what's your transformers + datasets version?<|||||>@NielsRogge Thanks for the message. I am using ViT + ImageNet-1K
https://huggingface.co/datasets/imagenet-1k
https://huggingface.co/google/vit-base-patch16-224
I am using head for installing HuggingFace.<|||||>Forgot to mention that I am using offline datasets. I pass the directory with arrow format. <|||||>My other solution is to use "load_from_disk" (I am adding this option to `run_image_classifiction`). I will update the issue if it works. <|||||>@NielsRogge I think the issue is that `self.info = self._load_info()` is not called when HF_DATASET_OFFLINE is true.
https://github.com/huggingface/datasets/blob/232a43943e87dfedcc328a9a3d3b4d89ea5c6627/src/datasets/builder.py#L788 |
transformers | 19,176 | closed | Bump protobuf from 3.19.4 to 3.19.5 in /examples/research_projects/decision_transformer | Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 3.19.4 to 3.19.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/protocolbuffers/protobuf/releases">protobuf's releases</a>.</em></p>
<blockquote>
<h2>Protocol Buffers v3.19.5</h2>
<h1>C++</h1>
<ul>
<li>Reduce memory consumption of MessageSet parsing</li>
<li>This release addresses a <a href="https://github.com/protocolbuffers/protobuf/security/advisories/GHSA-8gq9-2x98-w8hf">Security Advisory for C++ and Python users</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/b464cfbee18c71c40e761a5273ad369f3547294b"><code>b464cfb</code></a> Updating changelog</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/40859fb1c03bfbffe10cdb8009d08ff7e8d8a2f2"><code>40859fb</code></a> Updating version.json and repo version numbers to: 19.5</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/3b175f173903c934f5ba0d1726b430ddbce7ea56"><code>3b175f1</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10543">#10543</a> from deannagarcia/3.19.x</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/c05b5f3755af2f6a05c37cb0930373ac3e37463f"><code>c05b5f3</code></a> Add missing includes</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/0299c03005fbfe086d8394fb7a873a8a21fe327f"><code>0299c03</code></a> Apply patch</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/0a722f1573e629f8c3adc8fd4d298522b667548c"><code>0a722f1</code></a> Update version.json with "lts": true (<a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10533">#10533</a>)</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/d5eb60a56081930c706198e459480ab3204e435c"><code>d5eb60a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10530">#10530</a> from protocolbuffers/deannagarcia-patch-6</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/6cf1f78c27c15ae66fb7714798c82de24d4aa2a8"><code>6cf1f78</code></a> Update version.json</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/97fc8447c7b2441bff9b5be02d0964bfe4926302"><code>97fc844</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/protocolbuffers/protobuf/issues/10504">#10504</a> from deannagarcia/3.19.x</li>
<li><a href="https://github.com/protocolbuffers/protobuf/commit/29d60a2fa478d3c222a615c39cbf29918f194877"><code>29d60a2</code></a> Add version file</li>
<li>Additional commits viewable in <a href="https://github.com/protocolbuffers/protobuf/compare/v3.19.4...v3.19.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-23-2022 22:09:06 | 09-23-2022 22:09:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,175 | closed | Poc to use safetensors | # What does this PR do?
This PR introduces the necessary code changes to use safetensors weight file as a primary source. | 09-23-2022 18:43:32 | 09-23-2022 18:43:32 | Thank you for being verified!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks so clean !<|||||>Opened Hub Pull requests to convert the following weights:
- https://huggingface.co/roberta-base/discussions/3
- https://huggingface.co/roberta-large/discussions/1
- https://huggingface.co/gpt2/discussions/6
- https://huggingface.co/Jean-Baptiste/camembert-ner/discussions/1
- https://huggingface.co/openai/clip-vit-large-patch14/discussions/5
^will merge the 3 canonical ones to be able to test easily<|||||>note that you can also test from un-merged Hub PRs, you just have to pass the `refs/pr/:id` as a `revision`, for instance for gpt2:
```python
model = AutoModelForCausalLM.from_pretrained("gpt2", revision="refs/pr/6")
```
|
transformers | 19,174 | closed | Improving TrOCR results with LM 🚀 | # What does this PR do?
This PR adds `TrOCRProcessorWithLM`, which gives us the ability to add a KenLM language model to improve the evaluation metrics of TrOCR.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @patrickvonplaten @patil-suraj
| 09-23-2022 18:07:56 | 09-23-2022 18:07:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19174). All of your documentation changes will be reflected on that endpoint.<|||||>Hi,
Thanks for your PR! Wonder if it makes sense to add another language model on top of the decoder of TrOCR (which already is a language model)? For instance, beam search is already supported (as you can use the [generate](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method to autoregressively generate text). What would be the benefit of adding this other decoder on top?
Did you see a boost in performance?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,173 | closed | Fix doctest for `TFDeiTForImageClassification` | # What does this PR do?
Since TF 2.10, `tf.random.set_seed` with a fixed seed won't give the same model weights anymore. See the [release note](https://github.com/tensorflow/tensorflow/releases/tag/v2.10.0). We need `tf.keras.utils.set_random_seed()` for this purpose.
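In doctest terms, the change boils down to something like this (illustrative):
```python
import tensorflow as tf

# Seeds Python's `random`, NumPy and TensorFlow in one call, which keeps the
# randomly initialized classification head reproducible across runs.
tf.keras.utils.set_random_seed(3)
```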
This PR fixes the doctest for `TFDeiTForImageClassification` by using the above solution.
- I have to update the expected value however.
- I get the new expected value on a CPU VM. It should work on the GPU VM too, but let's keep an eye on the CI result.
| 09-23-2022 15:59:35 | 09-23-2022 15:59:35 | > LGTM! 👍
>
> is the line that sets the non-Keras seed still needed? (`>>> tf.random.set_seed(3)`)
No we don't need it :-) Removed it.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,172 | closed | Maskformer post-processing fixes and improvements | # What does this PR do?
- Improves MaskFormer docs, corrects minor typos
- Restructures `MaskFormerFeatureExtractor.post_process_panoptic_segmentation` for better readability, adds target_sizes argument for optional resizing
- Adds `post_process_semantic_segmentation` and `post_process_instance_segmentation` methods (a usage sketch follows the notes below).
- Adds a deprecation warning to the `post_process_segmentation` method in favour of `post_process_instance_segmentation`.
Notes:
This PR is part of a larger effort to ensure consistency of post-processing methods across segmentation models, to define common arguments and outputs, and get ImageSegmentationPipeline working with all available models.
- `post_process_semantic_segmentation` returns segmentations as tensors of shape `(height, width)`, which is consistent with the COCO format
- `post_process_instance_segmentation` returns segmentations either in the same format as the panoptic method or optionally in run-length encoded format (if `return_coco_format` is set to `True`).
- `post_process_semantic_segmentation` currently has an inconsistent input argument (`target_size` instead of `target_sizes`) and output (3D tensor instead of list of 2D tensors)
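A rough usage sketch of the new methods (checkpoint name, exact argument names and output keys are illustrative; see the caveats above):
```python
import requests
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Panoptic: per-pixel segment ids plus metadata per segment, resized back to the input image
panoptic = feature_extractor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
segmentation, segments = panoptic["segmentation"], panoptic["segments_info"]

# Semantic: a (height, width) map of class ids
semantic_map = feature_extractor.post_process_semantic_segmentation(outputs)[0]
```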
## Before submitting
- [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-23-2022 15:48:12 | 09-23-2022 15:48:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @amyeroberts @NielsRogge all comments are addressed, could you approve the PR if everything looks good? |
transformers | 19,171 | closed | german training, accelerate and model sharing | another continue of https://github.com/huggingface/transformers/issues/18564 @sgugger | 09-23-2022 15:05:44 | 09-23-2022 15:05:44 | waiting for the tests and docs rendering by HF docs bot to take a final view<|||||>You might need an empty commit to re-trigger the doc build job.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>seems like all links are working now
ready to merge from my side |
transformers | 19,170 | closed | Separate Push CI images from Scheduled CI | # What does this PR do?
⚠️ **Before merge, I need to build the new push CI images that have the new tags.**
Currently, if `setup.py` is changed, Push CI will re-build the CI images before running tests.
https://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/.github/workflows/self-push-caller.yml#L39-L43
However, this may cause different jobs in a scheduled CI workflow run to use images with different versions.
Recently, when `tokenizers` was bumped to `0.13`, some jobs failed in the scheduled CI because the new image shipped `tokenizers 0.13` while the `transformers` code in those runs still required `tokenizers < 0.13`.
**This PR separates the push CI images from scheduled CI.**
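A rough Python sketch of the kind of check involved (the real logic lives in the workflow file linked above, not in this snippet):
```python
import subprocess

def setup_py_changed(base_sha: str, head_sha: str) -> bool:
    """Return True if setup.py was touched between the two commits of a push."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base_sha}...{head_sha}"],
        capture_output=True, text=True, check=True,
    )
    return "setup.py" in diff.stdout.splitlines()

# If True, the image-build workflow runs first so the test jobs use images that
# match the dependency pins in setup.py.
```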
| 09-23-2022 14:47:05 | 09-23-2022 14:47:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Added in `Transformers testing design, internal document` on Notion.
<img width="545" alt="Screenshot 2022-09-23 191436" src="https://user-images.githubusercontent.com/2521628/192018244-918525d6-f9a8-4077-a8a1-e1ccc2b47a3b.png">
### Text version
The CI for a push event (to the main branch) checks whether `setup.py` has changed. If yes, it launches the docker image build CI before launching the actual tests, to make sure the tests run against the package versions specified in `setup.py`. To avoid conflicts with the daily scheduled CI, which should use the same image version for all jobs during a workflow run, we separate the CI images used for push events from those used for scheduled CI. The docker images used for push events take the tag of the images used in the corresponding scheduled CI jobs, but with the postfix `push-ci`. For example, `transformers-all-latest-gpu` in scheduled CI becomes `transformers-all-latest-gpu-push-ci` in push CI.
|
transformers | 19,169 | closed | Add offline runners info in the Slack report | # What does this PR do?
So we see which runners are offline directly in the report.
Currently, this information is added only if the check is run through `check_runner_status.yml`, where all runners are checked but the result is reported to the scheduled CI channel. Adding this information avoids confusion in the case where push/doctest CI runners are offline but are only reported in the scheduled CI channel.
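A rough sketch of how such a check could look (hypothetical helper; the actual notification script in the repo is more involved):
```python
import os
import requests

def offline_runners(repo="huggingface/transformers"):
    """List self-hosted runners currently reported as offline by the GitHub API."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/actions/runners",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    )
    resp.raise_for_status()
    return [r["name"] for r in resp.json()["runners"] if r["status"] == "offline"]

# The resulting names are then appended to the Slack payload of the CI report.
```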
| 09-23-2022 11:10:44 | 09-23-2022 11:10:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,168 | closed | fix HPO DDP GPU problem | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/18609
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger | 09-23-2022 09:42:27 | 09-23-2022 09:42:27 | @sgugger @spigo900 please try with this PR, it works for me to do HPO DDP with GPU<|||||>@yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,166 | closed | Add WhisperModel to transformers | # What does this PR do?
Adds Whisper to transformers
| 09-23-2022 07:38:25 | 09-23-2022 07:38:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Going forward, let's check generation with:
```python
#!/usr/bin/env python3
import whisper
import jiwer
import numpy as np
import torch
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor, WhisperTokenizer
from whisper.normalizers import EnglishTextNormalizer
normalizer = EnglishTextNormalizer()
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base.en")
processor = WhisperProcessor.from_pretrained("openai/whisper-base.en")
device = "cuda"
model = model.to(device).eval()
def map_fn(batch):
arrays = [x["array"] for x in batch["audio"]]
# -> here is a bug
input_features = processor.feature_extractor(arrays, padding="max_length", max_length=480_000, return_tensors="pt").input_features
input_features = input_features.to(device)
model.config.use_cache = False
sequences = model.generate(input_features, max_length=224, forced_bos_token_id=50362, decoder_start_token_id=50257)
results = processor.tokenizer.batch_decode(sequences, skip_special_tokens=True)
batch["hypotheses"] = [normalizer(result) for result in results]
batch["reference"] = [normalizer(text) for text in batch["text"]]
return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_fn, batch_size=16, remove_columns=ds.column_names, batched=True)
wer = jiwer.wer(list(ds["reference"]), list(ds["hypotheses"]))
print("Wer", wer)
```<|||||>Failing tests are related to the `RAG` model that re-uses the generate function.<|||||>Ready I think @patrickvonplaten <|||||>Hey @patrickvonplaten and @sgugger the PR is ready for a final review! 🤗 <|||||>Could we also add a script that runs each checkpoint in a 5-liner as discussed on Slack here?
These code snippets could then be added to the respective model cards.<|||||>Okay, so here is a simple example:
```python
>>> import datasets
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> model = WhisperForConditionalGeneration.from_pretrained(f"openai/whisper-large")
>>> processor = WhisperProcessor.from_pretrained(f"openai/whisper-large")
>>> ds = load_dataset("common_voice", "ja", split="test", streaming=True)
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
>>> ds_iter = iter(ds)
>>> input_speech = next(ds_iter)["audio"]["array"]
>>> inputs = processor(input_speech, return_tensors = "pt")
>>> predicted_ids = model.generate(**inputs)
>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True, normalize = True)[0]
'i borrowed a phone from kimura san'
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language = "ja", task = "transcribe")
>>> predicted_ids = model.generate(**inputs, forced_decoder_ids=forced_decoder_ids)
>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]
"木村さんに電話を貸してもらいました"
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language = "en", task = "transcribe")
>>> predicted_ids = model.generate(**inputs, forced_decoder_ids=forced_decoder_ids)
>>> processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]
' Kimura san ni denwa wo kaite moraimashita'
```<|||||>2 final things:
- Add 2 tests for batched generation
- Make sure the tokenizer has a pad_token_id => it should be identical to the eos_token_id since there is no official one. We don't want to trigger a warning every time we run generation in batch
- Also make sure that `config.pad_token_id` is correctly set.
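To make that concrete, a rough, hypothetical sketch of the label masking this implies for training scripts (not part of this PR): keep the first EOS so the model still learns to stop, and ignore only the padding that follows it.
```python
import torch

def mask_labels(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Replace trailing padding with -100 while keeping the first pad/eos token."""
    labels = labels.clone()
    is_pad = labels == pad_token_id
    # A position is "trailing" padding if it is a pad token and the previous one was too.
    trailing = is_pad & torch.cat([torch.zeros_like(is_pad[:, :1]), is_pad[:, :-1]], dim=1)
    labels[trailing] = -100
    return labels
```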
cc @sanchit-gandhi we have to remember this when doing fine-tuning experiments! Whisper has `pad_token_id == eos_token_id` which means that during training we need to make sure in our general training scripts that we don't replace the `eos_token_id` with `-100` and thus ignore it in the loss. Instead we should only replace the "not-first" pad_token_id with `-100` (we have the same for GPT2 BTW) <|||||>Hello!
I apologize for interrupting the development process. But I'm following this thread, because I'm really looking forward to the Whisper at HF, and here I also see words about fine tuning. It will be very cool if you can make good fine tuning and code examples!
I myself am already trying to finetune in different ways, but so far the model is only being unlearned.
In any case, thanks for your work and good luck! ❤️ <|||||>> Hello! I apologize for interrupting the development process. But I'm following this thread, because I'm really looking forward to the Whisper at HF, and here I also see words about fine tuning. It will be very cool if you can make good fine tuning and code examples!
>
> I myself am already trying to finetune in different ways, but so far the model is only being unlearned.
>
> In any case, thanks for your work and good luck! heart
Hey @ArtyomZemlyak,
This is a major focus of ours right now! We've already done some experiments - you can check it here:
https://openreview.net/forum?id=9OL2fIfDLK (we've fine-tuned whisper on a bunch of open-source datasets)
We hope to have a well-functioning fine-tuning script by early next week (we plan on doing a blog post + google colab)<|||||>Merging to unblock TF PR<|||||>Awesome, sorry for the delay! |
transformers | 19,165 | closed | would huggingface like support cpp env libtorch or rewrite the core code for cpp ? | ### System Info
libtorch 1.21
macos
Hi:
I want to use Hugging Face Transformers, but I find that the model API is almost entirely a Python API. I work in a C++ environment with libtorch. Would you consider supporting C++, or rewriting the core code in C++?
thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
no
### Expected behavior
no | 09-23-2022 07:04:22 | 09-23-2022 07:04:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,164 | closed | Evaluation of wav2vec2 model all labeled string return "<unk>" value | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Both have same issue
- $ pip freeze |grep datasets
datasets==2.4.0
### Who can help?
@patrickvonplaten
@anton-l
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the issue:
1. Download the [issue_report](https://drive.google.com/drive/folders/1M5xE4L_HBxBQynWyl6f1c-tJtat027d1?usp=sharing) folder to your local
2. open a command prompt and cd to the issue_report
3. run eval cmd: python ctc_finetune.py --eval
4. The loss is 1.0086 as the value of label_str always "unk", which printed at the line # 566 of [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing)
5. To re-generate the dataset cache files, please run: python customise_dataset.py
Here is the log printed at end of evaluation as following, please see the [full_log.log](https://drive.google.com/file/d/1hIMmxfLXOEm_Sx_g3100MvR2so1sthEM/view?usp=sharing) for more details :
***** Running Evaluation *****
Num examples = 91
Batch size = 4
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 5.54it/s]
pred_str[0]: THERE WERE BARRELS OF WINE IN THE SHU CELLOR
label_str[0]: <unk><unk><unk><unk><unk> <unk><unk><unk><unk> <unk><unk><unk><unk><unk><unk><unk> <unk><unk> <unk><unk><unk><unk> <unk><unk> <unk><unk><unk> <unk><unk><unk><unk> <unk><unk><unk><unk><unk><unk>
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.12it/s]
***** eval metrics *****
eval_loss = 4704.6416
eval_runtime = 0:00:06.64
eval_samples = 91
eval_samples_per_second = 13.697
eval_steps_per_second = 3.462
eval_wer = 1.0086
### Expected behavior
As I use the original pre-trained model facebook/wav2vec2-large-robust-ft-libri-960h for evaluation, the only change is my customized dataset.
I could not figure out what is wrong with my own modified scripts, which have only minor changes compared to the official example scripts.
So I am not sure whether the issue I encountered comes from my scripts or from the fine-tuning libraries.
Thanks in advance for helping me on this matter. | 09-23-2022 05:47:27 | 09-23-2022 05:47:27 | Are the labels in lower case?<|||||>Maybe of interest to @sanchit-gandhi as well<|||||>> Are the labels in lower case?
Yes, it's lower case
Here is what the csv file looks like:
$ head dataset.csv
path,transcription
wav/000010001.WAV,there were barrels of wine in the huge cellar
wav/000010002.WAV,she won a car because she was the twelfth person to call the radio station
wav/000010003.WAV,as they walked back they were shocked to see a pack of stray dogs circling around the car<|||||>Ok I think that's the issue. Your vocabulary likely only contains upper case letters. The tokenizer doesn't recognise lower case letters so it uses `<unk>` instead.
Try converting your transcription column to upper case and see if that fixes it.<|||||>> Ok I think that's the issue. Your vocabulary likely only contains upper case letters. The tokenizer doesn't recognise lower case letters so it uses `<unk>` instead.
>
> Try converting your transcription column to upper case and see if that fixes it.
Yeah, that is the root cause. After I changed it to upper case, the issue goes away:
Thank you so much for the troubleshooting.
***** Running Evaluation *****
Num examples = 91
Batch size = 4
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.42it/s]pred_str[0]: THERE WERE BARRELS OF WINE IN THE SHU CELLOR
label_str[0]: THERE WERE BARRELS OF WINE IN THE HUGE CELLAR
100%|███████████████████████████████████████████| 23/23 [00:03<00:00, 6.68it/s]
***** eval metrics *****
eval_loss = 118.7373
eval_runtime = 0:00:05.39
eval_samples = 91
eval_samples_per_second = 16.856
eval_steps_per_second = 4.26
eval_wer = 0.1228
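For anyone landing here later, a minimal sketch of the fix discussed above (file and column names taken from the csv snippet earlier in the thread):
```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="dataset.csv")["train"]
# Upper-case the transcriptions so they match the upper-case vocabulary of the tokenizer.
ds = ds.map(lambda ex: {"transcription": ex["transcription"].upper()})
print(ds[0]["transcription"])
```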
|
transformers | 19,163 | closed | Move AutoClasses under Main Classes | This PR proposes moving `Auto Classes` under the `Main Classes` section instead. As pointed out by @patrickvonplaten, the docs look a little strange because the `Auto Classes` doc isn't bold like the rest of the section and the header name doesn't exactly fit. Since bolded titles mean it is expandable we can't put `Auto Classes` in bold to align it with the rest of the section. `Auto Classes` also includes other things besides models (config, tokenizer, processor, etc.), so it doesn't exactly fit under `Models`. | 09-22-2022 20:57:15 | 09-22-2022 20:57:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,162 | closed | Fix TrainingArguments documentation | # What does this PR do?
@Rocketknight1 added a new class variable for `TrainingArguments` but put it before the docstring, which erased all doc for this class :grimacing: This PR fixes that. | 09-22-2022 18:05:49 | 09-22-2022 18:05:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>In retrospect I really should have foreseen that one, I'm sorry! |