repo (stringclasses, 1 value) | number (int64, 1 to 25.3k) | state (stringclasses, 2 values) | title (stringlengths, 1 to 487) | body (stringlengths, 0 to 234k) | created_at (stringlengths, 19) | closed_at (stringlengths, 19) | comments (stringlengths, 0 to 293k) |
---|---|---|---|---|---|---|---|
transformers | 19,466 | closed | Fix doctests for `DeiT` and `TFGroupViT` | # What does this PR do?
For `DeiT`: The parameter initialization is changed for some models in #19341. This can affect some tests where a checkpoint without a head is used (so the head is randomly initialized). cc @alaradirik
For `TFGroupViT`: We loved PyTorch too much (or not enough?) and want to use it in TensorFlow --> It's still too early :-) | 10-10-2022 15:53:05 | 10-10-2022 15:53:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,465 | closed | Update PT to TF CLI for audio models | # What does this PR do?
Fixes small issues which prevented converting checkpoints for the whisper model with the pt-to-tf CLI.
* `"raw_speech"` -> `"audio"`: updates the name of the inputs fed to the processor. This reflects the deprecation of `raw_speech` in [the audio processors](https://github.com/huggingface/transformers/blob/331ea019d7053924ee4d9d4d30282a2c74c272a6/src/transformers/models/wav2vec2/processing_wav2vec2.py#L79)
* Takes the feature extractor's default padding strategy if it's not `False`; otherwise sets it to `True`. This was needed because Whisper models must be padded to the maximum sequence length (not to the longest sequence in the batch), whereas other speech models' feature extractors can run with `padding=True` but don't have a max length set by default, so they fail with `padding="max_length"`, e.g. `"facebook/s2t-small-librispeech-asr"` (see the sketch below).
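For illustration, here is a minimal sketch of that padding selection; the checkpoint, the `padding` attribute lookup, and the variable names are assumptions made for the example, not the actual CLI code:
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/whisper-tiny")  # illustrative checkpoint
audio = [0.0] * 16_000  # one second of dummy audio at 16 kHz

# Prefer the extractor's own default strategy (e.g. "max_length" for Whisper);
# otherwise fall back to padding to the longest sequence in the batch.
default_padding = getattr(processor.feature_extractor, "padding", False)
padding = default_padding if default_padding is not False else True
inputs = processor(audio, sampling_rate=16_000, padding=padding, return_tensors="pt")
```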
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-10-2022 15:25:40 | 10-10-2022 15:25:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> This was causing the conversion script to throw exceptions with Whisper, correct?
@gante Exactly. I modified it locally to push the weights to the hub. This PR is a tidier version of the changes I made i.e. not breaking it for other models. |
transformers | 19,464 | closed | Update Marian config default vocabulary size | # What does this PR do?
Fixes #19296
Looking at existing Marian models, their vocabulary size is set to pad token id + 1 ([example](https://huggingface.co/Helsinki-NLP/opus-mt-de-en/blob/main/config.json#L59)). This PR modifies the default vocabulary size such that a) it doesn't throw exceptions (must be > pad token id) and b) preserves this property.
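A hedged sketch of the constraint this PR enforces (treat the assertion as an illustration of the rule, not library test code):
```python
from transformers import MarianConfig

config = MarianConfig()  # library defaults after this change
# The model can only be instantiated when the pad token has an embedding row,
# and existing checkpoints such as opus-mt-de-en keep vocab_size == pad_token_id + 1.
assert config.vocab_size > config.pad_token_id
```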
Alternatively, the default for `decoder_start_token_id` and `pad_token_id` can be reduced. | 10-10-2022 14:58:05 | 10-10-2022 14:58:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks like a bugfix to me if the model cannot be instantiated. Looks good to me. |
transformers | 19,463 | closed | [WIP] Add MANTa-LM | # Implement the MANTa-LM model (upcoming paper)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj | 10-10-2022 14:56:06 | 10-10-2022 14:56:06 | We're looking forward to it, @NathanGodey!
I'm pinging @ArthurZucker and @sgugger to make sure it's on their radar even if the implementation isn't ready to review. Let us know when you'd like for us to jump in and help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,462 | closed | missing double slash in link | Correcting the missing double slash in the community's notebook link
| 10-10-2022 14:25:27 | 10-10-2022 14:25:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19462). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,461 | closed | Fix `TFGroupViT` CI | # What does this PR do?
Fix 3 `TFGroupViT` CI failures.
There is a remaining one `FAILED tests/models/groupvit/test_modeling_tf_groupvit.py::TFGroupViTTextModelTest ::test_saved_model_creation_extended` which I think @Rocketknight1 will know better. | 10-10-2022 14:21:50 | 10-10-2022 14:21:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @Rocketknight1
There is a test failure `FAILED tests/models/groupvit/test_modeling_tf_groupvit.py::TFGroupViTTextModelTest ::test_saved_model_creation_extended`, see the error below. It happens at
```python
model = tf.keras.models.load_model(saved_model_dir)
outputs = model(class_inputs_dict)
```
It seems we provide arguments in `tf.int32`, but the loaded model expects `tf.int64`. I think you know this part much better.
Could you take a look - maybe open another PR if necessary? Thank you.
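As a possible interim workaround, one could cast the test inputs to the dtype the SavedModel was traced with; this is an assumption about a fix, not the merged solution:
```python
import tensorflow as tf

# class_inputs_dict comes from the test snippet above; cast its int32 tensors
# to int64 so they match the traced input signature of the loaded SavedModel.
class_inputs_dict = {k: tf.cast(v, tf.int64) for k, v in class_inputs_dict.items()}
outputs = model(class_inputs_dict)
```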
**Full Error**
```
2022-10-09T08:01:03.3261432Z __________ TFGroupViTTextModelTest.test_saved_model_creation_extended __________
2022-10-09T08:01:03.3261667Z
2022-10-09T08:01:03.3261940Z self = <tests.models.groupvit.test_modeling_tf_groupvit.TFGroupViTTextModelTest testMethod=test_saved_model_creation_extended>
2022-10-09T08:01:03.3262235Z
2022-10-09T08:01:03.3262337Z @slow
2022-10-09T08:01:03.3262575Z def test_saved_model_creation_extended(self):
2022-10-09T08:01:03.3262979Z config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
2022-10-09T08:01:03.3263352Z config.output_hidden_states = True
2022-10-09T08:01:03.3263664Z config.output_attentions = True
2022-10-09T08:01:03.3263941Z
2022-10-09T08:01:03.3264165Z if hasattr(config, "use_cache"):
2022-10-09T08:01:03.3264499Z config.use_cache = True
2022-10-09T08:01:03.3264758Z
2022-10-09T08:01:03.3264989Z for model_class in self.all_model_classes:
2022-10-09T08:01:03.3265379Z class_inputs_dict = self._prepare_for_class(inputs_dict, model_class)
2022-10-09T08:01:03.3392097Z model = model_class(config)
2022-10-09T08:01:03.3392497Z num_out = len(model(class_inputs_dict))
2022-10-09T08:01:03.3392721Z
2022-10-09T08:01:03.3392989Z with tempfile.TemporaryDirectory() as tmpdirname:
2022-10-09T08:01:03.3393340Z model.save_pretrained(tmpdirname, saved_model=True)
2022-10-09T08:01:03.3393690Z saved_model_dir = os.path.join(tmpdirname, "saved_model", "1")
2022-10-09T08:01:03.3394008Z model = tf.keras.models.load_model(saved_model_dir)
2022-10-09T08:01:03.3394308Z > outputs = model(class_inputs_dict)
2022-10-09T08:01:03.3394467Z
2022-10-09T08:01:03.3394617Z tests/models/groupvit/test_modeling_tf_groupvit.py:480:
2022-10-09T08:01:03.3394911Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-10-09T08:01:03.3395433Z /usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:70: in error_handler
2022-10-09T08:01:03.3395756Z raise e.with_traceback(filtered_tb) from None
2022-10-09T08:01:03.3396038Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-10-09T08:01:03.3396187Z
2022-10-09T08:01:03.3396605Z args = ({'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>, 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}, None, None, None, None, None, ...)
2022-10-09T08:01:03.3396964Z kwargs = {}
2022-10-09T08:01:03.3397499Z inputs = (({'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>, 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}, None, None, None, None, None, ...), {})
2022-10-09T08:01:03.3397859Z allow_conversion = True
2022-10-09T08:01:03.3398483Z function_name = '__inference_tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses_370446'
2022-10-09T08:01:03.3399135Z function = <ConcreteFunction tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses(input_ids, attention_mask=None, position_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=True) at 0x7F8CC81D4D90>
2022-10-09T08:01:03.3400102Z signature_descriptions = ["Option 1:\n Positional arguments (7 total):\n * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64...put_ids/input_ids')}\n * None\n * None\n * None\n * None\n * None\n * True\n Keyword arguments: {}"]
2022-10-09T08:01:03.3400797Z _pretty_format_positional = <function recreate_function.<locals>.restored_function_body.<locals>._pretty_format_positional at 0x7f8d003cb700>
2022-10-09T08:01:03.3401116Z index = 3
2022-10-09T08:01:03.3401601Z concrete_function = <ConcreteFunction tf_group_vi_t_text_model_36_layer_call_and_return_conditional_losses(input_ids, attention_mask=None, position_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=True) at 0x7F8CC81D4D90>
2022-10-09T08:01:03.3402494Z positional = ({'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'), 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}, None, None, None, None, None, ...)
2022-10-09T08:01:03.3402916Z keyword = {}
2022-10-09T08:01:03.3403035Z
2022-10-09T08:01:03.3403161Z def restored_function_body(*args, **kwargs):
2022-10-09T08:01:03.3403636Z """Calls a restored function or raises an error if no matching function."""
2022-10-09T08:01:03.3403949Z if not saved_function.concrete_functions:
2022-10-09T08:01:03.3404275Z raise ValueError("Found zero restored functions for caller function.")
2022-10-09T08:01:03.3404642Z # This is the format of function.graph.structured_input_signature. At this
2022-10-09T08:01:03.3405012Z # point, the args and kwargs have already been canonicalized.
2022-10-09T08:01:03.3405274Z inputs = (args, kwargs)
2022-10-09T08:01:03.3405478Z
2022-10-09T08:01:03.3405739Z # First try to find a concrete function that can be called without input
2022-10-09T08:01:03.3406101Z # conversions. This allows one to pick a more specific trace in case there
2022-10-09T08:01:03.3406417Z # was also a more expensive one that supported tensors.
2022-10-09T08:01:03.3406708Z for allow_conversion in [False, True]:
2022-10-09T08:01:03.3407008Z for function_name in saved_function.concrete_functions:
2022-10-09T08:01:03.3407321Z function = concrete_functions[function_name]
2022-10-09T08:01:03.3407622Z if any([inp is None for inp in function.captured_inputs]):
2022-10-09T08:01:03.3407955Z raise ValueError("Looks like you are trying to run a loaded "
2022-10-09T08:01:03.3408359Z "non-Keras model that was trained using "
2022-10-09T08:01:03.3408774Z "tf.distribute.experimental.ParameterServerStrategy "
2022-10-09T08:01:03.3409160Z "with variable partitioning, which is not currently "
2022-10-09T08:01:03.3409664Z "supported. Try using Keras to define your model "
2022-10-09T08:01:03.3409965Z "if possible.")
2022-10-09T08:01:03.3410294Z if _concrete_function_callable_with(function, inputs, allow_conversion):
2022-10-09T08:01:03.3410757Z return _call_concrete_function(function, inputs)
2022-10-09T08:01:03.3410980Z
2022-10-09T08:01:03.3411193Z signature_descriptions = []
2022-10-09T08:01:03.3411414Z
2022-10-09T08:01:03.3411651Z def _pretty_format_positional(positional):
2022-10-09T08:01:03.3411939Z return "Positional arguments ({} total):\n * {}".format(
2022-10-09T08:01:03.3412206Z len(positional),
2022-10-09T08:01:03.3412486Z "\n * ".join(pprint.pformat(a) for a in positional))
2022-10-09T08:01:03.3412735Z
2022-10-09T08:01:03.3413063Z for index, function_name in enumerate(saved_function.concrete_functions):
2022-10-09T08:01:03.3413387Z concrete_function = concrete_functions[function_name]
2022-10-09T08:01:03.3413721Z positional, keyword = concrete_function.structured_input_signature
2022-10-09T08:01:03.3414020Z signature_descriptions.append(
2022-10-09T08:01:03.3414290Z "Option {}:\n {}\n Keyword arguments: {}".format(
2022-10-09T08:01:03.3414622Z index + 1, _pretty_format_positional(positional), keyword))
2022-10-09T08:01:03.3414929Z > raise ValueError(
2022-10-09T08:01:03.3415383Z "Could not find matching concrete function to call loaded from the "
2022-10-09T08:01:03.3415807Z f"SavedModel. Got:\n {_pretty_format_positional(args)}\n Keyword "
2022-10-09T08:01:03.3416205Z f"arguments: {kwargs}\n\n Expected these arguments to match one of the "
2022-10-09T08:01:03.3416594Z f"following {len(saved_function.concrete_functions)} option(s):\n\n"
2022-10-09T08:01:03.3416955Z f"{(chr(10)+chr(10)).join(signature_descriptions)}")
2022-10-09T08:01:03.3417393Z E ValueError: Exception encountered when calling layer "tf_group_vi_t_text_model_36" " f"(type TFGroupViTTextModel).
2022-10-09T08:01:03.3417731Z E
2022-10-09T08:01:03.3418041Z E Could not find matching concrete function to call loaded from the SavedModel. Got:
2022-10-09T08:01:03.3418385Z E Positional arguments (7 total):
2022-10-09T08:01:03.3418885Z E * {'attention_mask': <tf.Tensor 'input_ids:0' shape=(12, 7) dtype=int32>,
2022-10-09T08:01:03.3419334Z E 'input_ids': <tf.Tensor 'input_ids_1:0' shape=(12, 7) dtype=int32>}
2022-10-09T08:01:03.3419619Z E * None
2022-10-09T08:01:03.3419837Z E * None
2022-10-09T08:01:03.3420048Z E * None
2022-10-09T08:01:03.3420245Z E * None
2022-10-09T08:01:03.3420455Z E * None
2022-10-09T08:01:03.3420668Z E * False
2022-10-09T08:01:03.3420890Z E Keyword arguments: {}
2022-10-09T08:01:03.3421125Z E
2022-10-09T08:01:03.3421420Z E Expected these arguments to match one of the following 4 option(s):
2022-10-09T08:01:03.3421707Z E
2022-10-09T08:01:03.3421894Z E Option 1:
2022-10-09T08:01:03.3422147Z E Positional arguments (7 total):
2022-10-09T08:01:03.3422648Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/attention_mask'),
2022-10-09T08:01:03.3423198Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}
2022-10-09T08:01:03.3423494Z E * None
2022-10-09T08:01:03.3423709Z E * None
2022-10-09T08:01:03.3423922Z E * None
2022-10-09T08:01:03.3424136Z E * None
2022-10-09T08:01:03.3424405Z E * None
2022-10-09T08:01:03.3424613Z E * False
2022-10-09T08:01:03.3424850Z E Keyword arguments: {}
2022-10-09T08:01:03.3425065Z E
2022-10-09T08:01:03.3425242Z E Option 2:
2022-10-09T08:01:03.3425478Z E Positional arguments (7 total):
2022-10-09T08:01:03.3425975Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/attention_mask'),
2022-10-09T08:01:03.3426635Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}
2022-10-09T08:01:03.3426902Z E * None
2022-10-09T08:01:03.3427098Z E * None
2022-10-09T08:01:03.3427397Z E * None
2022-10-09T08:01:03.3427576Z E * None
2022-10-09T08:01:03.3427742Z E * None
2022-10-09T08:01:03.3427922Z E * True
2022-10-09T08:01:03.3428119Z E Keyword arguments: {}
2022-10-09T08:01:03.3428303Z E
2022-10-09T08:01:03.3428476Z E Option 3:
2022-10-09T08:01:03.3428729Z E Positional arguments (7 total):
2022-10-09T08:01:03.3429129Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'),
2022-10-09T08:01:03.3429670Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}
2022-10-09T08:01:03.3429926Z E * None
2022-10-09T08:01:03.3430109Z E * None
2022-10-09T08:01:03.3430288Z E * None
2022-10-09T08:01:03.3430450Z E * None
2022-10-09T08:01:03.3430633Z E * None
2022-10-09T08:01:03.3430814Z E * False
2022-10-09T08:01:03.3431017Z E Keyword arguments: {}
2022-10-09T08:01:03.3431193Z E
2022-10-09T08:01:03.3431371Z E Option 4:
2022-10-09T08:01:03.3431585Z E Positional arguments (7 total):
2022-10-09T08:01:03.3431991Z E * {'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name='attention_mask'),
2022-10-09T08:01:03.3432620Z E 'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name='input_ids/input_ids')}
2022-10-09T08:01:03.3432891Z E * None
2022-10-09T08:01:03.3433073Z E * None
2022-10-09T08:01:03.3433259Z E * None
2022-10-09T08:01:03.3433417Z E * None
2022-10-09T08:01:03.3433589Z E * None
2022-10-09T08:01:03.3433769Z E * True
2022-10-09T08:01:03.3433955Z E Keyword arguments: {}
2022-10-09T08:01:03.3434128Z E
2022-10-09T08:01:03.3434408Z E Call arguments received by layer "tf_group_vi_t_text_model_36" " f"(type TFGroupViTTextModel):
2022-10-09T08:01:03.3434960Z E β’ args=({'input_ids': 'tf.Tensor(shape=(12, 7), dtype=int32)', 'attention_mask': 'tf.Tensor(shape=(12, 7), dtype=int32)'},)
2022-10-09T08:01:03.3435308Z E β’ kwargs={'training': 'False'}
``` |
transformers | 19,460 | closed | TF: TFBart embedding initialization | # What does this PR do?
### Context
We were initializing the embeddings as `TFSharedEmbeddings(config.vocab_size, config.d_model, config.pad_token_id, name="model.shared")`. Notice the 3rd argument, the pad token id, which is the 3rd argument in `nn.Embedding`. However, for `TFSharedEmbeddings`, [the 3rd argument is the initializer range](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/modeling_tf_utils.py#L2837). This means that some models, like TFMarian, were initializing the embeddings with very [large values](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/models/marian/configuration_marian.py#L135) (stddev=58100).
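To make the mix-up concrete, a sketch of the two calls; the corrected keyword follows `TFSharedEmbeddings`' signature, but treat the exact arguments as an illustration rather than the final diff:
```python
from transformers.modeling_tf_utils import TFSharedEmbeddings

# Buggy: the third positional argument of TFSharedEmbeddings is the initializer
# standard deviation, not a padding index as in torch.nn.Embedding, so Marian's
# pad_token_id (58100) silently became the init stddev.
shared = TFSharedEmbeddings(config.vocab_size, config.d_model, config.pad_token_id, name="model.shared")

# Corrected: pass the intended initialization range explicitly.
shared = TFSharedEmbeddings(config.vocab_size, config.d_model, initializer_range=config.init_std, name="model.shared")
```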
### Changes
This PR correctly sets the embedding initialization according to the configuration parameter related to weight initialization range. It includes proper weight initialization when the embeddings are resized.
This PR will be used as a reference for embedding weight initialization, regarding the embedding update that is happening in the codebase at the moment.
### Discussion for the future
PT sets the weights in a top-down fashion, with `_init_weights` conveniently fetching information from the `config` and then initializing the weights. On TF, weight initialization is defined in a bottom-up fashion. This means that if we want to replicate the PT initialization for all TF weights, we need to pass the `config` all the way down to the individual layers (= verbose and needs manual changes in many places for all models).
Alternatively, we can replicate the `_init_weights` logic in TF, and manually set the weights after initializing the model. IMO that would be much cleaner, despite not being the way Keras expects weights to be set. | 10-10-2022 14:01:27 | 10-10-2022 14:01:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,459 | closed | corrected not working link. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
| 10-10-2022 13:39:19 | 10-10-2022 13:39:19 | @sgugger sir could you review this up. <|||||>@julien-c if u could check this up then please check it once.
<|||||>@amyeroberts ma'am, if you are free then please check. Or anyone who is free, please check.<|||||>Hi @YOGENDERSS , I think this PR is a duplicate of mine https://github.com/huggingface/transformers/pull/19434<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>sorry, I didn't know it was open already.<|||||>@MikailINTech thanks buddy<|||||>@YOGENDERSS No worries |
transformers | 19,458 | closed | fix warnings in deberta | # What does this PR do?
In recent torch versions, the following warning is thrown on the deberta code:
```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
p2c_att = torch.matmul(key_layer, torch.tensor(pos_query_layer.transpose(-1, -2), dtype=key_layer.dtype))
```
This PR fixes the warnings by using `.to(dtype=...)` rather than `torch.tensor(..., dtype=...)`.
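For reference, a before/after sketch; the first line is the call quoted in the warning above, the second assumes the PR's `.to(dtype=...)` rewrite:
```python
# Before: re-wrapping an existing tensor with torch.tensor() triggers the copy-construct warning
p2c_att = torch.matmul(key_layer, torch.tensor(pos_query_layer.transpose(-1, -2), dtype=key_layer.dtype))

# After: a plain dtype cast avoids the warning and keeps the autograd graph intact
p2c_att = torch.matmul(key_layer, pos_query_layer.transpose(-1, -2).to(dtype=key_layer.dtype))
```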
| 10-10-2022 13:19:45 | 10-10-2022 13:19:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik any thoughts? |
transformers | 19,457 | closed | Add docstrings for canine model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-10-2022 13:18:47 | 10-10-2022 13:18:47 | cc @ydshieh <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ping @ydshieh <|||||>@raghavanone I updated your PR to make it work for `CanineForTokenClassification`.
Thank you for your work!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>pong @sgugger ! |
transformers | 19,456 | closed | Original image from TrOCR Processor | Hi @NielsRogge, I am following your TrOCR finetuning with PyTorch but I have two questions: The processor resizes the image from 1700 x 134 to 384 x 384. 1) Is there a way to maintain the height of the original image, or even use a custom dimension for training, e.g. 512 x 134? And 2) is there a way to get the original image back for logging purposes, as the processed image is unrecognizable after those basic augmentations. Thanks | 10-10-2022 12:59:00 | 10-10-2022 12:59:00 | Hi,
This question is answered on our forum: https://discuss.huggingface.co/t/get-original-image-from-trocr-processor/24224/2 |
transformers | 19,455 | closed | Extend `nested_XXX` functions to mappings/dicts. | Extended `nested_XXX` trainer pt utility functions to work with mappings (dict, OrderedDict, etc.)
Some classes that model the models' outputs inherit from `ModelOutput`, which in turn is an `OrderedDict`. Currently, when applying these `nested_XXX` functions to these model outputs, the code fails with an error.
Extending these nested utilities to work with dictionaries fixes this issue. | 10-10-2022 12:55:36 | 10-10-2022 12:55:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for your feedback @sgugger .
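For illustration, a minimal sketch of the mapping-aware recursion this PR describes (simplified; the function name and structure are assumptions rather than the exact diff):
```python
from collections.abc import Mapping

def nested_detach(tensors):
    """Detach `tensors`, even if it is a nested list/tuple/dict of tensors."""
    if isinstance(tensors, (list, tuple)):
        return type(tensors)(nested_detach(t) for t in tensors)
    if isinstance(tensors, Mapping):
        return type(tensors)({k: nested_detach(t) for k, t in tensors.items()})
    return tensors.detach()
```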
Suggestions and style applied!<|||||>Thanks a lot! |
transformers | 19,454 | closed | Fix TF batch norm momentum and epsilon values | # What does this PR do?
Updates momentum values for TF batch norm layers to match the pytorch models'.
The momentum value for PyTorch and TensorFlow batch normalization layers is not equivalent, as pointed out by @mathieujouffroy [here](https://github.com/huggingface/transformers/pull/18597#issuecomment-1263381794)
The TensorFlow value should be (1 - pytorch_momentum) in order to ensure the correct updates are applied to the running mean and running variance calculations. We wouldn't observe a difference loading a pretrained model and performing inference, but evaluation outputs would change after some training steps.
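A small sketch of the equivalence, using the documented update rules of both frameworks:
```python
import tensorflow as tf
import torch.nn as nn

pt_momentum = 0.1  # PyTorch default
# PyTorch: running_stat = (1 - momentum) * running_stat + momentum * batch_stat
pt_bn = nn.BatchNorm2d(64, momentum=pt_momentum)
# Keras:   moving_stat  = momentum * moving_stat  + (1 - momentum) * batch_stat
tf_bn = tf.keras.layers.BatchNormalization(momentum=1.0 - pt_momentum)  # i.e. 0.9
```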
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 10-10-2022 12:37:23 | 10-10-2022 12:37:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,453 | closed | Added support for multivariate independent emission heads | # What does this PR do?
This adds support for multivariate independent emission heads to the time series transformer model. | 10-10-2022 12:22:25 | 10-10-2022 12:22:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>indeed! Somehow I do not get this failing test locally... any idea what could be wrong?<|||||>thank you!
|
transformers | 19,452 | closed | Different behaviour of AutoTokenizer and T5Tokenizer | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `T5Tokenizer` prepends a whitespace before the eos token when a new eos token is provided, while AutoTokenizer maintains the usual behaviour.
```python
from transformers import AutoTokenizer, T5Tokenizer
text = ["My name is Pietro", "I love pizza"]
tok = T5Tokenizer.from_pretrained("t5-small", bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
auto_tok = AutoTokenizer.from_pretrained("t5-small", bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
print(tok.batch_decode(tok(text)["input_ids"]))
print(auto_tok.batch_decode(tok(text)["input_ids"]))
#> ['My name is Pietro [eos]', 'I love pizza [eos]']
#> ['My name is Pietro[eos]', 'I love pizza[eos]']
tok = T5Tokenizer.from_pretrained("t5-small")
auto_tok = AutoTokenizer.from_pretrained("t5-small")
print(tok.batch_decode(tok(text)["input_ids"]))
print(auto_tok.batch_decode(tok(text)["input_ids"]))
#> ['My name is Pietro</s>', 'I love pizza</s>']
#> ['My name is Pietro</s>', 'I love pizza</s>']
```
### Expected behavior
The two tokenizer classes should be equivalent | 10-10-2022 12:19:45 | 10-10-2022 12:19:45 | When you're using `T5Tokenizer`, you're loading the slow version. When using `AutoTokenizer`, you're loading the fast version by default.
Unfortunately, due to a difference in implementation, the fast and slow tokenizer can have some differences.
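A quick way to see (and control) which implementation you get:
```python
from transformers import AutoTokenizer, T5Tokenizer, T5TokenizerFast

assert isinstance(AutoTokenizer.from_pretrained("t5-small"), T5TokenizerFast)  # fast by default
assert isinstance(AutoTokenizer.from_pretrained("t5-small", use_fast=False), T5Tokenizer)  # explicit slow
```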
Pinging @SaulLu and @ArthurZucker for knowledge<|||||>Yes, noticed the same thing with GPT2, especially with OOV that are automatically converted to blank with the fast version! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,451 | closed | Fix repo names for ESM tests | This should cause ESM tests to stop erroring out all the time! Cc @amyeroberts @sgugger | 10-10-2022 11:49:11 | 10-10-2022 11:49:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,450 | closed | Add LiLT | # What does this PR do?
This PR adds LiLT, a simple way to extend LayoutLM to any language that has a pre-trained RoBERTa checkpoint.
To do:
- [x] setup new organization, transfer checkpoints
- [x] make tests faster
- [x] remove is_decoder logic | 10-10-2022 10:23:04 | 10-10-2022 10:23:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,449 | open | [WIP] Fix weights initialization of several vision models | # What does this PR do?
This PR is a follow-up of #19341, to make sure weights are properly initialized when training vision models from scratch. | 10-10-2022 09:47:04 | 10-10-2022 09:47:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19449). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,448 | closed | OWL-ViT | ### Feature request
Dear all,
It looks like [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) doesn't support image conditioned detection.
### Motivation
Image conditioned detection is the most appealing feature of this model
### Your contribution
No, just pointing this out. Thanks | 10-10-2022 09:37:27 | 10-10-2022 09:37:27 | Hi Francesco, this feature was already asked and a PR to add this feature can be found here: #18891 <|||||>nice, thanks @NielsRogge |
transformers | 19,447 | closed | T5ForConditionalGeneration output differently with the same batch input | ### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik @patrickvonplaten, @Narsil, @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It is actually modified from [FiD](https://github.com/facebookresearch/FiD). In the code, I define a model that inherits from `T5ForConditionalGeneration`. I also count the number of `forward` calls in some layers, and it shows a different number of `forward` calls.
```
import transformers

class FiDT5(transformers.T5ForConditionalGeneration):
    def __init__(self, config):
        ...

    def generate(self, input_ids, attention_mask, max_length):
        # flatten the (batch, n_passages, seq_len) inputs before delegating to the stock generate()
        self.encoder.n_passages = input_ids.size(1)
        return super().generate(
            input_ids=input_ids.view(input_ids.size(0), -1),
            attention_mask=attention_mask.view(attention_mask.size(0), -1),
            max_length=max_length
        )

t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
model = FiDT5(t5.config)
model.load_t5(t5.state_dict())

for i, batch in enumerate(dataloader):
    (idx, _, _, context_ids, context_mask) = batch
    outputs = model.generate(
        input_ids=context_ids.cuda(),
        attention_mask=context_mask.cuda(),
        max_length=50,
    )
    print(outputs)
```
### Expected behavior
With two identical batches, it prints
```
tensor([[ 0, 22789, 9, 3038, 16924, 2060, 1]], device='cuda:0')
tensor([[ 0, 17724, 5500, 7059, 1]], device='cuda:0')
```
And it actually runs a different number of layer `forward` calls; for example, in the first batch it runs 288 times, but in the second it runs 216 times. | 10-10-2022 07:21:00 | 10-10-2022 07:21:00 | Hi @CaffreyR
Can you share a snippet where I can fully reproduce the issue locally? Also -- am I right in saying that the issue is that the exact same input might result in different outputs, using `model.generate()`?<|||||>Hi @gante , thanks for your kind reply.
Sorry but the full code has not been released. It is actually modified from the code of facebook [FID](https://github.com/facebookresearch/FiD). I count the time in the evaluation
https://github.com/facebookresearch/FiD/blob/main/test_reader.py#L36
You can add the timing code in the `for` loop:
```
for i, batch in enumerate(dataloader):
    (idx, _, _, context_ids, context_mask) = batch
    torch.cuda.synchronize()
    import time
    start = time.perf_counter()
    if opt.write_crossattention_scores:
        model.reset_score_storage()
    outputs = model.generate(
        input_ids=context_ids.cuda(),
        attention_mask=context_mask.cuda(),
        max_length=50,
    )
    if opt.write_crossattention_scores:
        crossattention_scores = model.get_crossattention_scores(context_mask.cuda())
    for k, o in enumerate(outputs):
        ans = tokenizer.decode(o, skip_special_tokens=True)
        example = dataset.data[idx[k]]
        if 'answers' in example:
            score = src.evaluation.ems(ans, example['answers'])
            exactmatch.append(score)
        if opt.write_results:
            fw.write(str(example['id']) + "\t" + ans + '\n')
        if opt.write_crossattention_scores:
            for j in range(context_ids.size(1)):
                example['ctxs'][j]['score'] = crossattention_scores[k, j].item()
        total += 1
    if (i + 1) % opt.eval_print_freq == 0:
        log = f'Process rank:{opt.global_rank}, {i+1} / {len(dataloader)}'
        if len(exactmatch) == 0:
            log += '| no answer to compute scores'
        else:
            log += f' | average = {np.mean(exactmatch):.3f}'
        logger.warning(log)
    torch.cuda.synchronize()
    end = time.perf_counter()
    print(end-start)
logger.warning(f'Process rank:{opt.global_rank}, total {total} | average = {np.mean(exactmatch):.3f}')
if opt.is_distributed:
    torch.distributed.barrier()
score, total = src.util.weighted_average(np.mean(exactmatch), total, opt)
return score, total
```
Regarding the `outputs`, there are actually 3 different observations:
- [The outputs of model generate](https://github.com/facebookresearch/FiD/blob/main/test_reader.py#L42)
- The number of `forward` calls in some layers.
- The time (some runs take 1.5 times as long as others).
I think the difference in the outputs of `model.generate` can be fixed by loading the same model weights, but the second and third still differ.
Many thanks again for your time
Best,
CaffreyR<|||||>@CaffreyR without an exact script, I am limited in what I can do :) I understand your limitations, but the problem you are describing can come from many places.
In essence, `generate()` can have variable outputs (which leads to different execution times) for the same input in two circumstances:
1. `generate()` is configured to not be deterministic. If `transformers` `generate()` is being used without modifications, this should only be possible with the `do_sample=True` argument.
2. the model is not the same between `generate()` calls.
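For illustration, circumstance 1 in code (standard `generate` arguments on the model and inputs from the thread above; values are arbitrary):
```python
# Greedy search: deterministic, the same input always yields the same output
outputs = model.generate(input_ids, do_sample=False, max_length=50)

# Sampling: stochastic, outputs (and hence runtimes) can differ between calls
outputs = model.generate(input_ids, do_sample=True, top_k=50, max_length=50)
```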
<|||||>Hi @gante , thanks again for your reply. It actually does not modify anything, see [here](https://github.com/facebookresearch/FiD/blob/main/src/model.py#L51); it just uses the `generate()` from `transformers.T5ForConditionalGeneration`.
And what is `do_sample=True`? Because in the code here:
https://github.com/facebookresearch/FiD/blob/main/test_reader.py#L115
There is a sampler; does it match circumstance 1? Could you please explain it more? Thanks
```
from torch.utils.data import DataLoader, SequentialSampler

eval_examples = src.data.load_data(
    opt.eval_data,
    global_rank=opt.global_rank,  # use the global rank and world size attributes to split the eval set on multiple gpus
    world_size=opt.world_size,
)
eval_dataset = src.data.Dataset(
    eval_examples,
    opt.n_context,
)
eval_sampler = SequentialSampler(eval_dataset)
eval_dataloader = DataLoader(
    eval_dataset,
    sampler=eval_sampler,
    batch_size=opt.per_gpu_batch_size,
    num_workers=20,
    collate_fn=collator_function,
)
```
<|||||>It shouldn't be related, `SequentialSampler` only touches the data, not the `generate()` method.
As for an explanation of `do_sample`, you can refer to our [docs](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) or our [blog post](https://huggingface.co/blog/how-to-generate).
Please note that without a full reproduction script I won't give further support here. As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository (with clear reproducibility) and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/).<|||||>Hi @CaffreyR ,
Maybe it's because of the `t5-base` configuration? https://huggingface.co/t5-base/blob/main/config.json#L21
These lines modify the default options of `generate` for this model.<|||||>Hi @Narsil , thanks for your kind reply. Do you mean `task_specific_params`? Could you please explain more? What is the default option, and how does it modify them? Thanks!<|||||>the pipeline reads `task_specific_params` and overrides the default when it's present.
We realized this wasn't super discoverable, so very few models have this feature being used, but I happen to remember this one does.
So if you're using `t5-base` as a `summarization` pipeline (which I think is the default), then the pipeline will use those defaults and treat them as regular params; it happens that these control the `generate_kwargs` of `generate`.
Sometimes models also have defaults in the `config` (same idea just it's for the whole model and does not depend on the actual task).
Neither of these mechanisms is really great at showing users what happens, but it's great to try and provide sane defaults (or the ones used in the original repo/original paper).
If you want to override any, you just need to supply yours directly to `generate`, for instance.
`User specified > Config > Default` is the order of resolution (`pipeline` has a few more rules, but you're not using them in fact).
<|||||>Hi @Narsil , thanks for your explanation. So what should I do? The code here just uses `t5.config`. Do I need to delete `task_specific_params` in this case?<|||||>@CaffreyR
You can:
- Override the params directly in the pipeline
```python
pipeline(model="t5-base",
    **{
        "early_stopping": True,  # Python booleans, not JSON-style `true`
        "length_penalty": None,
        "max_length": None,
        "min_length": None,
        "no_repeat_ngram_size": None,
        "num_beams": None,
        "prefix": "summarize: ",  # You probably want to keep this for summarization as it's how the model was trained
    })
```
Or deactivate them altogether by loading the model before the pipeline
```python
model = AutoModelForXXX.from_pretrained("t5-base")
model.config.task_specific_params = None
pipe = pipeline(task="summarization", model=model, tokenizer=tokenizer)
```
Would either solution work for you?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,446 | closed | Add LongT5 to AutoConfig | ### System Info
config = AutoConfig.from_pretrained("google/long-t5-local-base")
gives a KeyError: 'longt5'
I am trying to run this as it's part of SentenceTransformer package when initialising a search model - could longT5 be added to AutoConfig?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
config = AutoConfig.from_pretrained("google/long-t5-local-base")
### Expected behavior
The config is loaded | 10-10-2022 05:22:54 | 10-10-2022 05:22:54 | Hey @robbohua -- I can run the script you shared without problems on my end. What version of `transformers` are you using? :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,445 | closed | Anything but plain "greedy" search "not implemented for 'Half'" | ### System Info
`transformers-cli env`=
```
Traceback (most recent call last):
File "/home/user/.local/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing
### Who can help?
@sgugger, @patil-suraj
I'm following:
https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F
from:
https://huggingface.co/blog/hf-bitsandbytes-integration
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I love being able to use the int8 models! And being able to run the big LLMs on Colab. However, I just discovered that only the default search algo works. This is too bad, because for a chat application, the TopK sampling provides much more natural variation.
Beam:
RuntimeError: "log_softmax_lastdim_kernel_impl" not implemented for 'Half'
Topk:
RuntimeError: "topk_cpu" not implemented for 'Half'
### Expected behavior
being able to use TopK and TopP sampling on int8 optimized models | 10-09-2022 22:42:08 | 10-09-2022 22:42:08 | Hey @auwsom !
Thanks for your message! Indeed it would be better to support all the possible sampling procedures for 8-bit models.
There is definitely something around half-precision logits and `generate` that needs a closer look!
Also this seems to be a duplicate of https://github.com/TimDettmers/bitsandbytes/issues/42#issuecomment-1272877078 - so tagging the issue here
Will look into it ASAP! <|||||>Hey @auwsom !
Thanks for your patience!
It appears that the workaround is pretty much straightforward: could you run `generate` with `input_ids` set to a GPU device? For example by making sure that:
```
input_ids = input_ids.to('cuda')
```
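Putting it together, a hedged end-to-end sketch; the checkpoint name and generation arguments are illustrative, not taken from the issue:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7", device_map="auto", load_in_8bit=True)

# The key line: keep the inputs on GPU so the sampling/beam kernels run on CUDA, not on CPU fp16
input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, do_sample=True, top_k=50, max_length=50)
print(tokenizer.decode(outputs[0]))
```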
`generate` yields an error since instantiating a model with `device_map="auto"` forces the output of the model to be on the same device as the input. In the snippet in https://github.com/TimDettmers/bitsandbytes/issues/42#issuecomment-1272877078 the `input_ids` are set on the `cpu`. I believe that making sure these are on the GPU should do the trick for you while waiting for a proper fix to be merged in #19468. Could you confirm that this workaround fixes your issue? Thanks!<|||||>@younesbelkada yes, this works on the example notebook on Colab. Thanks! |
transformers | 19,444 | closed | Syntax issues (lines 126, 203) Documentation: @sgugger | # What does this PR do?
Syntax issues (lines 126, 203)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
No previous issue related
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c
@donelianc
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 19:52:40 | 10-09-2022 19:52:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your contribution! |
transformers | 19,443 | closed | Attention mask fixed. Documentation: @sgugger | # What does this PR do?
* Attention mask fixed (line 217)
* typo fixed (paragraph 326)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
No previous issue related
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@osanseviero
@Narsil
@ydshieh
@sgugger
@omarespejel
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 19:32:59 | 10-09-2022 19:32:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19443). All of your documentation changes will be reflected on that endpoint.<|||||>Same as your other PRs, there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,442 | closed | Syntax issues Documentation: @sgugger | # What does this PR do?
Syntax issues (lines 497, 526)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel
@amyeroberts
@yharyarias
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 18:58:04 | 10-09-2022 18:58:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Permissions refreshed @sgugger |
transformers | 19,441 | closed | [WIP] Add type hints for Lxmert (TF) | # What does this PR do?
This PR adds type hints to the Lxmert model for TensorFlow.
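For readers unfamiliar with the convention, the change is roughly the sketch below — the class and argument names here are illustrative, not the exact Lxmert signature:

```python
from typing import Optional, Tuple, Union

import numpy as np
import tensorflow as tf

class TFLxmertSketch:
    # Hypothetical method: the real TFLxmertModel.call has more arguments,
    # but the annotation style added by this PR is the same.
    def call(
        self,
        input_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
        output_attentions: Optional[bool] = None,
        training: bool = False,
    ) -> Union[Tuple, tf.Tensor]:
        ...
```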
Models:
Lxmert: @LysandreJik
| 10-09-2022 18:27:14 | 10-09-2022 18:27:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Is there anything else I need to add to this PR?<|||||>@elusenji No, sorry! I meant to merge it after the tests passed but lost track of it yesterday. Doing it now, and thanks again! |
transformers | 19,440 | closed | Backtick fixed (paragraph 68) Documentation: @sgugger | # What does this PR do?
* Backtick fixed (paragraph 68)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 18:12:19 | 10-09-2022 18:12:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,439 | closed | Wrap VisualBERT integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in VisualBERT integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-09-2022 17:34:06 | 10-09-2022 17:34:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,438 | closed | Wrap RoFormer integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in RoFormer integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-09-2022 17:27:09 | 10-09-2022 17:27:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,437 | closed | Syntax issues (paragraphs 122, 130, 147, 155) Documentation: @sgugger |
# What does this PR do?
* Syntax issues (paragraphs 122, 130, 147, 155): `preentramiento` > `preentrenamiento`
* semantic issue (paragraph 220 & 232 & 252)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Resolves no previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@omarespejel
@ignacioct
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 16:54:19 | 10-09-2022 16:54:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Same as on your other PR, it seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Permissions refreshed @sgugger <|||||>Hi @kant, it didn't work, but I should have fixed the issue on our side. Could you make sure to accept the suggestion above?<|||||>I accept the above suggestion, @sgugger <|||||>So please click the button to commit it.<|||||>Confirmed commit @sgugger
transformers | 19,436 | closed | Fixed duplicated line (paragraph #83) Documentation: @sgugger | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
No previous issue
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-09-2022 14:45:42 | 10-09-2022 14:45:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,435 | closed | Make ViltModel forward method arguments (inputs_embeds and image_embeds) consistent | ### Feature request
The inputs_embeds argument in Vilt expects the word embeddings (which are [cls_token_emb, token_embs, sep_token_emb]) of the input without the pos or token_type embeddings added yet, while the image_embeds argument expects the processed embeddings [cls_token_emb, patch_embs] + pos_embs + token_types, which seems a bit inconsistent?
### Motivation
I was a bit confused why the following code was not producing the same output, and then read the source code and realized the issue:
```
from transformers import ViltProcessor, ViltModel
from PIL import Image
import requests
# prepare image and text
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
inputs = processor(image, text, return_tensors="pt")
outputs_1 = model(**inputs)
# using embeds (the wrong way)
img_embs, img_mask, (patch_index, (h, w)) = model.embeddings.visual_embed(inputs['pixel_values'], inputs['pixel_mask'])
txt_embs = model.embeddings.text_embeddings(inputs['input_ids'])
outputs_2 = model(inputs_embeds=txt_embs, image_embeds=img_embs, pixel_mask=img_mask)
# seems to be the correct way
img_embs, img_mask, (patch_index, (h, w)) = model.embeddings.visual_embed(inputs['pixel_values'], inputs['pixel_mask'])
txt_embs = model.embeddings.text_embeddings.word_embeddings(inputs['input_ids']) # word_embeddings instead of the full text_embeddings
outputs_3 = model(inputs_embeds=txt_embs, image_embeds=img_embs, pixel_mask=img_mask)
```
outputs_1 and outputs_3 are (almost) identical (they aren't exact matches, but the total difference across all entries is around -0.009), while outputs_1 and outputs_2 are just different.
### Your contribution
Make both arguments either take the processed embeddings (after adding everything) or just the raw embeddings before adding anything, unless of course this is a deliberate decision made for reasons I don't know :o. If it is not intended, then I can try and submit a PR that fixes it. | 10-09-2022 14:34:35 | 10-09-2022 14:34:35 | Hi,
Thanks for reporting. I'm afraid we can't change it for backwards compatibility reasons, as people might have already used `image_embeds` with the way it is implemented now. Cc'ing @sgugger to confirm<|||||>Indeed. The documentation can be clarified, however, to make this less surprising to users.<|||||>Makes sense! I guess the docs could indeed be slightly clarified<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,434 | closed | Fixed a non-working hyperlink in the README.md file | The hyperlink to the community notebooks was outdated. | 10-09-2022 10:05:46 | 10-09-2022 10:05:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@MikailINTech will close this then. sorry buddy<|||||>> The link is just a missing double slash, could you please just fix that? Your fix includes some metadata that is not useful.
Sorry! Fixed
transformers | 19,433 | closed | Tranformers documentation translation to Italian #17459 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17459
Italian translation of transformers/docs/source/en/perf_training_tpu.mdx
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 18:11:34 | 10-08-2022 18:11:34 | |
transformers | 19,432 | closed | Removed XLMModel inheritance from FlaubertModel(torch+tf) | # What does this PR do?
related to #19303
Removed XLMModel inheritance from FlaubertModel for pytorch and tensorflow
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-08-2022 17:03:23 | 10-08-2022 17:03:23 | Hi @sgugger, I am failing some tests; can you please help me with this?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Hi. The esm model test is failing, but I am not able to figure out why. Can you please help?
<|||||>Arf, sorry I misled you. The `# Copied from` comments came from the code of XLM (they are actually copied from BERT) and you need to keep them. Sorry about that!<|||||>Thank you for your help!
transformers | 19,431 | closed | Make bert_japanese and cpm independent of their inherited modules | # What does this PR do?
Another step towards completion of #19303
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 16:24:26 | 10-08-2022 16:24:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not going to let some hash collision stop me from publishing something that passes everything else.
Of note, I actually removed a test from the Japanese test suite that appeared to check that the tokenizer was inheriting from the module this PR is decoupling the Japanese BERT from. The implication, I suspect, is that the original author especially cared that this relation be in place and enforced.<|||||>Well, changing the import source did the job; the quick skim and assumption I made from the error weren't quite on the mark.<|||||>Thanks again for your contribution!<|||||>Probably going to scour the warnings as mentioned last time next; having some trouble running tests locally, but CI probably has my back
transformers | 19,430 | open | Create TF port of BigBird | ### Model description
[BigBird](https://arxiv.org/abs/2007.14062) is an open-source transformer architecture for longer sequences, and is already implemented in the Transformers library in PyTorch and Flax, but not yet in TensorFlow. This issue tracks the implementation of a TensorFlow version of the model.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Location of current implementations](https://github.com/huggingface/transformers/tree/main/src/transformers/models/big_bird) | 10-08-2022 16:16:05 | 10-08-2022 16:16:05 | Currently starting to work on this :) |
transformers | 19,429 | closed | fixed grammatical omissions and fixed typos. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 14:10:55 | 10-08-2022 14:10:55 | This PR fixes a typo or improves the docs.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19429). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger could you please review this sir.<|||||>@codePerfectPlus sir could you please review if possible. |
transformers | 19,428 | closed | Small fix for `AutoTokenizer` using opt model. Use `GPT2TokenizerFast` | # What does this PR do?
This PR makes it possible to use GPT2TokenizerFast for OPT when using the `from_pretrained` method of AutoTokenizer.
I checked that both give the same thing after my change.
```
from transformers import AutoTokenizer
tokenizer_slow = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
> PreTrainedTokenizer(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)})
tokenizer_fast = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=True)
> PreTrainedTokenizerFast(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True)})
tokenizer_slow == tokenizer_fast
> False
text='Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.'
token_slow = tokenizer_slow(text)
token_fast = tokenizer_fast(text)
token_slow == token_fast
> True
```
However, the OPT documentation at https://huggingface.co/docs/transformers/v4.22.2/en/model_doc/opt#overview advises against using the fast tokenizer for OPT.
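If this is merged, a quick sanity check of the new default behavior could look like this (checkpoint name taken from the example above):

```python
from transformers import AutoTokenizer, GPT2TokenizerFast

# use_fast defaults to True, so AutoTokenizer should now resolve to the fast class.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
assert isinstance(tokenizer, GPT2TokenizerFast)
```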
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 12:16:17 | 10-08-2022 12:16:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,427 | closed | Adding the README_es.md and reference to it in the others files readme | # What does this PR do?
This PR adds a Spanish version of the README (README_es.md) and updates the other README files to reference it correctly.
I made this because the docs are available in Spanish, but there was no README in Spanish.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 11:20:41 | 10-08-2022 11:20:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I think it's done! :smiley: If there is something else, tell me<|||||>@osanseviero I have read and accepted the suggestions you made, only a few small mistakes that I commented on. Thanks a lot for answering so fast!<|||||>Hey there! I fixed the 2 suggestions you mentioned :) feel free to commit them now :D <|||||>@osanseviero @sgugger everything is clear and OK now. Have a nice day! :smile_cat: <|||||>Can you just run `make fix-copies` on your branch to fix the CLI issue? There are probably one or two models out of sync between the READMEs.<|||||>I hope everything is right now, sorry for the mistakes :smiling_face_with_tear: <|||||>No worries, looking good now. Thanks again!
transformers | 19,426 | closed | made tokenization_roformer independent of bert | # What does this PR do?
Relates to issue #19303
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 11:15:57 | 10-08-2022 11:15:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pinging @sgugger @ArthurZucker for re-distribution here <|||||>Will take care of it<|||||>Don't worry @ArthurZucker, I'll review :-) This is linked to #19303 and I should have been tagged, not Patrick :-)<|||||>@sgugger when I opened the PR, the PR template showed me that @patrickvonplaten is looking after roformer, which I modified, so I tagged him for the review.<|||||>No worries at all @naveennamani, but as @patrickvonplaten is very busy with plenty of other things and your PR should not make any actual changes, Patrick's input won't be necessary. I'll review it tomorrow :-)<|||||>Hi @sgugger, I've modified the comments as per your suggestion.<|||||>I missed it somehow, fixed it
transformers | 19,425 | closed | Error while converting BigBirdPegasus tensorflow checkpoints into pytorch model using "convert_bigbird_pegasus_tf_to_pytorch.py" | Hi,
I want to convert TensorFlow checkpoints generated from training a BigBirdPegasus model for the summarization task into a PyTorch model using the prepared script for it (i.e., "convert_bigbird_pegasus_tf_to_pytorch.py"), which is located at the following URL:
https://github.com/huggingface/transformers/tree/main/src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
During conversion, I faced the following error:
```
Traceback (most recent call last):
  File "convert_ckpt.py", line 212, in <module>
    convert_bigbird_pegasus_ckpt_to_pytorch(args.tf_ckpt_path, args.save_dir, config_update=config_update)
  File "convert_ckpt.py", line 202, in convert_bigbird_pegasus_ckpt_to_pytorch
    torch_model = convert_bigbird_pegasus(tf_weights, config_update)
  File "convert_ckpt.py", line 148, in convert_bigbird_pegasus
    raise ValueError(f"could not find new key {new_k} in state dict. (converted from {k})")
ValueError: could not find new key model.decoder.layernorm_embedding.bias.Adafactor in state dict. (converted from pegasus/decoder/LayerNorm/beta/Adafactor)
```
Can anybody help me with solving this error?
Thanks in advance
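For what it's worth, the failing key ends in `Adafactor`, which suggests the checkpoint still contains Adafactor optimizer slot variables rather than only model weights. A sketch of one possible workaround — filtering those variables out when reading the checkpoint (this is an assumption on my side, not a confirmed fix):

```python
import tensorflow as tf

def load_model_weights(tf_ckpt_path):
    # Skip optimizer slot variables (".../Adafactor") and bookkeeping
    # variables such as "global_step"; keep only the model weights.
    tf_weights = {}
    for name, shape in tf.train.list_variables(tf_ckpt_path):
        if "Adafactor" in name or "global_step" in name:
            continue
        tf_weights[name] = tf.train.load_variable(tf_ckpt_path, name)
    return tf_weights
```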
| 10-08-2022 10:53:10 | 10-08-2022 10:53:10 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,424 | closed | Fix typo in image-classification/README.md | Fixes a link typo in the following entries:
PyTorch version, Trainer
PyTorch version, no Trainer
# What does this PR do?
Fixes a typo
## Who can review?
@sgugger @NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-08-2022 07:11:31 | 10-08-2022 07:11:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,423 | closed | Error while running deepspeed in dreambooth | ### System Info
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend.
### Expected behavior
NotImplementedError: Could not run 'xformers::efficient_attention_forward_generic' with arguments from the 'CUDA' backend. | 10-08-2022 06:45:17 | 10-08-2022 06:45:17 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,422 | closed | this error occurs how to fix it | ```
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B8516D550>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B851838B0>.
  warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:02<?, ?it/s]
There was a problem when trying to move your cache:
  File "transformers\utils\hub.py", line 1077, in <module>
  File "transformers\utils\hub.py", line 1040, in move_cache
  File "transformers\utils\hub.py", line 997, in move_to_new_cache
  File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
``` | 10-08-2022 05:25:40 | 10-08-2022 05:25:40 | Hey @rajatj86, it's unfortunate that this issue occurs. Unless you have important data in your Hugging Face cache, I would advise removing it.
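For reference, removing the old cache can look like this (a sketch using the default cache location; adjust the path if you set `TRANSFORMERS_CACHE` or `HF_HOME`):

```python
import shutil
from pathlib import Path

# Default pre-v4.22 cache location on most systems; deleting it forces
# transformers to re-download files into the new cache layout.
old_cache = Path.home() / ".cache" / "huggingface" / "transformers"
shutil.rmtree(old_cache, ignore_errors=True)
```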
Otherwise, please update both your `transformers` and `huggingface_hub` versions and post the message here, as it should contain more information.
Thank you!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,421 | closed | Remove GPT-2 tokenizer dependency from Deberta Tokenizers | # What does this PR do?
Hi @sgugger
Related to #19303,
- the GPT2Tokenizer dependency has been removed from DebertaTokenizer
- the GPT2TokenizerFast dependency has been removed from DebertaTokenizerFast
I ran `pytest tests/models/deberta/test_tokenization_deberta.py`, which passed.
Thanks for reviewing!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-08-2022 04:38:20 | 10-08-2022 04:38:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger
Please review the above PR when you get a moment.<|||||>I raised a clean PR #19551 for this work; will close once that is merged. @sgugger please review
transformers | 19,420 | closed | Load config error, permission denied and EnvironmentError | ### System Info
_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
_pytorch_select 0.2 gpu_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
aiohttp 3.8.3 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
asynctest 0.13.0 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
blessed 1.19.1 pypi_0 pypi
boto3 1.16.7 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
botocore 1.19.9 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bottleneck 1.3.4 py37hce1f21e_0
brotli 1.0.9 he6710b0_2
brotlipy 0.7.0 py37h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
bzip2 1.0.8 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2022.07.19 h06a4308_0
certifi 2022.6.15 py37h06a4308_0
cffi 1.14.3 py37h261ae71_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
charset-normalizer 2.0.4 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
click 8.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cryptography 3.1.1 py37h1ba5d50_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudatoolkit 10.2.89 hfd86e86_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cudnn 7.6.5 cuda10.2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cycler 0.11.0 pyhd3eb1b0_0
datasets 2.5.2 pypi_0 pypi
dbus 1.13.18 hb2f20db_0
dill 0.3.5.1 pypi_0 pypi
docutils 0.14 py37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
elastic-transport 8.1.2 pypi_0 pypi
elasticsearch 7.14.1 pypi_0 pypi
et_xmlfile 1.1.0 py37h06a4308_0
expat 2.4.4 h295c915_0
ffmpeg 4.2.2 h20bf706_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
filelock 3.7.1 pypi_0 pypi
flask 2.1.3 pypi_0 pypi
fontconfig 2.13.1 h6c09931_0
fonttools 4.34.4 pypi_0 pypi
freetype 2.11.0 h70c0345_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
frozenlist 1.3.1 pypi_0 pypi
fsspec 2022.8.2 pypi_0 pypi
gevent 21.12.0 pypi_0 pypi
gevent-websocket 0.10.1 pypi_0 pypi
giflib 5.2.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
glib 2.69.1 h4ff587b_1
gmp 6.2.1 h2531618_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gnutls 3.6.15 he1e5248_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gpustat 1.0.0 pypi_0 pypi
greenlet 1.1.2 pypi_0 pypi
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
huggingface-hub 0.8.1 pypi_0 pypi
icu 58.2 he6710b0_3
idna 2.10 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
importlib-metadata 4.11.3 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
intel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
itsdangerous 2.1.2 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
jmespath 0.10.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
joblib 1.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jpeg 9e h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
kiwisolver 1.4.4 pypi_0 pypi
lame 3.100 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
lcms2 2.12 h3be6417_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libedit 3.1.20191231 h14c3975_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libidn2 2.3.2 h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libopus 1.3.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtasn1 4.16.0 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libtiff 4.2.0 h85742a9_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libunistring 0.9.10 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libuuid 1.0.3 h7f8727e_2
libuv 1.40.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libvpx 1.7.0 h439df22_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libwebp 1.2.2 h55f646e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libwebp-base 1.2.2 h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libxcb 1.15 h7f8727e_0
libxml2 2.9.10 hb55368b_3
lpips 0.1.4 pypi_0 pypi
lz4-c 1.9.3 h295c915_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.5.2 pypi_0 pypi
matplotlib-base 3.5.1 py37ha18d171_1
mkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py37he8ac12f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft 1.2.0 py37h23d657b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py37h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.13 pypi_0 pypi
munkres 1.1.4 py_0
ncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nettle 3.7.3 hbbd107a_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ninja 1.10.1 py37hfd86e86_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nltk 3.6.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numexpr 2.7.3 py37hb2eb853_0
numpy 1.19.2 py37h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base 1.19.2 py37hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
nvidia-ml-py 11.495.46 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openh264 2.1.1 h4ff587b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openpyxl 3.0.9 pyhd3eb1b0_0
openssl 1.1.1q h7f8727e_0
opentsne 0.6.2 pypi_0 pypi
packaging 21.3 pyhd3eb1b0_0
pandas 1.3.5 pypi_0 pypi
pcre 8.45 h295c915_0
pillow 9.2.0 pypi_0 pypi
pip 20.2.4 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
psutil 5.9.2 pypi_0 pypi
pyarrow 9.0.0 pypi_0 pypi
pycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyopenssl 19.1.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyparsing 3.0.9 py37h06a4308_0
pyqt 5.9.2 py37h05f1152_2
pysocks 1.7.1 py37_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python 3.7.10 hdb3f193_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
python-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pytorch-mutex 1.0 cpu pytorch-nightly
pytz 2022.1 py37h06a4308_0
pyyaml 6.0 pypi_0 pypi
qt 5.9.7 h5867ecd_1
readline 8.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
regex 2021.7.6 py37h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
requests 2.28.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
s3transfer 0.3.3 pyhd3eb1b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sacremoses 0.0.53 pypi_0 pypi
scikit-learn 1.0.2 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
seaborn 0.11.2 pyhd3eb1b0_0
sentence-transformers 2.0.0 pypi_0 pypi
sentencepiece 0.1.96 pypi_0 pypi
setuptools 50.3.0 py37h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sip 4.19.8 py37hf484d3e_0
six 1.15.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.33.0 h62c20be_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
threadpoolctl 3.1.0 pypi_0 pypi
tk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.9.4 pypi_0 pypi
torch 1.11.0+cu113 pypi_0 pypi
torchaudio 0.11.0+cu113 pypi_0 pypi
torchvision 0.12.0+cu113 pypi_0 pypi
tornado 6.1 py37h27cfd23_0
tqdm 4.64.0 py37h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
transformers 4.2.1 pypi_0 pypi
typing_extensions 3.10.0.0 pyh06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3 1.26.10 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
werkzeug 2.0.1 pypi_0 pypi
wget 3.2 pypi_0 pypi
wheel 0.35.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
x264 1!157.20191217 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xlrd 2.0.1 pypi_0 pypi
xlsxwriter 3.0.3 pyhd3eb1b0_0
xlwt 1.3.0 pypi_0 pypi
xxhash 3.0.0 pypi_0 pypi
xz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
yarl 1.8.1 pypi_0 pypi
zhconv 1.4.3 pypi_0 pypi
zipp 3.5.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zope-event 4.5.0 pypi_0 pypi
zope-interface 5.4.0 pypi_0 pypi
zstd 1.4.9 haebb681_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction


### Expected behavior
The config should load correctly, without a permission error. | 10-08-2022 03:47:03 | 10-08-2022 03:47:03 | Hello! It seems like you don't have read access to the cache generated by huggingface?
You're getting a permission-denied error when reading the file.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
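<|||||>A note for anyone hitting the same error: if the cache files were created by another user (for example under `sudo`), restoring ownership of the cache directory usually resolves the `PermissionError`. A minimal sketch, assuming the default Linux cache location; adjust the path if `HF_HOME` or `TRANSFORMERS_CACHE` points elsewhere:

```bash
# reclaim ownership of the default Hugging Face cache for the current user
sudo chown -R "$USER":"$USER" ~/.cache/huggingface
```

After that, re-running the loading code should succeed. |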
transformers | 19,419 | closed | Stack trace while migrating the cache when opening OpenAI Whisper | ### System Info
```
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ transformers-cli env
Traceback (most recent call last):
File "/home/home/PycharmProjects/whisper/venv/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
```
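
An aside on the first traceback: `transformers-cli` imports `datasets` for its PT-to-TF conversion command (see the `pt_to_tf.py` frame above), so installing it inside the venv should let the env report print. This is incidental to the cache-migration bug below:

```bash
pip install datasets  # lets transformers-cli import transformers.commands.pt_to_tf
```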
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ git show-ref
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/heads/main
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/remotes/origin/HEAD
9e653bd0ea0f1e9493cb4939733e9de249493cfb refs/remotes/origin/main
```
<details>
<summary>.cache/huggingface</summary>
```
(venv) home@daniel-tablet1:~/.cache/huggingface$ find
.
./hub
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8.lock
./hub/version.txt
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8
./hub/273c26d519eca3d37b6907fca55b4570903094837c1e88f41544c2d7a1ef9b36.2581b5124d154f09d9841e3f106147b17807bdc9b30338c2f6b065a7119328b8.json
./transformers
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912.json
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de.lock
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5.lock
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca.json
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5.lock
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d.lock
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1.json
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e.lock
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2.json
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a.json
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.json
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2.lock
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e.json
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d.lock
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.lock
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44.lock
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494.lock
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.json
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb.lock
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24.lock
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b.lock
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c.json
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24.json
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09.lock
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.lock
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/375a542f256f8537243b49f47691b6b370e74950f71552629ff41b4025cdc719.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d.json
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11.lock
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637.lock
./transformers/e6eeef886a597ad9496f7a38414dc332f49fd0e18bc279439f19f6ef80a6830f.150cd75d571e557b7d1dc1a3fd74c0ebe252b855739e47c8040a11a362b2f912.lock
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e.json
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5.json
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f.lock
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5
./transformers/6e443a2ed9a4346cca5f4fb9986a60fea956b0f74694596632e5d37302cd2d51.6e9c56f90d0ccc4bb88c2360463bcbd3a5d5688b9ba81e6bcea7316ac803e5ca.lock
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2.lock
./transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.json
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
./transformers/3b13d6000bf0faa8f68bbbfabc744100e2abc27c7c8612bf1269bd79fd94fa3d.3df0d73ec7fbb471c0502e9bf5b52515f84d3af812b70f08e7ce8200d268c366.h5
./transformers/4ac94ea87276ca5a0c5bca5048e2dc4ff34d8c0cc5d48e4205bf5390f7290fd1.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a
./transformers/980f2be6bd282c5079e99199d7554cfd13000433ed0fdc527e7def799e5738fe.4fdc7ce6768977d347b32986aff152e26fcebbda34ef89ac9b114971d0342e09.json
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29.lock
./transformers/e8c98220e9166b448d2e9dfdec05e35b3b68e2c079d80fadfb4dc71e96dee028.852c05acd4c087ec9774e7ed56aeea5010c13056cc8bc37594b75b172416592c.json
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2.json
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29.json
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/4e60bb8efad3d4b7dc9969bf204947c185166a0a3cf37ddb6f481a876a3777b5.9f8326d0b7697c7fd57366cdde57032f46bc10e37ae81cb7eb564d66d23ec96b.json
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab.json
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9.lock
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.json
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75.lock
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab.lock
./transformers/74a3f992bf31343d09735202aa941b8b974c3c50506826429779f938d27705f7.1788df22ba1a6817edb607a56efa931ee13ebad3b3500e58029a8f4e6d799a29
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5.lock
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8.lock
./transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a.lock
./transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637
./transformers/03dbd2b11eae924dfd97070ed60502df863584957419a604e1c039e0eab3f974.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/997406d739f356745bd01f90fc8a2ff252ce35e403d6015f2b80fc214fe9387d.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8
./transformers/e727ad0b5b727e965ac92d0d987189dd8baca246cc5d9cd2d2991f5bd3a286c5.5fd7d9eb368cd9cb55495ec20862b533efee02e1e074c3bc7bf451b25b4fe59e.lock
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock
./transformers/569800088d6f014777e6d5d8cb61b2b8bb3d18a508a1d8af041aae6bbc6f3dfe.67d01b18f2079bd75eac0b2f2e7235768c7f26bd728e7a855a1c5acae01a91a8
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634.lock
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151.json
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634.json
./transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de.json
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.json
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9
./transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/d276a164c3a022c7d3c6887b2e91411b7bf2254df88506ee15510b313956d5fe.9ce994d579bd8ff52a13a561a8e7972d89bd45f20ef49a117c430147ee053da9.json
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a
./transformers/533d2051a74ea66e9d039bb6c455ef98972c14ecae8a492ec8684cbb236685f9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05.lock
./transformers/35014754ae1fcb956d44903df02e4f69d0917cab0901ace5ac7f4a4a998346fe.a30bb5d685bb3c6e9376ab4480f1b252d9796d438d1c84a9b2deb0275c5b2151.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.lock
./transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f.json
./transformers/748a176e9d151dcad63a27974db8b8f665f286954cfbb77008ca42163419ff66.6a323429db2b09562cffdb9bc72d09d08bccbca1d832434b183b867864c30526.h5.json
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1.lock
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5.lock
./transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb.json
./transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5.json
./transformers/ab70e5f489e00bb2df55e4bae145e9b1c7dc794cfa0fd8228e1299d400613429.f3874c2af5400915dc843c97f502c5d30edc728e5ec3b60c4bd6958e87970f75.json
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d
./transformers/d0404704aff7a47b8d8a30573cb4f67045bf89101e3200146c2a1a55f182d380.a3dc3058cc957fef449bfe2a4db7cdca4c9b0f7c0b2a9c4bc6228ba024621a78.h5.json
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0.lock
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529.lock
./transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
./transformers/199ab6c0f28e763098fd3ea09fd68a0928bb297d0f76b9f3375e8a1d652748f9.930264180d256e6fe8e4ba6a728dd80e969493c23d4caa0a6f943614c52d34ab
./transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
./transformers/b4f8395edd321fd7cd8a87bca767b1135680a41d8931516dd1a447294633b9db.647b4548b6d9ea817e82e7a9231a320231a1c9ea24053cc9e758f3fe68216f05.json
./transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.lock
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.lock
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
./transformers/198d2773a3a47fe909fd8bf2ab9d40f0c1355d9a45a3ecac510ab2d44390577c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.lock
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82.json
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c.lock
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d.json
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.json
./transformers/540455855ce0a3c13893c5d090d142de9481365bd32dc5457c957e5d13444d23.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8.json
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443.json
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11
./transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0.json
./transformers/16b07bde9fc789a1d5bafeeb361edfe9e4df30077f3f8150f33130800dd9fab7.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e.lock
./transformers/e584858c24b9c062296d83fd0d04e8037a58ca86863388b251e20d15b57d3652.4048b5693f516fd4b429d384e716f4bb0d4831de2b6c9ea2c42a86765c5ee4a1
./transformers/6b6d15ffd3a1fa3015ffff8a9a4a78371fecd1ed1f61aed8a35baf09535240ae.b2f577eb2ce415668e4a3805e4effcc3d81dae1126890ffb69936e7481327494.json
./transformers/d44ec0488a5f13d92b3934cb68cc5849bd74ce63ede2eea2bf3c675e1e57297c.627f9558061e7bc67ed0f516b2f7efc1351772cc8553101f08748d44aada8b11.json
./transformers/8d04c767d9d4c14d929ce7ad8e067b80c74dbdb212ef4c3fb743db4ee109fae0.9d268a35da669ead745c44d369dc9948b408da5010c6bac414414a7e33d5748c
./transformers/066c0238a1dab50404e7d118e7ad1468d20a1fc18c3f2ad1036366759bfc343d.c26bcfbd792a38251a4fb555d9110e87dcc2ecaee13ac0a027d1584df8a09634
./transformers/4d8eeedc3498bc73a4b72411ebb3219209b305663632d77a6f16e60790b18038.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d.json
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad.lock
./transformers/36135304685d914515720daa48fc1adae57803e32ab82d5bde85ef78479e9765.b548f7e307531070391a881374674824b374f829e5d8f68857012de63fe2681a.lock
./transformers/55c96bd962ce1d360fde4947619318f1b4eb551430de678044699cbfeb99de6a.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730.json
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4.lock
./transformers/c0abea01d3725dc3c06370cced02822e09a715c98c62346f5ec9b730361df18d.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.json
./transformers/1ad22be12336f9eec2b9fa372045631e8ffe9e2ca771f6802f88b5b15651f859.c46a0ea4d8cfc938ed324724108be3e06c2fb377cfdbd57ac70f5f589bb03a44.json
./transformers/715836a337ea91c1df044351c6041fcac9e268c8836a08c3aae639e8b38b4760.71e50b08dbe7e5375398e165096cacc3d2086119d6a449364490da6908de655e.lock
./transformers/775efbdc2152093295bc5824dee96da82a5f3c1f218dfface1b8cef3094bdf8f.c719a806caef7d36ec0185f14b3b5fa727d919f924abe35622b4b7147bfbb8c7.h5.json
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
./transformers/e1881a496d5b707363a530f017ae73140e9ce35e240c7fef5b6835a26bd20492.f19e829a37b1b5e2490c86b2233b4c0af113615667600e558758f314027f668e.json
./transformers/f8eeca194a413b200e1a5bd0e44d9b97e841dab11786978da40771d35dc6dd51.61622627847a3dbefbd551fce83592689111ec347ecce4b9a7ce14d10840be24
./transformers/702389a9cec22f2d79bf3fe49280d2eb5525b574d7a08fa786e30afd16b73de2.f45e1d59b04808261852aa4e0864ba21e35e23fbead10958b80bf4330c93aad2
./transformers/e35579e8a88906e94c27c62a44b4ed91aad2f30aace4ddbb72537133beee8046.0f4e7e01b1ce2b178aebfb2722a31f84570d00b96726ed9db0caed2c0856089d.lock
./transformers/f548ad4723a1111fd380d466e7291a47148498641c693e4959c3ff05bdcef0e3.13a045cad07359e6844c4f487af8e6323ad2308cac6357692d2359f1a9711443.lock
./transformers/83d419fb34e90155a8d95f7799f7a7316a327dc28c7ee6bee15b5a62d3c5ca6b.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8.lock
./transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab.json
./transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
./transformers/8785a0072d807ebc8a3b6bf5648744bfc3cc83e0e845c40b670d10c0d7827164.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.json
./transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82.lock
./transformers/83261b0c74c462e53d6367de0646b1fca07d0f15f1be045156b9cf8c71279cc9.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.json
./transformers/4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.json
./transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.json
./transformers/19c09c9654551e163f858f3c99c226a8d0026acc4935528df3b09179204efe4c.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
./transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.lock
./transformers/9c38ef325ee9369da1b4b968f92e65ff23befb359d8c51cab821a5a2fd77467e.95aa56f5baa208e6615988f702caba3cff650a3e0fc81149995ccbc168795db4
./transformers/0ddddd3ca9e107b17a6901c92543692272af1c3238a8d7549fa937ba0057bbcf.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
./transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
./transformers/81ffd70af12a736e520c197108c70778f231f23ad374bc228dd623abf2ee373b.0afca8ac6cb45f40028b0583daf120fc891de6e9146b0683fbc8556e33714dad.json
```
</details>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Use a Linux user account with previous transformers cache from 4.19.2
2. `git clone [email protected]:openai/whisper.git && cd whisper`
3. `python3 -m venv venv`
4. `. venv/bin/activate`
5. `python3 -m pip install -e .`
6. `venv/bin/whisper`
```
home@daniel-tablet1:~/PycharmProjects$ git clone [email protected]:openai/whisper.git
Cloning into 'whisper'...
Enter passphrase for key '/home/home/.ssh/id_ed25519':
remote: Enumerating objects: 192, done.
remote: Counting objects: 100% (82/82), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 192 (delta 73), reused 68 (delta 67), pack-reused 110
Receiving objects: 100% (192/192), 3.10 MiB | 13.97 MiB/s, done.
Resolving deltas: 100% (101/101), done.
home@daniel-tablet1:~/PycharmProjects$ cd whisper/
home@daniel-tablet1:~/PycharmProjects/whisper$ python3 -m venv venv
home@daniel-tablet1:~/PycharmProjects/whisper$ . venv/bin/activate
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ python3 -m pip install -e .
Obtaining file:///home/home/PycharmProjects/whisper
Preparing metadata (setup.py) ... done
Collecting ffmpeg-python==0.2.0
Downloading ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting more-itertools
Downloading more_itertools-8.14.0-py3-none-any.whl (52 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.2/52.2 kB 10.3 MB/s eta 0:00:00
Collecting numpy
Downloading numpy-1.23.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 49.0 MB/s eta 0:00:00
Collecting torch
Downloading torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl (776.3 MB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 776.3/776.3 MB 4.5 MB/s eta 0:00:00
Collecting tqdm
Downloading tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.5/78.5 kB 26.0 MB/s eta 0:00:00
Collecting transformers>=4.19.0
Downloading transformers-4.22.2-py3-none-any.whl (4.9 MB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 45.9 MB/s eta 0:00:00
Collecting future
Downloading future-0.18.2.tar.gz (829 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 829.2/829.2 kB 62.7 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting requests
Using cached requests-2.28.1-py3-none-any.whl (62 kB)
Collecting regex!=2019.12.17
Downloading regex-2022.9.13-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (770 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 770.5/770.5 kB 48.3 MB/s eta 0:00:00
Collecting pyyaml>=5.1
Using cached PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)
Collecting filelock
Downloading filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting packaging>=20.0
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting huggingface-hub<1.0,>=0.9.0
Downloading huggingface_hub-0.10.0-py3-none-any.whl (163 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.5/163.5 kB 63.2 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1
Using cached tokenizers-0.12.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)
Collecting typing-extensions
Downloading typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.9.24-py3-none-any.whl (161 kB)
Collecting idna<4,>=2.5
Downloading idna-3.4-py3-none-any.whl (61 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 23.6 MB/s eta 0:00:00
Collecting charset-normalizer<3,>=2
Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.12-py2.py3-none-any.whl (140 kB)
        ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 140.4/140.4 kB 54.7 MB/s eta 0:00:00
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Installing collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, pyparsing, numpy, more-itertools, idna, future, filelock, charset-normalizer, certifi, torch, requests, packaging, ffmpeg-python, huggingface-hub, transformers, whisper
Running setup.py install for future ... done
Running setup.py develop for whisper
Successfully installed certifi-2022.9.24 charset-normalizer-2.1.1 ffmpeg-python-0.2.0 filelock-3.8.0 future-0.18.2 huggingface-hub-0.10.0 idna-3.4 more-itertools-8.14.0 numpy-1.23.3 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.9.13 requests-2.28.1 tokenizers-0.12.1 torch-1.12.1 tqdm-4.64.1 transformers-4.22.2 typing-extensions-4.4.0 urllib3-1.26.12 whisper-1.0
(venv) home@daniel-tablet1:~/PycharmProjects/whisper$ venv/bin/whisper
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 71 files to the new cache system
0%| | 0/71 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 1128, in <module>
move_cache()
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 1071, in move_cache
hub_metadata[url] = get_hub_metadata(url, token=token)
File "/home/home/PycharmProjects/whisper/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata
huggingface_hub.file_download._raise_for_status(r)
AttributeError: module 'huggingface_hub.file_download' has no attribute '_raise_for_status'
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
usage: whisper [-h] [--model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large}] [--model_dir MODEL_DIR] [--device DEVICE] [--output_dir OUTPUT_DIR]
[--verbose VERBOSE] [--task {transcribe,translate}]
[--language {af,am,ar,as,az,ba,be,bg,bn,bo,br,bs,ca,cs,cy,da,de,el,en,es,et,eu,fa,fi,fo,fr,gl,gu,ha,haw,hi,hr,ht,hu,hy,id,is,it,iw,ja,jw,ka,kk,km,kn,ko,la,lb,ln,lo,lt,lv,mg,mi,mk,ml,mn,mr,ms,mt,my,ne,nl,nn,no,oc,pa,pl,ps,pt,ro,ru,sa,sd,si,sk,sl,sn,so,sq,sr,su,sv,sw,ta,te,tg,th,tk,tl,tr,tt,uk,ur,uz,vi,yi,yo,zh,Afrikaans,Albanian,Amharic,Arabic,Armenian,Assamese,Azerbaijani,Bashkir,Basque,Belarusian,Bengali,Bosnian,Breton,Bulgarian,Burmese,Castilian,Catalan,Chinese,Croatian,Czech,Danish,Dutch,English,Estonian,Faroese,Finnish,Flemish,French,Galician,Georgian,German,Greek,Gujarati,Haitian,Haitian Creole,Hausa,Hawaiian,Hebrew,Hindi,Hungarian,Icelandic,Indonesian,Italian,Japanese,Javanese,Kannada,Kazakh,Khmer,Korean,Lao,Latin,Latvian,Letzeburgesch,Lingala,Lithuanian,Luxembourgish,Macedonian,Malagasy,Malay,Malayalam,Maltese,Maori,Marathi,Moldavian,Moldovan,Mongolian,Myanmar,Nepali,Norwegian,Nynorsk,Occitan,Panjabi,Pashto,Persian,Polish,Portuguese,Punjabi,Pushto,Romanian,Russian,Sanskrit,Serbian,Shona,Sindhi,Sinhala,Sinhalese,Slovak,Slovenian,Somali,Spanish,Sundanese,Swahili,Swedish,Tagalog,Tajik,Tamil,Tatar,Telugu,Thai,Tibetan,Turkish,Turkmen,Ukrainian,Urdu,Uzbek,Valencian,Vietnamese,Welsh,Yiddish,Yoruba}]
[--temperature TEMPERATURE] [--best_of BEST_OF] [--beam_size BEAM_SIZE] [--patience PATIENCE] [--length_penalty LENGTH_PENALTY]
[--suppress_tokens SUPPRESS_TOKENS] [--initial_prompt INITIAL_PROMPT] [--condition_on_previous_text CONDITION_ON_PREVIOUS_TEXT] [--fp16 FP16]
[--temperature_increment_on_fallback TEMPERATURE_INCREMENT_ON_FALLBACK] [--compression_ratio_threshold COMPRESSION_RATIO_THRESHOLD]
[--logprob_threshold LOGPROB_THRESHOLD] [--no_speech_threshold NO_SPEECH_THRESHOLD]
audio [audio ...]
whisper: error: the following arguments are required: audio
```
### Expected behavior
It should not print a stack trace or tell me to "copy paste this whole message and we will do our best to help"; the cache migration should just succeed. | 10-08-2022 00:36:07 | 10-08-2022 00:36:07 | Hey @danielzgtg 👋
I believe it is the same issue as in https://github.com/huggingface/transformers/issues/19384, with the same resolution as listed there -- it should be fixed today/tomorrow, with the new release of `transformers`
Meanwhile, you may be able to get rid of that error if you install `huggingface_hub==0.9.0` :)
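
Once compatible versions are in place (for example after `pip install -U transformers huggingface_hub`), the interrupted migration can also be re-run by hand. `move_cache` is the helper named in the warning itself; a minimal sketch of the one-time operation:

```python
from transformers.utils import move_cache

move_cache()  # resumes/redoes the one-time move to the new cache layout
```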
(cc @LysandreJik )<|||||>Indeed, upgrading `huggingface_hub` to the latest version (which is 0.10.0!) should solve the error.<|||||>> may be able to get rid of that error if you install huggingface_hub==0.9.0
> (which is 0.10.0!) should solve the error.
So which one is it? 0.9.0 or 0.10.0? I had 0.10.0:
```
Collecting huggingface-hub<1.0,>=0.9.0
Downloading huggingface_hub-0.10.0-py3-none-any.whl (163 kB)
ββββββββββββββββββββββββββββββββββββββββ 163.5/163.5 kB 63.2 MB/s eta 0:00:00
```
Anyway, I don't yet know how to reproduce this message. It only appeared for me once. I do hope that any necessary cache migration can still succeed, though.<|||||>> I had 0.10.0
That's why I suggested `0.9.0` :) It seems to be a problem due to a temporary version mismatch between `transformers` and `huggingface_hub` (see https://github.com/huggingface/transformers/pull/19244)
In any case, it's very hard to reproduce the issue -- it seems to happen when migrating from the old cache version (i.e. if you had used `transformers<=4.21` in your system) into the new cache version, which only happens once per system, AND have incompatible versions of `transformers`+`huggingface_hub`. New pip installs shouldn't see this error, even if they have an old cache<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,418 | closed | T5ForConditionalGeneration checkpoint size mismatch | ### System Info
## Error Description
I trained a `T5ForConditionalGeneration` model and saved the checkpoint using PyTorch Lightning's Trainer to a `.ckpt` file. But when I try to load back the state_dict using `model.from_state_dict()`, I get this error:
```python
RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
Unexpected key(s) in state_dict: "decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight".
size mismatch for shared.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for encoder.embed_tokens.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([32103, 512]) from checkpoint, the shape in current model is torch.Size([32128, 512]).
```
I have not changed the model definition in any way, and the keys match, so I'm really not sure how the sizes could magically mismatch when loading.
## Loading the model
This is how I'm loading the model:
```python
tokenizer = T5Tokenizer.from_pretrained(args["model_checkpoint"], bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
model = T5ForConditionalGeneration.from_pretrained(args["model_checkpoint"], ignore_mismatched_sizes=True)
model.load_state_dict({k[6:]: v for k, v in ckpt["state_dict"].items()})  # k[6:] drops a 6-char prefix (presumably Lightning's "model.")
```
I even tried to pass `ignore_mismatched_sizes=True` to the `from_pretrained` call, and that didn't help either.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As described above.
### Expected behavior
No error | 10-07-2022 19:47:52 | 10-07-2022 19:47:52 | Hey @msamogh π
To explain why there is a mismatch, we would need to know exactly how the model was trained :) However, the most important part -- you may be able to load the checkpoint with these two strategies:
1. Load the model architecture from the same configuration as your trained model
2. After initializing the model architecture (and before loading the checkpoint), [resize the embeddings](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings), as sketched below
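
For example, here is a minimal sketch of strategy 2. The `t5-small` base and the checkpoint path are assumptions: the 512-dim, 32103-token shapes in the report are consistent with `t5-small` plus the three added special tokens.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small", bos_token="[bos]", eos_token="[eos]", sep_token="[sep]")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# 32100 base entries + 3 added special tokens = 32103, matching the checkpoint
model.resize_token_embeddings(len(tokenizer))

ckpt = torch.load("checkpoint.ckpt", map_location="cpu")  # hypothetical path
# strip the "model." prefix added by the Lightning wrapper; strict=False tolerates
# the stray relative_attention_bias key from the error message
model.load_state_dict({k[len("model."):]: v for k, v in ckpt["state_dict"].items()}, strict=False)
```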
Both strategies should change the shape of your architecture to match your checkpoint<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @gante, do `interpolate_pos_embedding`-type functions do what you mentioned in point 2?
Here is a snippet:
```
import re

import torch


def interpolate_pos_embed_multimae(model, checkpoint_model):
    pattern = r"input_adapters\.(.*)\.pos_emb"
    matched_keys = [k for k in checkpoint_model if bool(re.match(pattern, k))]

    for key in matched_keys:
        domain = re.match(pattern, key).group(1)  # group(0) is entire matched regex
        if getattr(model.input_adapters, domain, None) is not None:
            pos_embed_checkpoint = checkpoint_model[key]
            _, _, orig_H, orig_W = pos_embed_checkpoint.shape
            _, _, new_H, new_W = getattr(model.input_adapters, domain).pos_emb.shape
            if (orig_H != new_H) or (orig_W != new_W):
                print(f"Key {key}: Position interpolate from {orig_H}x{orig_W} to {new_H}x{new_W}")
                # bicubic resize of the 2D positional-embedding grid to the new resolution
                pos_embed_checkpoint = torch.nn.functional.interpolate(
                    pos_embed_checkpoint, size=(new_H, new_W), mode='bicubic', align_corners=False)
                checkpoint_model[key] = pos_embed_checkpoint
```
<|||||>Hey @forkbabu π I do not know the answer to your question. However, from your code snippet, it seems like you are working with a vision model -- my recommendation would be to open a new issue and tag one of our vision experts |
transformers | 19,417 | closed | Make `MobileBert` tokenizers independent from `Bert` | # What does this PR do?
Copied the code from `Bert` tokenizers into `MobileBert` tokenizers to make the latter self-contained.
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
 | 10-07-2022 19:22:59 | 10-07-2022 19:22:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19417). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @501Good, as you can see, your rebase has messed up the diff on Git a little. Could you open a fresh PR from your branch?<|||||>Hi @sgugger, sorry for that! Opened a new PR here #19531! |
transformers | 19,416 | closed | Wrap TAPAS integration test forward passes with torch.no_grad() | # What does this PR do?
This PR wraps forward passes in TAPAS integration tests with `torch.no_grad()`, as proposed in issue #14642. This avoids the computation of unnecessary gradients during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please check it?
Thanks :) | 10-07-2022 19:01:43 | 10-07-2022 19:01:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,415 | closed | fix misspelled word in ensure_valid_input docstring | This PR fixes misspelled docstring for `ensure_valid_input` function in `convert_graph_to_onnx.py`.
Fixes https://github.com/huggingface/transformers/issues/19362
| 10-07-2022 18:50:57 | 10-07-2022 18:50:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,414 | closed | Wrap ImageGPT integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in ImageGPT integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-07-2022 18:46:00 | 10-07-2022 18:46:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,413 | closed | Wrap FNet integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in FNet integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-07-2022 18:40:03 | 10-07-2022 18:40:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,412 | closed | Wrap FlauBERT integration test forward passes with torch.no_grad() | # What does this PR do?
This PR wraps forward passes in FlauBERT integration tests with `torch.no_grad()`, as proposed in issue #14642. This avoids the computation of unnecessary gradients during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-07-2022 18:33:30 | 10-07-2022 18:33:30 | |
transformers | 19,411 | closed | Remove dependency of Roberta in Blenderbot | Hi @sgugger,
This PR looks to address https://github.com/huggingface/transformers/issues/19303: the RobertaTokenizer dependency has been removed from `BlenderbotTokenizer` and the RobertaTokenizerFast dependency has been removed from `BlenderbotTokenizerFast`.
When running `pytest tests/models/blenderbot/test_tokenization_blenderbot.py`, I got the following error:
```
========================================================================= test session starts =========================================================================
platform darwin -- Python 3.10.4, pytest-7.1.3, pluggy-1.0.0
rootdir: /Users/rchan/Library/CloudStorage/OneDrive-TheAlanTuringInstitute/huggingface/transformers, configfile: setup.cfg
collected 4 items
tests/models/blenderbot/test_tokenization_blenderbot.py F... [100%]
============================================================================== FAILURES ===============================================================================
___________________________________________________ Blenderbot3BTokenizerTests.test_3B_tokenization_same_as_parlai ____________________________________________________
self = <tests.models.blenderbot.test_tokenization_blenderbot.Blenderbot3BTokenizerTests testMethod=test_3B_tokenization_same_as_parlai>
def test_3B_tokenization_same_as_parlai(self):
assert self.tokenizer_3b.add_prefix_space
> assert self.tokenizer_3b([" Sam", "Sam"]).input_ids == [[5502, 2], [5502, 2]]
E assert [[1, 5502, 2], [1, 5502, 2]] == [[5502, 2], [5502, 2]]
E At index 0 diff: [1, 5502, 2] != [5502, 2]
E Use -v to get more diff
tests/models/blenderbot/test_tokenization_blenderbot.py:48: AssertionError
------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
========================================================================== warnings summary ===========================================================================
src/transformers/testing_utils.py:28
/Users/rchan/Library/CloudStorage/OneDrive-TheAlanTuringInstitute/huggingface/transformers/src/transformers/testing_utils.py:28: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.util import strtobool
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================= short test summary info =======================================================================
FAILED tests/models/blenderbot/test_tokenization_blenderbot.py::Blenderbot3BTokenizerTests::test_3B_tokenization_same_as_parlai - assert [[1, 5502, 2], [1, 5502, 2]...
=============================================================== 1 failed, 3 passed, 1 warning in 2.15s ================================================================
```
Any idea what I have done wrong here?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 10-07-2022 16:13:44 | 10-07-2022 16:13:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, I have now removed the global ` # Copied from` statements, and replaced them with Copied from statements to the individual methods that I am copying into the Blenderbot classes. This has resolved my earlier problem and now `pytest tests/models/blenderbot/test_tokenization_blenderbot.py` runs without error.
My PR currently fails `python utils/check_copies.py`, as there are two methods named `mask_token` (a property and its setter) in `RobertaTokenizerFast`. This means that I currently have two `# Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.mask_token` statements, and so there's a matching problem.
How should I deal with the case where there are two methods with the same name?<|||||>I've just seen that someone had a similar issue with copying over a method with a setter: https://github.com/huggingface/transformers/pull/19408#pullrequestreview-1134731877.
I have now followed the advice on this PR and removed my Copied from statement on the setter for `mask_token`. It seems like all tests pass now!
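For future readers, a sketch of the resolved layout (method bodies elided; the copy target mirrors the Roberta fast tokenizer):
```python
from transformers import PreTrainedTokenizerFast


class BlenderbotTokenizerFast(PreTrainedTokenizerFast):
    # Copied from transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.mask_token
    @property
    def mask_token(self) -> str:
        ...

    # No "Copied from" statement here: the checker matches methods by name,
    # so annotating both the property and its setter creates a duplicate match.
    @mask_token.setter
    def mask_token(self, value):
        ...
```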
transformers | 19,410 | closed | Removed Bert and XLM Dependency from Herbert | # What does this PR do?
Related to #19303 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Thank you so much @sgugger for your guidance! I think it should be good to go now!
| 10-07-2022 15:28:25 | 10-07-2022 15:28:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,409 | closed | Clip device map | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-07-2022 14:47:04 | 10-07-2022 14:47:04 | Looks good to me, pinging @sgugger
What happened with your branch? :smile: <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Looks good to me, pinging @sgugger
>
> What happened with your branch? :smile:
Yeah sorry about this :sweat_smile: |
transformers | 19,408 | closed | Remove Dependency between Bart and LED (slow/fast) | # What does this PR do?
Removes the dependency between LED and Bart
## Who can review?
@sgugger | 10-07-2022 13:35:46 | 10-07-2022 13:35:46 | @sgugger hopefully this does it?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Just tried locally your PR and found the reason why we can't have a global copied from on the tokenizer (sorry I didn't spot it earlier): the `_pad` method is overwritten.
So instead we need to apply the copied from on each method (except `_pad`) and not copy the whole class.
Sorry about that!<|||||>> Just tried locally your PR and found the reason why we can't have a global copied from on the tokenizer (sorry I didn't spot it earlier): the `_pad` method is overwritten. So instead we need to applied the copied from on each method (except `_pad`) and not copy the whole class.
>
> Sorry about that!
so basically, write those copy comments again on both the slow and fast tokenizers?<|||||>Yeah, sorry<|||||>> Yeah, sorry
Oh dont be, on it now :D, thanks for the quick reply!<|||||>@sgugger anything left?<|||||>btw, the `mask_token` method in the fast tokenizer, there is a method and a setter , but i have the same comment on both (Copied from `BartTokenizerFast.mask_token)` is that the right way or am i supposed to keep the one above the method only and ignore the setter's one?<|||||>thanks a lot for the quick replies, what else is left?<|||||>Should be good now, just waiting for all tests to pass :-)<|||||>oh finally, the green light |
transformers | 19,407 | closed | Removed XLM and Bert dependency from Herbert tokenizer | # What does this PR do?
Fixes #19303. Removed the dependency of the HerBERT (slow/fast) tokenizers on BERT and XLM.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Pinging @sgugger for this issue!
Black seems to be working fine on my system, but shows errors in the automated tests for files that I haven't modified. For example: wav2vec2 and blenderbot_small have only style changes but are failing run_tests_tf and run_tests_torch respectively. Let me know if this is ok!
| 10-07-2022 11:02:50 | 10-07-2022 11:02:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19407). All of your documentation changes will be reflected on that endpoint.<|||||>Hi there! Thanks a lot for working on this, but your PR shows a diff of 577 files when it should just be the two tokenizer files you are touching.
I think it might be because you have a different version of black in your environment than the one we are using. Could you try doing `pip install -e ."[quality]"`?<|||||>Superseded by #19410
transformers | 19,406 | closed | Decouples `XLMProphet` model from `Prophet` | @sgugger ,
Per the issue #19303, the `Prophet` model dependency is removed from `XLMProphet` and it now directly inherits from `PreTrainedModel`.
- [As discussed in a different PR review](https://github.com/huggingface/transformers/pull/19346#discussion_r988069210), I've moved some of the docstring examples into the corresponding docs/source location. Let me know if there are some tests you want me to run.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Thanks for reviewing the PR! | 10-07-2022 11:00:42 | 10-07-2022 11:00:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger - the requested changes have all been done. Thanks for your review! |
transformers | 19,405 | closed | Remove unneded words from audio-related feature extractors | null | 10-07-2022 10:51:19 | 10-07-2022 10:51:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Lgtm thanks a lot π |
transformers | 19,404 | closed | remove RobertaConfig inheritance from MarkupLMConfig | # What does this PR do?
Related to #19303
Removes the `RobertaConfig` and `BertConfig` dependency from `MarkupLMConfig`. Since `RobertaConfig` itself inherits from `BertConfig`, I have changed `MarkupLMConfig` to inherit directly from `PretrainedConfig`.
Added the following arguments in `__init__` (a sketch of the resulting signature is shown after the list):
- `bos_token_id = 0`
- `eos_token_id = 2`
- `position_embedding_type="absolute"`
- `use_cache=True`
- `classifier_dropout=None`
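A hedged sketch of how these defaults slot into the signature once the class inherits from `PretrainedConfig` directly (the model-specific arguments are elided and only the values listed above are shown):
```python
from transformers import PretrainedConfig


class MarkupLMConfig(PretrainedConfig):
    model_type = "markuplm"

    def __init__(
        self,
        bos_token_id=0,
        eos_token_id=2,
        position_embedding_type="absolute",
        use_cache=True,
        classifier_dropout=None,
        **kwargs,  # remaining model-specific arguments elided
    ):
        # PretrainedConfig consumes the token ids itself
        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
        self.position_embedding_type = position_embedding_type
        self.use_cache = use_cache
        self.classifier_dropout = classifier_dropout
```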
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-07-2022 10:44:41 | 10-07-2022 10:44:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Great work, thanks a lot! There is a typo in the docstring that is responsible for the failing test. I believe my suggestion should fix it :-)
Thank you for your suggestion.<|||||>Thanks for your work on this! |
transformers | 19,403 | closed | Remove dependency of Bert from Squeezebert tokenizer | Hi @sgugger,
Fixes #19303, the BertTokenizer dependency has been removed from `SqueezeBertTokenizer` and the BertTokenizerFast dependency has been removed from `SqueezeBertTokenizerFast`.
I ran `pytest tests/models/squeezebert/test_tokenization_squeezebert.py`, which passed.
Thanks for reviewing this! :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 10-07-2022 09:48:34 | 10-07-2022 09:48:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I see that my code currently fails the style and code consistency checks. When I run `make style`, it seems to change a lot of files and ones that I did not touch. Is this normal?
I'm also trying to run `make repo-consistency` but keep getting
```
python utils/check_copies.py --fix_and_overwrite
make: python: No such file or directory
make: *** [fix-copies] Error 1
```
which is strange as I am running this from the root directory...<|||||>Hi @sgugger, many thanks for the quick replies! I have made the changes you mentioned above, and regarding `make repo-consistency` and `make style`, it seems like `pip install -e ."[quality]"` did the trick!
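For anyone hitting the same thing, the working sequence boils down to the following (run from the repository root; `make` shells out to `python`, so the interpreter and the dev extras have to be on the path):
```bash
# Editable install with the pinned code-quality tools (black, isort, flake8)
pip install -e ".[quality]"
# Apply the repo's automatic formatting
make style
# Consistency checks (e.g. utils/check_copies.py)
make repo-consistency
```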
transformers | 19,402 | closed | Add `OPTForQuestionAnswering` | # What does this PR do?
This PR adds `OPTForQuestionAnswering` to Transformers. The implementation is based on `BloomForQuestionAnswering` (#19310). This introduces a new autoregressive model for question answering tasks in the library.
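A hedged usage sketch of the new head (the checkpoint name is only an example, its QA head is randomly initialized until fine-tuned, and the interface mirrors the usual start/end-logits extractive QA models):
```python
import torch
from transformers import AutoTokenizer, OPTForQuestionAnswering

# "facebook/opt-350m" is an example checkpoint; the QA head on top is
# randomly initialized, so fine-tune before expecting meaningful spans.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m")

question = "What does the model predict?"
context = "The question answering head predicts start and end positions of the answer span."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```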
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @LysandreJik @ArthurZucker @younesbelkada
| 10-07-2022 09:41:20 | 10-07-2022 09:41:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failures are unrelated to this PR, so merging :-)<|||||>Hi @clementapa ,
While adding `OPTForQuestionAnswering`, did you test whether you were able to train (say, fine-tune on SQuAD) a QA model with any of the OPT variants?
I am getting a fast tokenizer error here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py#L345
Essentially, the `run_qa.py` script requires the model to have a fast tokenizer, which is not available for the OPT models.
Thanks,<|||||>Hey! The Fast tokenizer is available for OPT. Make sure you are using main, as a recent issue with automatic conversion for OPT tokenizer was fixed. See #20823 |
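A quick way to verify locally (a sketch; assumes a recent enough install from `main`):
```python
from transformers import AutoTokenizer

# `is_fast` is True when the Rust-backed tokenizer was loaded successfully.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=True)
print(tokenizer.is_fast)
```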
transformers | 19,401 | closed | Adds DonutSwin to models exportable with ONNX | # What does this PR do?
Fixes #16308
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lewtun & @ChainYo for ONNX and @NielsRogge for Donut and Document Question Answering.
| 10-07-2022 09:13:02 | 10-07-2022 09:13:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19401). All of your documentation changes will be reflected on that endpoint.<|||||>> Hi @WaterKnight1998,
>
> Thanks for your PR. It looks clean.
>
> Nice catch for the `model-type` variable that could be tricky to find: https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa/blob/main/config.json#L138
>
> First DocumentQuestionAnswering model added. It's pretty cool!
I don't see the comment. Do I need to solve anything?
However, for testing locally I was using the following code, but I can't export the model :(
I exported just the encoder like this:
```python
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
model.encoder.save_pretrained("./swin")
```
Then, trying to convert to ONNX, I get:
```
python -m transformers.onnx --model=./swin onnx/
Local PyTorch model found.
Framework not requested. Using torch to export to ONNX.
/home/david/.local/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Using framework PyTorch: 1.12.1+cu116
Traceback (most recent call last):
File "/home/david/micromamba/envs/huggingface/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/david/micromamba/envs/huggingface/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 115, in <module>
main()
File "/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 97, in main
onnx_inputs, onnx_outputs = export(
File "/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/convert.py", line 337, in export
return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
File "/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/convert.py", line 144, in export_pytorch
model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)
File "/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/onnx/config.py", line 348, in generate_dummy_inputs
raise ValueError(
ValueError: Unable to generate dummy inputs for the model. Please provide a tokenizer or a preprocessor.
```
Do I need to add more code?
<|||||>> Do I need to add more code?
Yes, it would help if you overrode the `generate_dummy_inputs()` function. As with the `LayoutLMv3` model, you need to define how a dummy input batch is built from the processor. The ONNX conversion runs one batch (even of random dummy data) through the model to trace the data flow through the graph layers.
Check this here: https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/layoutlmv3/configuration_layoutlmv3.py#L227-L294
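In outline, such an override can look like the following (a hedged sketch rather than the exact code that landed in this PR; the vision case keys everything off `pixel_values`, and the base method's real signature has more parameters than shown):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DonutSwinOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes for a batch of images.
        return OrderedDict([("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})])

    @property
    def atol_for_validation(self) -> float:
        return 1e-4

    def generate_dummy_inputs(self, preprocessor, framework=None):
        # One random image through the processor gives the batch used to
        # trace the exported graph.
        images = self._generate_dummy_images(batch_size=1)
        return dict(preprocessor(images=images, return_tensors=framework))
```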
This can help too; it's the base `generate_dummy_inputs()` function: https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/onnx/config.py#L264-L378<|||||>@ChainYo @lewtun Relative imports are fixed, and I also added the function to generate dummy inputs. But when I convert the model into ONNX like this:
```python
import transformers
from pathlib import Path
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
model.encoder.save_pretrained("./swin")
from transformers.onnx import export
from transformers import AutoConfig
from transformers.models.donut import *
onnx_config = AutoConfig.from_pretrained("./swin")
onnx_config = DonutSwinOnnxConfig(onnx_config)
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
onnx_inputs, onnx_outputs = export(processor, model.encoder, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx"))
```
I get the following warnings:
```
/home/david/.local/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if width % self.patch_size[1] != 0:
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if height % self.patch_size[0] != 0:
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if min(input_resolution) <= self.window_size:
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:147: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
was_padded = pad_values[3] > 0 or pad_values[5] > 0
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if was_padded:
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
should_pad = (height % 2 == 1) or (width % 2 == 1)
/home/david/micromamba/envs/huggingface/lib/python3.10/site-packages/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if should_pad:
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (this warning is repeated ~40 times)
```
Is it ok?<|||||>> Is it ok?
Hi @WaterKnight1998,
Do you get onnx files locally when you export the model?
Did you try to load the file with https://netron.app ?
Could you try to load an InferenceSession with Optimum or Onnx and use the model to see if it works? <|||||>> Hi @WaterKnight1998, Do you get onnx files locally when you export the model?
Yes, I get the files
> Did you try to load the file with https://netron.app ?
Yes, model loaded
> Could you try to load an InferenceSession with Optimum or Onnx and use the model to see if it works?
I am testing:
```python
from transformers.onnx import validate_model_outputs
validate_model_outputs(
    onnx_config, processor, model.encoder, Path("model.onnx"), onnx_outputs, onnx_config.atol_for_validation
)
```
But the Python process is killed here on my computer: https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/convert.py#L392
Maybe the model is too big for CPU memory?<|||||>Hi, I tested in Databricks and got this error:
```
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.05213117599487305
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<command-489655835555725> in <module>
32
33 from transformers.onnx import validate_model_outputs
---> 34 validate_model_outputs(
35 onnx_config, processor, model.encoder, Path("model.onnx"), onnx_outputs, onnx_config.atol_for_validation
36 )
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)
440 if not np.allclose(ref_value, ort_value, atol=atol):
441 logger.info(f"\t\t-[x] values not close enough (atol: {atol})")
--> 442 raise ValueError(
443 "Outputs values doesn't match between reference model and ONNX exported model: "
444 f"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))}"
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.05213117599487305
```
Maybe I need to update anything @ChainYo & @lewtun ? Or is it OK?
<|||||>> Hi, I tested in Databricks and got this error:
>
> ```
>
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305
> ---------------------------------------------------------------------------
> ValueError Traceback (most recent call last)
> <command-489655835555725> in <module>
> 32
> 33 from transformers.onnx import validate_model_outputs
> ---> 34 validate_model_outputs(
> 35 onnx_config, processor, model.encoder, Path("model.onnx"), onnx_outputs, onnx_config.atol_for_validation
> 36 )
>
> /local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)
> 440 if not np.allclose(ref_value, ort_value, atol=atol):
> 441 logger.info(f"\t\t-[x] values not close enough (atol: {atol})")
> --> 442 raise ValueError(
> 443 "Outputs values doesn't match between reference model and ONNX exported model: "
> 444 f"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))}"
>
> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305
> ```
>
> Maybe I need to update anything @ChainYo & @lewtun? Or is it OK?
I didn't think about this, but do you have enough RAM locally? If the model is 20 GB, you need double that (~40 GB) to convert it, because the script needs to load both models simultaneously.
The error I see on Databricks is about the absolute tolerance (`atol`), which is `1e-5` by default. There are two possibilities:
- You selected the wrong `--feature` in your conversion command (maybe try something other than the default one)
- You need to pass the argument `--atol` to your conversion command with the proper value even if 0.052 seems too much IMO (never go with more than `1e-3`).<|||||>> > Hi, I tested in Databricks and got this error:
> > ```
> > ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got a max absolute difference of: 0.05213117599487305
> > ```
> > Maybe I need to update anything @ChainYo & @lewtun? Or is it OK?
>
> I didn't think about this but do you have enough RAM locally? Imagine the model is 20Gb you need the double to convert one model (~40Gb) because scripts need to load both models simultaneously.
>
Good point, I only have 32 GB of RAM locally, so that's probably it.
> The error I see on Databricks is about `absolute tolerance, which is `1e-5` by default. There are two possibilities:
>
> * You selected the wrong `--feature` in your conversion command (maybe try something other than the default one)
I tested with this:
```python
import transformers
from pathlib import Path
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
model.encoder.save_pretrained("./swin")
from transformers.onnx import export
from transformers import AutoConfig
from transformers.models.donut import *
onnx_config = AutoConfig.from_pretrained("./swin")
onnx_config = DonutSwinOnnxConfig(onnx_config)
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
onnx_inputs, onnx_outputs = export(processor, model.encoder, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx"))
from transformers.onnx import validate_model_outputs
validate_model_outputs(
    onnx_config, processor, model.encoder, Path("model.onnx"), onnx_outputs, onnx_config.atol_for_validation
)
```
> * You need to pass the argument `--atol` to your conversion command with the proper value even if 0.052 seems too much IMO (never go with more than `1e-3`).
In my config it is set to:
```python
@property
def atol_for_validation(self) -> float:
return 1e-4
```
Should I test with `1e-3`? But I am getting 0.05.
I don't get why the difference is so big; maybe it's because of the warnings I mentioned in the other comment?
```
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if width % self.patch_size[1] != 0:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if height % self.patch_size[0] != 0:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if min(input_resolution) <= self.window_size:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:147: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
was_padded = pad_values[3] > 0 or pad_values[5] > 0
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if was_padded:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
should_pad = (height % 2 == 1) or (width % 2 == 1)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-b455b6d8-06c3-4a9e-9af6-0fd82d764878/lib/python3.8/site-packages/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
```<|||||>Hi again @ChainYo & @lewtun, I tested `validate_model_outputs` in different setups:
- Nvidia T4: 0.01 difference
- Nvidia V100: 0.06 difference
- CPU: 16 Cores & 56GB RAM: 0.04 difference
I don't know where the problem is. What can I look at?<|||||>> I don't know where the problem is. What can I look at?
I think it just means that it's a bit random. I don't think it's linked to the hardware; run the atol check many times (say 10k) per hardware to verify.
IMO it seems evident that atol=1e-2 could do the trick, but it looks terrible to accept atol > 1e-3.
To return to the warnings you had earlier while converting the model: did you check whether all layers are implemented in ONNX?<|||||>Hey @WaterKnight1998 I recently implemented a fix in #19475 for a bug that was causing all the Swin models to have incorrect ONNX graphs. Could you first try rebasing on `main` and checking the tolerance again?<|||||>> Hey @WaterKnight1998 I recently implemented a fix in #19475 for a bug that was causing all the Swin models to have incorrect ONNX graphs. Could you first try rebasing on `main` and checking the tolerance again?
Hi @lewtun, as suggested in the PR, I rebased and tested again, and I am seeing the same issue:
```
ValueError Traceback (most recent call last)
<command-489655835555726> in <module>
1 from transformers.onnx import validate_model_outputs
----> 2 validate_model_outputs(
3 onnx_config, processor, model.encoder, Path("model.onnx"), onnx_outputs, onnx_config.atol_for_validation
4 )
/local_disk0/.ephemeral_nfs/envs/pythonEnv-f0e538e7-c99a-4698-9d4a-c04070b5c780/lib/python3.8/site-packages/transformers/onnx/convert.py in validate_model_outputs(config, preprocessor, reference_model, onnx_model, onnx_named_outputs, atol, tokenizer)
453 bad_indices = np.logical_not(np.isclose(ref_value, ort_value, atol=atol))
454 logger.info(f"\t\t-[x] values not close enough (atol: {atol})")
--> 455 raise ValueError(
456 "Outputs values doesn't match between reference model and ONNX exported model: "
457 f"Got max absolute difference of: {np.amax(np.abs(ref_value - ort_value))} for "
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.06693840026855469 for [ -2.359991 4.654682 -14.478863 ... 5.7127304 1.8854475
0.7024307] vs [ -2.3598232 4.65485 -14.47826 ... 5.712929 1.8853188
0.7022476]
```<|||||>Hi again, @lewtun & @ChainYo, I have checked this implementation against the original Swin Transformer; the only difference is that the normalization layer is not present. Maybe that's the reason?<|||||>> Hi again, @lewtun & @ChainYo, I have checked this implementation against the original Swin Transformer; the only difference is that the normalization layer is not present. Maybe that's the reason?
Thanks for that insight @WaterKnight1998, although I'd be surprised if that's the source of the issue. I'll take a closer look at the dummy data generation ASAP<|||||>Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx
```
My understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed<|||||>> Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:
>
> ```
> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx
> ```
>
> My understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed
Hi @lewtun, I tested this but it is not working owing to the tolerance issue. In addition, maybe some users just want to export the encoder part. Adding @NielsRogge as he implemented this in #18488
<|||||>> Hi @WaterKnight1998 now that #19254 has been merged, can't you export the Donut checkpoints directly using this feature:
>
> ```
> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx
> ```
>
> My understanding is that Donut falls under the general class of vision encoder-decoder models, so a separate ONNX export might not be needed
@lewtun While converting, I'm facing an output value error (for the same command mentioned above):
```
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[✓] (3, 1200, 1024) matches (3, 1200, 1024)
-[x] values not close enough (atol: 1e-05)
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 180, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 113, in main
args.atol if args.atol else encoder_onnx_config.atol_for_validation,
File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 456, in validate_model_outputs
"Outputs values doesn't match between reference model and ONNX exported model: "
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0018157958984375 for [ 1.5980988 0.5988426 -14.8206215 ... -5.1114273 4.5024166
2.8833218] vs [ 1.5982218 0.59886694 -14.820812 ... -5.1115417 4.502474
2.883381 ]
```
But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement ```model.generate()``` instead of ```model.run``` for the decoder part.
@lewtun @WaterKnight1998 Any suggestions here ( I can share the Colab if required).
Thanks and Regards.<|||||>> But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.
@BakingBrains Are you using the code from my PR to do the encoder conversion?<|||||>@lewtun and @WaterKnight1998, any updates on the decoder? I am able to convert the decoder model, though I'm not sure if that's the right method (but the output shape from the Donut decoder and the ONNX decoder is the same).<|||||>Hi, @lewtun @ChainYo @BakingBrains any news on this? I need this to get the model into production :(<|||||>@sgugger could you help us? We are looking forward to this feature<|||||>Hey @WaterKnight1998 I'm taking a look at this, but it's turning out to be tricky to figure out where the discrepancy arises between the ONNX graph and the PyTorch model.<|||||>> Hey @WaterKnight1998 I'm taking a look at this, but it's turning out to be tricky to figure out where the discrepancy arises between the ONNX graph and the PyTorch model.
Thank you very much for looking at it!<|||||>FYI if you need a temporary workaround and are willing to tolerate some error on the decoder, you can export one of the Donut checkpoints on the `main` branch with:
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 3e-3
```
This will produce two ONNX files (`encoder_model.onnx` and `decoder_model.onnx`) that you can then run inference with.
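For illustration, a minimal greedy-decoding sketch with `onnxruntime` might look like this (untested; the exact input/output names depend on the export config, so check `sess.get_inputs()`, and `pixel_values`, `decoder_start_token_id` and `eos_token_id` are assumed to come from the processor/config):
```python
import numpy as np
import onnxruntime as ort

encoder = ort.InferenceSession("scratch/onnx/encoder_model.onnx")
decoder = ort.InferenceSession("scratch/onnx/decoder_model.onnx")

# `pixel_values` is assumed to come from DonutProcessor: float32, shape (1, 3, H, W).
encoder_out = encoder.run(None, {"pixel_values": pixel_values})[0]

# Greedy loop: feed back the tokens generated so far at every step.
input_ids = np.array([[decoder_start_token_id]], dtype=np.int64)  # assumed known
for _ in range(128):
    logits = decoder.run(None, {"input_ids": input_ids, "encoder_hidden_states": encoder_out})[0]
    next_token = logits[:, -1].argmax(-1, keepdims=True).astype(np.int64)
    input_ids = np.concatenate([input_ids, next_token], axis=-1)
    if next_token.item() == eos_token_id:  # assumed known
        break
```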
<|||||>> But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.
Good question @BakingBrains ! As of now, you'll have to roll your own generation loop with `onnxruntime`. An alternative would be to implement an `ORTModelForVisionSeq2Seq` in `optimum`, similar to how @mht-sharma is doing this for Whisper: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc<|||||>> > But separately I am able to convert the encoder and decoder model to ONNX as well as verified the output shape, that went well. But I don't know how to implement `model.generate()` instead of `model.run` for the decoder part.
>
> Good question @BakingBrains ! As of now, you'll have to roll your own generation loop with `onnxruntime`. An alternative would be to implement an `ORTModelForVisionSeq2Seq` in `optimum`, similar to how @mht-sharma is doing this for Whisper: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc
Thank you @lewtun. Got it.<|||||>> FYI if you need a temporary workaround and are willing to tolerate some error on the decoder, you can export one of the donut checkpoints on the `main` branch with:
>
> ```
> python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 3e-3
> ```
>
> This will produce two ONNX files (`encoder_model.onnx` and `decoder_model.onnx`) that you can then run inference with.
Ok, thank you very much. I hope you find a solution and we can merge this branch.<|||||>I've created an issue to track the issue with specifically exporting Donut checkpoints: https://github.com/huggingface/transformers/issues/19983
@WaterKnight1998 can you please share some code snippets on how you currently use the DonutSwin models for document QA and image classification? If I'm not mistaken, inference with these models is only supported via the `VisionEncoderDecoder` model, so once the above issue is resolved you should be able to use the export without needing the new tasks included in this PR<|||||>> I've created an issue to track the issue with specifically exporting Donut checkpoints: #19983
>
> @WaterKnight1998 can you please share some code snippets on how you currently use the DonutSwin models for document QA and image classification? If I'm not mistaken, inference with these models is only supported via the `VisionEncoderDecoder` model, so once the above issue is resolved you should be able to use the export without needing the new tasks included in this PR
Yes, you are right, maybe we can remove those tasks. However, I think it would be good to allow users to export the encoder independently. Maybe someone wants to reuse it for a different model or architecture.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun reopen |
transformers | 19,400 | closed | Removes `ProphetNet` config dependency from `XLM-ProphetNet` config | @sgugger ,
Per issue #19303, the `ProphetNet` config dependency has been removed from `XLMProphetNetConfig`; it now directly inherits from `PretrainedConfig`.
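For readers unfamiliar with the pattern, here is a schematic of what such a change looks like (not the actual diff; class bodies and argument names are illustrative):
```python
# Before (schematic): the XLM config piggybacked on ProphetNet's config.
class XLMProphetNetConfig(ProphetNetConfig):
    model_type = "xlm-prophetnet"


# After (schematic): it subclasses `PretrainedConfig` directly and duplicates
# the attributes it needs, following the "Do Repeat Yourself" policy.
class XLMProphetNetConfig(PretrainedConfig):
    model_type = "xlm-prophetnet"

    def __init__(self, vocab_size=30522, hidden_size=1024, **kwargs):  # illustrative args
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
```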
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-07-2022 08:41:53 | 10-07-2022 08:41:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,399 | closed | `device_map="auto"` fails for GPT2 on CPU | ### System Info
Python 3.9, Mac, `transformers==4.21.3`
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On a box with no GPUs:
```Python
import transformers

transformers.AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
```
### Expected behavior
I'd expect a model. I get an exception. | 10-07-2022 06:30:24 | 10-07-2022 06:30:24 | Workaround: Don't use `device_map` on CPU 🤷🏻 <|||||>cc @sgugger <|||||>Yes, `device_map="auto"` is not supported in CPU-only environments.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should be supported now on the main branch of Accelerate!<|||||>Yay!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,398 | closed | Removed Bert and XLM dependency from herbert | # What does this PR do?
Fixes issue #19303. Removed the dependency of the HerBERT tokenizer on the BERT and XLM tokenizers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-07-2022 05:09:53 | 10-07-2022 05:09:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,397 | closed | Hyperparameter Sweep for Selection of Best Pre-trained model | ### Feature request
Perhaps I have not been able to figure this out, but the way the current hyperparameter_search function is set up, it only allows you to pass in hyperparameters that are in the TrainingArguments. However, one key hyperparameter that we are not currently able to pass in is the model type itself. If I want to pass in a list of models that I would like to try out in the hyperparameter space, it is not possible with the current setup. For example, I want to try passing in something like this to my hyperparameter space:
`model_type = trial.suggest_categorical("model_type", ["bert-base-uncased", "roberta-base", "xlnet"])`
### Motivation
Being able to pass in model names as part of the hp_space could be very useful, especially when one is trying to determine which models might be best for their use case: compile a list of candidate models after reading the literature and pass that list to a hyperparameter sweep.
It could be used in this manner:
```
model_type = trial.suggest_categorical("model_type", ["bert-base-uncased", "roberta-base", "xlnet"])
epochs = trial.suggest_categorical("epochs", EPOCHS)
batch_size = trial.suggest_categorical("batch_size", BATCH_SIZE)
learning_rate = trial.suggest_categorical("learning_rate", LEARNING_RATES)
scheduler = trial.suggest_categorical("scheduler", SCHEDULERS)
model_name = trial.suggest_categorical("model_name", MODEL_NAMES)
hp_space = {
"model_name": model_name,
"batch_size": batch_size,
"learning_rate": learning_rate,
"scheduler": scheduler,
"epochs": epochs,
}
## Passing it to trainer
trainer = Trainer(
training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.hyperparameter_search(hp_space=hp_space)
```
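As a possible workaround today, here is a minimal, untested sketch (`train_dataset` and `val_dataset` are assumed to exist and to be tokenized compatibly with every candidate checkpoint): with the optuna backend, `Trainer.hyperparameter_search` calls `model_init` with the current trial, so the checkpoint itself can be sampled:
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

MODEL_NAMES = ["bert-base-uncased", "roberta-base", "xlnet-base-cased"]

def model_init(trial):
    # `trial` is None on the very first call made at Trainer construction time.
    name = MODEL_NAMES[0] if trial is None else trial.suggest_categorical("model_name", MODEL_NAMES)
    return AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 4),
    }

trainer = Trainer(
    args=TrainingArguments(output_dir="hp_search"),
    model_init=model_init,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)
best_run = trainer.hyperparameter_search(hp_space=hp_space, backend="optuna", n_trials=10)
```
The big caveat is tokenization: checkpoints with different tokenizers need per-model preprocessing, which this sketch does not handle.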
### Your contribution
I would love to help work on this issue, if a solution does not already exist. | 10-07-2022 02:50:08 | 10-07-2022 02:50:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,396 | closed | Strange behavior of translation (text generation) pipelines | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
Models:
- NLLB
- M2M100
Example - [[QUESTION] model translates only a part of the text](https://huggingface.co/facebook/nllb-200-distilled-600M/discussions/6)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang="ces_Latn", tgt_lang='eng_Latn',device=0)
# Text with 3 sentences: 1) Zuzka bydlΓ v panelΓ‘ku na 9 podlaΕΎΓ. 2) AniΔka bydlΓ o 3 podlaΕΎΓ vΓ½Ε‘e. 3) Na kterΓ©m podlaΕΎΓ bydlΓ AniΔka?
text="Zuzka bydlΓ v panelΓ‘ku na 9 podlaΕΎΓ. AniΔka bydlΓ o 3 podlaΕΎΓ vΓ½Ε‘e. Na kterΓ©m podlaΕΎΓ bydlΓ AniΔka?"
translator(text, max_length=512, num_beams=5,)
```
Outputs only one sentence (**2 sentences lost**):
> [{'translation_text': 'Zuzka lives in a nine-story penthouse, AniΔka lives three floors up.'}]
If we add the `min_length` parameter to the translator, as in the [how-to-generate article](https://huggingface.co/blog/how-to-generate):
`translator(text, max_length=512, num_beams=5, min_length=512)`
(for many languages (ja, zh, etc.) we don't know the translated length in tokens, but we don't want to lose text, so we set `min_length` larger)
It outputs **translated text with repeats**:
> {'translation_text': "Zuzka lives in a boarding house on the ninth floor, AniΔka lives three floors upstairs, which floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka live on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does AniΔka lives on, what floor does she lives on, what floor does AniΔka lives on, what floor does she lives on, what floor does she lives on, what floor, what floor does she lives on, what floor she lives on, what floor, and what floor she lives on the floor, and what floor, and what is she lives on the floor, and what is the floor, and what is the floor of the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what is the floor, and what does she's on the floor, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, what is, and what is, what is, what is, what is, and what is, what is, and what is, what is, and what is, what is, what is, and what is, what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, what is, and what is, what is, and what is, and what is, what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is, and what is"}
If we try many other parameter combinations:
`translator(text, max_length=512, min_length=512, num_beams=5, no_repeat_ngram_size=3, do_sample=True, temperature=1.5, top_p=0.9, early_stopping=True, remove_invalid_values=True)`
The translation will contain **generated text that was not in the original sentence:**
> [{'translation_text': "Zuzka's living in a penthouse on the ninth floor, AniΔka's in a three story apartment, which floor does AniΔka reside on, and what floor is the building on which the building is housed, and how are you supposed to know where she's staying, so what's the floor where the apartment is on the 9th floor... and what is the first floor where AniΔka is staying... and how is the second floor of the house where the house is, so... what floor does she live on, where's AniΔka, the third floor, and where is AniΔka staying in the apartment on the 3rd floor, where you can't find her room, where she can'd say she'd like to go on her own, and you'd wanna know what to do with her room in the next room, so you can I'd tell me that she can be sure that you's not going to be happy with the room to do it, right now, that is, it's all right, you know, right or at least I's right, and I don't think that she't, and that't know that they's what I'll have something that, and we'll want you know that you can be honestly, you'll know that'll be honest that you, right, I mean that I'm sure, you can tell you't you will be right, that that it'll say it't be all right or whatever you know about that you will, you don're not that, but it'd you've got to you know it'm gonna be true, you say that you know right, if they't that's going to me that, I't say, and it' and that, that I will be true or you won'll always, and is, and she'll let me, you will not that'm right, yes or what you' will be that that right, but, and will be, you are gonna be safe to you'l right, or that that'lll be true that we't ever, and yes, but I'l be, right right, they'm going to say, she will be honest or not gonna say that we are, and, that're all right right that he is, you gonna be, but you'"}]
What parameters should be used to get a correct translation of the correct length for many languages with unknown translation lengths? Why does free-form text generation start instead of translation? Is this the behavior of the transformers pipelines or of the translation models?
### Expected behavior
English translation with 3 sentences:
- Zuzka lives in a block of flats on 9 floors.
- Anna lives 3 floors above.
- Which floor does Anicka live on? | 10-06-2022 23:08:03 | 10-06-2022 23:08:03 | Hey @Fikavec 👋
Text generation can be very tricky, as you've just explained. The quality of the generated text (i.e. the translation) depends on two things: the model and the generation method.
Regarding the model, my suggestion would be to use a larger model OR a model that contains a single language pair (as opposed to multilingual). You can use the language tags on the Hugging Face Hub 🤗 to help you navigate the sea of models.
Regarding the generation method, you've already mentioned the blog post I usually redirect to in this sort of issue :) If you force `min_length`, the model tends to hallucinate after it runs out of the original content, so I highly advise not to use it. However, if you don't do it, you may get a too short output (your first example) -- in that case, you may try playing with the [`length_penalty`](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.length_penalty) parameter (which only has impact with `num_beams`>1).
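For illustration, reusing the pipeline from your reproduction (the value is arbitrary, not a recommendation):
```python
# length_penalty > 1.0 favors longer sequences under beam search.
translator(text, max_length=512, num_beams=5, length_penalty=1.5)
```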
If these two sets of tips do not yield successful results, I still have good news for you -- we are working to implement a new generation strategy which may help in your case (https://github.com/huggingface/transformers/issues/19182) :)<|||||>Thanks @gante for the explanation and for your work on this great project! I can't figure out whether this issue is a feature of the Hugging Face generation implementation or of the original fairseq translation models. Translation is a very specific text generation task where precise output length is critical -- if an output length or other generation parameters are needed for a correct translation, they could be predicted by a special model on top of the tokenizer before generation. [#19182](https://github.com/huggingface/transformers/issues/19182) is interesting, but after spending a lot of time searching for parameters manually, I think that creating a single formula for 40,000 translation directions would be a miracle. Maybe the fairseq team could train a model to predict the best generation parameters for 200+ languages on their parallel training data, just as the language identification model was trained. In the future of generation development, models for selecting the best generation parameters could become a standard step after tokenization, or a parameter of the generate function, e.g. generate(input_text, params_predictor=predict_best_params_model), with such predictor models developed and trained separately for different tasks (translation, QA, [prompt engineering](https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/), etc.) by the authors of generative models and the community, with dedicated test sets and metrics. What do you think about this?<|||||>> if an output length or other generation parameters are needed for a correct translation
It is not -- generation ends when the model predicts a special token ([`eos_token_id`](https://huggingface.co/facebook/nllb-200-distilled-600M/blob/main/config.json#L20)) OR when the generation length reaches `max_length`. This is why you should add a large `max_length`, so the translation is not constrained by it :)
As for your other question, as you wrote, setting the parameters depends on the model itself and your goals -- there is no silver bullet that fits everyone. However, we have a library that might be of interest to you: [evaluate](https://huggingface.co/docs/evaluate/index)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,395 | closed | XLMRobertaTokenizerFast Error | ### System Info
I've created a new vocab using SentencePiece BPE, and I trained an xlm-roberta-base from scratch using [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py). When I try loading the tokenizer using AutoTokenizer or XLMRobertaTokenizerFast, it takes a long time and doesn't load, while loading it using XLMRobertaTokenizer works well.
I noticed this problem when I tried to fine-tune the model on NER using the official script [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), which works only with a fast tokenizer.
I was wondering how to convert it to a fast tokenizer.
### Who can help?
@SaulLu @sgu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I built the vocab using this command
spm_train --input="merged_vocab_data.txt" --model_prefix=sentencepiece.bpe --vocab_size=250002 --character_coverage=0.9995 --model_type=bpe --pad_id=0 --eos_id=1 --unk_id=2 --bos_id=-1
I used this vocab to initialize the model for training with the official run_mlm.py script.
### Expected behavior
How to convert the sentencepiece BPE slow vocab to fast | 10-06-2022 22:06:56 | 10-06-2022 22:06:56 | @SaulLu<|||||>Maybe of interest to @ArthurZucker as well!<|||||>Hi @elmadany
The conversion of a vocabulary coming from sentencepiece to a fast version of the tokenizer is indeed an operation that can take time, but it only has to be carried out once: once loaded, you can save the converted version in the fast format so you don't have to redo this operation the next time.
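For illustration, a minimal sketch of that one-time conversion (the paths are placeholders):
```python
from transformers import XLMRobertaTokenizerFast

# Slow, one-time conversion from the sentencepiece files...
tokenizer = XLMRobertaTokenizerFast.from_pretrained("path/to/slow_tokenizer", from_slow=True)
# ...then save the fast files so that later loads are instant.
tokenizer.save_pretrained("path/to/fast_tokenizer")
```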
On the other hand, I am a little more concerned when you say that it "doesn't load", do you have an error message? <|||||>Thanks @SaulLu
Yes, it took 4 hours to convert.
So I will close this issue. |
transformers | 19,394 | closed | ~7% drop in performance is noticed for huggingface GPT2 model | ### System Info
platform: ROCm AMD device
python version: 3.7.13
There is a ~7% drop in performance noticed for the Hugging Face GPT2 model after the IFU (https://github.com/ROCmSoftwarePlatform/transformers/pull/15) on the https://github.com/ROCmSoftwarePlatform/transformers repository.
@patil-suraj, @patrickvonplaten, could you please help me find the change in transformers that is responsible for the drop in performance?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Command used to run the model:
python3 -m torch.distributed.launch --nproc_per_node=8 transformers/examples/pytorch/language-modeling/run_clm.py --output_dir output --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --label_smoothing 0.1 --logging_steps 1 --logging_dir log --fp16 --dataloader_num_workers 1 --skip_memory_metrics --per_device_train_batch_size=8 --overwrite_output_dir --max_steps 150
### Expected behavior
I was expecting to see similar or better performance of the model after IFU on Aug 9, 2022.
I also tried with the recent commits after Aug 9, 2022. Those seem to worsen the performance much more. | 10-06-2022 21:39:48 | 10-06-2022 21:39:48 | Hello @rraminen, could you mention the two versions of `transformers` between which you see the difference in performance?
By performance, do you mean performance in metrics or performance in processing power/speed of iteration?
Thank you.<|||||>Hi @LysandreJik, thank you for your response.
The performance metric I am looking at is **stable_train_samples_per_second**.
The transformers version before performance drop is 4.19.0.dev0.
The performance drop is noticed with the 4.22.0.dev0 and 4.23.0.dev0 transformers versions.<|||||>@rraminen, stable_train_samples_per_second is only in the ROCm fork of HF transformers. It equates to train_samples_per_second with a warmup period.<|||||>So that I get the full context: is this happening only on ROCm hardware and using the fork from `ROCmSoftwarePlatform`, or is it happening across the library?
I'm trying to understand if it's linked to this repository or to the fork. Thanks!<|||||>We observed this on ROCm hardware. @rraminen, can you please test on A100 to confirm whether the drop is limited to MI250?
The perf drop is not happening across the library, just GPT2.<|||||>The perf drop is not observed on A100.<|||||>@rraminen, I don't think upstream HF can help much here; this is on AMD to root-cause. Please close this ticket.
Let's get started with figuring out which commit caused the regression on ROCm, and tracking internally. <|||||>I agree @amathews-amd, I don't think we're in a very large capacity to help here. We're happy to follow along however, so please let us know if there's anything we can do to help out.<|||||>Thank you @LysandreJik, closing this issue. <|||||>Just seconding what @LysandreJik said: if we can help in any way to improve support or performance of our software on AMD chips, we'd like to help
Just ping us |
transformers | 19,393 | closed | Change link of repojacking vulnerable link |
Hello from Hacktoberfest :)
# What does this PR do?
The link to https://github.com/vasudevgupta7/bigbird is vulnerable to repojacking (it redirects to the original project, which changed its name); you should change the link to the project's current name. If you don't change the link, an attacker can register the old repository name and attack users who trust your links.
Fixes # (issue)
## Before submitting
- [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-06-2022 20:53:27 | 10-06-2022 20:53:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,392 | closed | Stop relying on huggingface_hub's private methods | Updates the `move_cache` method to stop relying on `huggingface_hub`'s private methods. | 10-06-2022 18:52:50 | 10-06-2022 18:52:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,391 | closed | T5tokenizer.pre_trained("t5-small") is not callable whereas AutoTokenizer worked fine | null | 10-06-2022 18:31:34 | 10-06-2022 18:31:34 | Hi @mellow-d 👋 Having a popular project like `transformers` means we get many support and feature requests -- if we want to maximize how much we help the community, the community has to help us stay productive.
To that end, please share a *short* script where the issue is clearly reproducible on *any* computer. Thank you 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,390 | closed | add ONNX support for swin transformer | # What does this PR do?
Addresses #16308
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
It was already addressed in PR #18171 (which was mistakenly closed by me; sorry for the repeat PR).
@lewtun @ChainYo | 10-06-2022 18:13:29 | 10-06-2022 18:13:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks
Great start on Hugging Face, I hope to keep contributing more. |
transformers | 19,389 | closed | Fix gather for metrics in summarization example | # What does this PR do?
Fixes the failing test on the summarization no_trainer script for now; eventually this API will make this doable, but it's not there yet :)
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 10-06-2022 17:51:55 | 10-06-2022 17:51:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,388 | closed | Removed dependency on BART tokenizer (slow/fast) in LED | # What does this PR do?
Removes the dependency between the LED tokenizer and BART's (slow version).
Fixes # (issue)
Follows Hugging Face's philosophy of Do Repeat Yourself.
## Who can review?
@sgugger
| 10-06-2022 17:32:14 | 10-06-2022 17:32:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for the feedback! I'll fix everything and open a brand new one. |
transformers | 19,387 | closed | Documentation of Adafactor is at odds with Google implementations | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, two RTX8000
- Using distributed or parallel set-up in script?: Yes, DDP via HuggingFace accelerate
### Who can help?
documentation of Adafactor: @sgugger @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The documentation of Adafactor seems to be at odds with the Google implementation in T5X / PaLM. I've found these hyperparameters to be critical while optimizing HuggingFace transformers for metric learning tasks. Specifically the documentation ([link](https://huggingface.co/docs/transformers/main_classes/optimizer_schedules#transformers.Adafactor)) says `Use scale_parameter=False` and `Additional optimizer operations like gradient clipping should not be used alongside Adafactor`.
However, in T5X the default hyperparameter is set to `True` and is not modified in the config files (https://github.com/google-research/t5x/blob/83046e22750635f76c7e600f01c0a002915b52b8/t5x/adafactor.py#L199).
Similarly, PaLM used `scale_parameter` with a constant learning rate,
```
Optimizer -- ... This is effectively equivalent to Adam (Kingma & Ba, 2014) with "parameter scaling",
which scales the learning rate by the root-mean-square of the parameter matrix. Because the weight
initialization is proportional to 1/sqrt(n), the effect of this is similar to the manual scaling down of Adam
learning rate as in Brown et al. (2020). However, parameter scaling has the benefit that parameter
matrices which operate at different scales (the embeddings and layer norm scales) do not have their
learning rate scaled down at the same rate.... We use an Adafactor learning rate of 10^-2 for the first 10,000
steps, which is then decayed at a rate of 1/sqrt(k), where k is the step number. We train with momentum
of beta1 = 0.9 .... We use global norm gradient clipping (Pascanu et al. (2012)) with a value of 1.0 for all
models...
```
Overall, consistent with the Google recommendations, the following hyperparameters worked well for me:
```
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=False, warmup_init=False, lr=float(args.learning_rate))
...
accelerator.clip_grad_norm_(model.parameters(), 1.0)
```
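For completeness, a hedged sketch of the PaLM-style schedule quoted above (constant 1e-2 for the first 10k steps, then 1/sqrt(k) decay; purely illustrative, and `model` is assumed to be defined):
```python
import torch
from transformers.optimization import Adafactor

optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,   # PaLM-style "parameter scaling"
    relative_step=False,
    warmup_init=False,
    lr=1e-2,
)
# Constant lr for 10k steps, then decay proportional to 1/sqrt(step); continuous at step 10k.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: 1.0 if step < 10_000 else (10_000 / step) ** 0.5
)
```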
### Expected behavior
N/A (documentation fix) | 10-06-2022 17:30:47 | 10-06-2022 17:30:47 | Sorry I'll skip this, I'm not very well versed in that area.
What led you to ping me, if it's not too much to ask? (Since maybe there are better people to ping here.)<|||||>hi @Narsil thanks and no worries! I didn't find a section on optimizers in the issue builder, so I pinged the people in the closest two areas (trainer and pipeline). I am guessing "trainer" / @sgugger may be better able to answer the issue.<|||||>Transformers is not a library of optimizers, so you should really use an implementation of `Adafactor` from somewhere else that suits your need. It will be deprecated and removed in future versions :-) (Note that it comes from fairseq originally, so that's probably the reason you have comments at odds with T5x)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,386 | closed | Different Embeddings values on different OS | ### System Info
Sentence Transformers Version - 2.2.0
Platform - Windows 10
Python Version - 3.8.5
I am trying to create word embeddings for a couple of words, and the same embeddings are not getting generated on machines with different OSes. I have checked it on Windows and Linux machines. I am trying to perform clustering on text embeddings, and the embeddings are not the same for a particular word, hence the overall clusters are not the same. I am doing all development on my Windows machine and the final deployment is on an AWS EC2 instance, which is a Linux machine. The resulting embeddings are not the same on both machines. Can you please help me solve this issue? I have tried setting all the seed values, but the embeddings are still not the same. There is a very small variation that starts at the 4th or 5th decimal position in the embeddings array.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from sentence_transformers import SentenceTransformer, util
import random
import numpy as np
import torch
import os
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ['PYTHONHASHSEED'] = str(42)
torch.use_deterministic_algorithms(True)
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v2')
sentences = ["customer experience"]  # example input; `sentences` was undefined in the original snippet
embeddings = model.encode(sentences)
model.encode("customer experience")
```
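To quantify the drift across machines, one can save the arrays and compare them with a tolerance (the file names are illustrative):
```python
import numpy as np

a = np.load("emb_windows.npy")  # saved with np.save on the Windows machine
b = np.load("emb_linux.npy")    # saved with np.save on the Linux machine
print(np.abs(a - b).max())            # e.g. ~1e-4 for typical cross-platform drift
print(np.allclose(a, b, atol=1e-3))   # compare with a tolerance instead of exact equality
```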
### Expected behavior
The embedding created doesn't have exactly the same values across different machines, especially machines with different OSes. The values start to differ at the 4th or 5th decimal position of every element. Here is the embedding array from my machine for the phrase "customer experience":
array([-5.63166857e-01, -4.05957282e-01, 3.00267637e-01, -2.46767655e-01,
4.89773035e-01, -1.94317810e-02, 1.80651009e-01, 8.92449439e-01,
-1.74235195e-01, 3.29178236e-02, -1.19764984e-01, 2.58512050e-01,
1.51172578e+00, -5.46738386e-01, -1.15303159e-01, -5.24251983e-02,
-2.24761695e-01, 3.28272223e-01, 4.98460889e-01, -8.20172966e-01,
-1.17172766e+00, -9.98448491e-01, 3.21752965e-01, 3.72964174e-01,
1.82584435e-01, -4.08045053e-01, -2.02570185e-01, -4.01083052e-01,
-1.54582113e-01, 6.08542264e-02, 3.55301678e-01, 1.58671722e-01,
-4.71475840e-01, -4.93791938e-01, 1.62821263e-04, 3.33021164e-01,
2.97434449e-01, 3.72983813e-01, -5.82175553e-01, -8.59432593e-02,
-1.84757441e-01, -5.53481221e-01, 6.05549157e-01, -1.52354419e-01,
-8.89008582e-01, -1.22463606e-01, -6.02528095e-01, -1.82574391e-01,
3.01688969e-01, 6.89519763e-01, 2.30612442e-01, 6.26742125e-01,
8.43013823e-02, -3.03132862e-01, -1.85130581e-01, 5.28024077e-01,
6.71206862e-02, -9.32246521e-02, -4.03505266e-02, -4.49038267e-01,
5.06386906e-03, 7.86191404e-01, -8.70651156e-02, -7.72568226e-01,
-1.32925123e-01, -1.24123693e-01, 3.29535365e-01, -5.11285424e-01,
-5.65095618e-02, 9.33079541e-01, 3.53344619e-01, -5.66991568e-01,
2.39370614e-01, 6.86836958e-01, -1.44293070e+00, -2.73904860e-01,
-1.90752760e-01, 5.77968955e-01, -3.78967732e-01, -2.59176493e-01,
2.76730835e-01, -5.14467835e-01, 1.06894684e+00, -4.06756431e-01,
-6.92828238e-01, -2.19953716e-01, 4.77855325e-01, -5.88070691e-01,
5.13936020e-02, 2.48879939e-01, -4.67677772e-01, -2.15098113e-01,
-1.09672315e-01, 1.01601869e-01, -2.71980494e-01, 4.15393680e-01,
2.42622405e-01, 1.73546404e-01, -1.73137829e-01, 9.69614685e-02,
4.23627317e-01, -9.35343504e-02, 8.40337425e-02, -3.80988598e-01,
2.08486021e-01, 5.14860749e-01, 3.26781601e-01, -5.36286473e-01,
-3.18198889e-01, 8.19383442e-01, -6.75107002e-01, -1.86185926e-01,
3.88082922e-01, 8.55610073e-01, -7.86133289e-01, 3.95356789e-02,
-2.44248822e-01, 1.14838436e-01, 6.87963545e-01, -9.37253654e-01,
1.19670846e-01, 2.22856849e-02, -1.01163872e-02, 2.25836709e-01,
8.92986879e-02, -6.63402498e-01, 5.70526302e-01, 6.88406408e-01,
-8.66231248e-02, -4.10765529e-01, 5.30590117e-01, -7.02219427e-01,
-3.93625051e-01, 6.24131560e-01, 1.48762420e-01, 8.14396262e-01,
4.03758168e-01, -4.09283876e-01, -1.13471504e-02, 1.74081907e-01,
2.16557682e-01, -8.00780594e-01, 3.03449005e-01, -2.27454484e-01,
-1.42966017e-01, -5.93980193e-01, 6.39644504e-01, -4.82465982e-01,
-5.32015800e-01, -9.92556393e-01, 6.19081676e-01, 1.07305683e-01,
-1.31213859e-01, -1.93007499e-01, 1.17079806e+00, 2.76987970e-01,
-9.27469432e-01, 4.39499795e-01, -4.15544622e-02, 7.88270384e-02,
3.29236805e-01, 3.67188096e-01, -1.04401684e+00, 3.53199422e-01,
2.66258687e-01, 7.28520513e-01, -1.70863360e-01, -3.29261243e-01,
-1.86119117e-02, -3.16396415e-01, 1.98385924e-01, -3.98931444e-01,
-2.50344127e-01, 7.89347351e-01, 2.74530977e-01, 3.58546704e-01,
-3.60908270e-01, 4.97751117e-01, -2.81880677e-01, 1.68201163e-01,
-1.12762606e+00, 7.02131689e-01, 1.80761516e-01, -9.53825295e-01,
3.74447078e-01, -3.55577737e-01, 8.39326233e-02, -7.67105103e-01,
-8.43731999e-01, -1.86966315e-01, 5.03540993e-01, -6.08295083e-01,
-3.00569564e-01, -1.36414242e+00, -4.82496992e-02, -9.76607054e-02,
-6.12891853e-01, 1.57747135e-01, -2.03161985e-01, 2.40768135e-01,
6.33511603e-01, 2.32761055e-01, -1.51648432e-01, -3.39404374e-01,
2.62024403e-01, -4.33223426e-01, 1.16399661e-01, -7.55017877e-01,
2.25884423e-01, -3.73176008e-01, -3.69134128e-01, -3.18936348e-01,
-1.70973599e-01, 7.32566595e-01, 4.68904078e-01, 7.00135976e-02,
-3.62482786e-01, -2.02929229e-01, -7.19937533e-02, 2.56802320e-01,
3.79254043e-01, 6.80404246e-01, 4.17938679e-01, 3.91916335e-01,
-4.78704631e-01, 6.18772432e-02, 3.69294941e-01, 2.43110564e-02,
-2.21559495e-01, -6.37414038e-01, 4.22997415e-01, 2.84579862e-02,
1.39831871e-01, -7.43579507e-01, 2.52516031e-01, 1.08011149e-01,
3.73635620e-01, 1.69237405e-01, -1.94794923e-01, 4.08671081e-01,
-5.18766701e-01, 3.21041405e-01, -3.61130059e-01, 9.24525499e-01,
2.80599803e-01, -5.23387730e-01, -9.23230588e-01, 2.09240839e-01,
5.50950229e-01, -5.63352942e-01, -4.63511765e-01, 2.38961935e-01,
3.58597219e-01, 4.27797139e-01, -1.00327037e-01, -1.08362997e+00,
1.55897349e-01, 5.38530573e-02, 1.59043074e-03, 2.29418337e-01,
-5.35291284e-02, -1.12637460e-01, 2.65441805e-01, 4.49611723e-01,
3.90090346e-01, -1.42261416e-01, -7.70705462e-01, 1.08629473e-01,
5.40238500e-01, 1.08955741e+00, -5.29613614e-01, -5.03211975e-01,
3.90169293e-01, 9.20682132e-01, 6.66368484e-01, -3.91029358e-01,
-3.09388995e-01, 2.70938456e-01, 6.76514268e-01, -3.87805164e-01,
-1.60892338e-01, -3.64872932e-01, 3.67217273e-01, -7.62496114e-01,
7.96184301e-01, -4.87817109e-01, -9.04241800e-01, 5.17966866e-01,
-1.11159825e+00, 8.57870877e-02, 8.98796916e-02, 3.31583843e-02,
2.30660737e-01, -3.57683510e-01, 1.25084507e+00, -6.78460658e-01,
7.95050085e-01, 9.12836134e-01, -3.08217525e-01, -4.36114669e-02,
3.08174826e-02, -3.00375223e-01, 4.11211967e-01, 1.09019957e-01,
7.06879079e-01, -6.82136357e-01, 5.54503620e-01, -1.12970269e+00,
-8.21152806e-01, -1.34905732e+00, 3.00113320e-01, -5.02252460e-01,
-2.98326731e-01, -6.62151694e-01, 1.02041280e+00, 1.64372265e-01,
1.27767578e-01, 1.05911744e+00, 4.48069215e-01, 3.38572681e-01,
-1.08860053e-01, -4.10779119e-01, -2.82041848e-01, 1.19134068e+00,
1.02312341e-02, -4.56356674e-01, 1.92146748e-01, 3.40512484e-01,
-4.04280692e-01, -6.11404777e-01, 5.63679859e-02, 4.72349763e-01,
4.93698537e-01, -4.36762571e-01, -1.56004876e-02, 2.46875226e-01,
-1.43379673e-01, 3.10023427e-02, 3.13399971e-01, 2.04513907e-01,
-8.23624253e-01, 1.72084451e-01, 4.49703097e-01, -9.49652433e-01,
1.19886644e-01, -4.77594048e-01, 5.51294923e-01, -7.20850348e-01,
-4.27250206e-01, -4.53100443e-01, 8.13941360e-01, 4.01361167e-01,
6.83571458e-01, -3.42129886e-01, -7.66427994e-01, -3.53065670e-01,
5.49451828e-01, 1.82685345e-01, -1.86077744e-01, -1.42353363e-02,
2.29258999e-01, -3.30613971e-01, 3.69689107e-01, -6.29568338e-01,
7.90782347e-02, 3.44798952e-01, 5.59364378e-01, 7.05829799e-01,
-9.61028263e-02, -1.39723748e-01, -2.31106445e-01, 2.35272795e-01,
-6.72725201e-01, -1.37946084e-02, -1.04533529e+00, -5.14720857e-01,
-6.02638245e-01, 1.42247796e-01, 1.38257787e-01, -3.10868174e-02,
1.48533672e-01, -2.18283951e-01, -4.00203288e-01, -5.81396222e-01,
1.10336840e+00, 1.29402208e+00, 1.06964624e+00, -3.32895130e-01,
2.55944878e-01, 6.79058790e-01, 3.22150648e-01, -1.64049804e-01,
-9.84220207e-02, 6.52461171e-01, 1.86710641e-01, 2.99713403e-01,
-5.97481191e-01, -9.41333696e-02, -9.03365016e-02, 9.17031825e-01,
6.96043000e-02, 3.91068816e-01, 9.05843750e-02, -1.76928818e-01,
8.88674974e-01, 6.19346559e-01, -5.14562845e-01, -4.47102636e-01,
2.60381103e-01, -1.22727379e-01, -6.05612040e-01, 2.77419269e-01,
-2.34546542e-01, -9.54378620e-02, 6.49136305e-03, -4.91520852e-01,
8.34568143e-01, -2.58982517e-02, -2.86573529e-01, -6.15404367e-01,
-1.51788199e+00, 3.47156405e-01, -8.39735866e-01, 3.24092031e-01,
6.57103062e-01, 6.23090267e-01, 2.63404757e-01, -4.45135310e-02,
9.08290148e-01, 1.18319124e-01, 8.70594263e-01, 6.80169523e-01,
-4.84604776e-01, -7.03717947e-01, -1.89168632e-01, 1.16403615e+00,
-3.50110173e-01, -4.15479571e-01, -9.21172857e-01, -2.33189672e-01,
6.42113864e-01, 8.00730109e-01, 3.99987459e-01, 3.83187056e-01,
4.83411551e-01, -4.20992970e-02, 5.06903112e-01, 7.40851760e-01,
9.11108702e-02, 6.55519247e-01, 7.62610734e-01, 1.12601042e-01,
-4.01560158e-01, -2.08203390e-01, -4.87336189e-01, 5.74378014e-01,
5.99273086e-01, -5.23595288e-02, -7.59932876e-01, -3.45638156e-01,
6.99717045e-01, -1.51044503e-01, 5.20237088e-01, -3.08910757e-03,
1.49888724e-01, -2.29050353e-01, -4.98495191e-01, 2.51217410e-02,
4.10942405e-01, -1.57569438e-01, 2.43655652e-01, 1.33666843e-02,
3.19108926e-02, 2.01601386e-01, 1.30144671e-01, 2.91789353e-01,
-1.87403232e-01, -1.12883002e-01, 5.42151570e-01, -2.47579753e-01,
5.09843528e-01, -4.74907577e-01, 1.22318432e-01, -8.71497840e-02,
1.10734373e-01, 2.24654555e-01, 7.06339240e-01, -1.18613824e-01,
1.79778591e-01, 6.78329289e-01, -2.88403273e-01, -3.57292056e-01,
9.37119365e-01, 1.15470958e+00, 1.79152638e-01, 1.75601542e-01,
2.84290433e-01, -3.61450374e-01, 2.07007974e-01, 2.91608930e-01,
-6.35592461e-01, -8.93313050e-01, 1.05036795e-01, 8.57329369e-03,
6.08366072e-01, -5.03044486e-01, 3.17721739e-02, -4.24353957e-01,
3.90238464e-01, -3.29834163e-01, -6.89130187e-01, -4.17219624e-02,
-9.35876787e-01, 2.66513348e-01, 3.34133267e-01, -3.65045339e-01,
-6.92205131e-01, -5.72713852e-01, -4.77733314e-01, -5.86308017e-02,
1.98600173e-01, -1.85073182e-01, -5.17492890e-01, 3.38486731e-01,
-4.74322766e-01, 8.16874862e-01, -7.71266043e-01, 8.25465083e-01,
-2.50290662e-01, 7.52730444e-02, -6.25011086e-01, -8.58676061e-02,
-4.33004260e-01, 4.56393622e-02, -2.78941654e-02, -2.53382444e-01,
-8.48090887e-01, -5.19386292e-01, -6.39506280e-01, -5.87998986e-01,
-3.09086069e-02, -4.45444703e-01, 7.53717065e-01, 1.12176526e+00,
-1.47348925e-01, 5.91460109e-01, 1.49989009e-01, 5.84628761e-01,
-7.06241906e-01, 4.73896340e-02, -4.02556092e-01, -3.51079516e-02,
5.82646608e-01, 4.22980964e-01, -1.13974705e-01, 5.19442677e-01,
-4.21998501e-01, 4.76445556e-02, -6.82383329e-02, 9.83098507e-01,
5.77297986e-01, 6.72681808e-01, 4.63875353e-01, -4.40883100e-01,
3.28395277e-01, -4.51458216e-01, -1.08331466e+00, -2.27949128e-01,
-3.48160297e-01, -6.54514432e-01, -1.06261909e+00, 3.78970280e-02,
3.76855463e-01, 1.23420453e+00, -1.54484093e-01, -2.39598811e-01,
-6.96872354e-01, 1.58317983e-01, 3.26650649e-01, 6.56132340e-01,
9.27726999e-02, 1.17278016e+00, 2.04693019e-01, 9.35090780e-02,
-4.41390455e-01, -3.65751505e-01, 1.49403632e-01, -1.13220736e-01,
-1.06763467e-01, -6.80416882e-01, -5.72383285e-01, -1.00686356e-01,
8.13092351e-01, 3.27822149e-01, -6.00021541e-01, 3.44711006e-01,
8.28786194e-02, 1.25907615e-01, 4.17931914e-01, -8.35630968e-02,
5.91417730e-01, 2.51130730e-01, 4.58533823e-01, -1.83726788e-01,
4.93454754e-01, -4.29039717e-01, -6.57490715e-02, 2.03398407e-01,
-4.31751430e-01, 5.68911254e-01, 2.54821964e-02, -4.16832864e-01,
-2.70133823e-01, 5.73930085e-01, -6.77836776e-01, -5.92604160e-01,
-1.24327138e-01, -1.29152715e+00, -3.77081074e-02, -5.18579423e-01,
-2.62488842e-01, -3.72892916e-01, -3.80493939e-01, 7.40116090e-02,
-5.15156910e-02, -7.21140265e-01, -1.39724612e-01, 7.07901493e-02,
-1.12637803e-01, -1.60605922e-01, 1.51501581e-01, 3.13334197e-01,
1.21444154e+00, -2.14568496e-01, -5.66242695e-01, -2.38805786e-01,
-2.13572249e-01, -1.32878691e-01, 2.12020248e-01, 5.40322185e-01,
1.93933874e-01, 4.43719685e-01, 1.48676664e-01, 3.87566030e-01,
-8.89887452e-01, 8.66037533e-02, -2.93432958e-02, -5.26472628e-02,
8.01454112e-02, 3.83317508e-02, 1.04065776e+00, 4.99512762e-01,
-4.15351212e-01, 1.12056828e+00, 4.18051839e-01, 8.18798468e-02,
-1.22060739e-02, -4.64514703e-01, -6.00997984e-01, -1.78236380e-01,
1.37272656e-01, -1.48927256e-01, -3.94253761e-01, -6.18627429e-01,
-8.96688998e-01, 5.76650023e-01, 6.77368343e-02, 4.78950560e-01,
-8.79291445e-03, 1.49765313e-01, 1.85265213e-01, -7.22151637e-01,
4.68619287e-01, 2.87488699e-01, -4.97989774e-01, 3.60051811e-01,
-7.40101188e-02, 7.89022982e-01, -9.01167750e-01, 4.41429734e-01,
-1.04249132e+00, -9.19685781e-01, 1.15506038e-01, -7.13049531e-01,
-6.65355742e-01, -5.30628860e-01, -3.26595902e-01, 2.66646266e-01,
-1.25525951e-01, 5.60440779e-01, 5.07836461e-01, 3.95468861e-01,
9.60432529e-01, 4.94689703e-01, -3.03658307e-01, -1.77312210e-01,
-4.58492279e-01, -7.47409761e-01, 4.59275484e-01, -6.79710865e-01,
3.75889093e-02, 7.20455572e-02, 1.30812436e-01, -4.99181062e-01,
2.22169235e-01, -4.90931898e-01, 4.04202938e-01, -8.05476069e-01,
-6.52545542e-02, -8.90152752e-01, 7.38128006e-01, -7.10134208e-02,
4.61333185e-01, 6.00929521e-02, -8.14593077e-01, 2.95668125e-01,
-3.19611222e-01, -6.16702795e-01, -3.27287138e-01, 5.32396674e-01,
-3.02708775e-01, -3.89988780e-01, 8.80602375e-03, -6.62351489e-01,
4.32329148e-01, 4.50594246e-01, 4.41902071e-01, 4.36784565e-01,
-2.12716207e-01, 6.03905916e-01, 9.52148795e-01, -5.97970843e-01,
8.71068358e-01, -5.62861085e-01, -9.95771408e-01, 4.22280073e-01,
4.24299121e-01, -1.84334852e-02, 5.01072884e-01, -6.66608214e-01,
8.03120807e-02, 2.01032907e-01, 7.90493011e-01, -2.10665435e-01,
3.26374441e-01, -9.52832401e-02, 6.92926943e-01, 5.12748480e-01,
-8.07392776e-01, -5.92466474e-01, 6.91362977e-01, 6.96171284e-01,
-4.52700555e-01, -1.18983597e-01, -7.88870752e-02, -4.05955195e-01,
-1.73313439e-01, 5.43577015e-01, -5.59811592e-01, -6.02401972e-01,
1.25281483e-01, -7.22728595e-02, -9.14074957e-01, 1.59500167e-01,
3.40227425e-01, 1.24806687e-01, -4.74854290e-01, -4.31868196e-01],
dtype=float32) | 10-06-2022 17:17:56 | 10-06-2022 17:17:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,385 | closed | update attention mask handling | # What does this PR do?
Fixes an error when using Whisper with the Inference API.
Working script:
```python
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor, AutomaticSpeechRecognitionPipeline
>>> from datasets import load_dataset
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(task="transcribe", language = "en")
>>> model.config.max_length = 224
>>> pipeline = AutomaticSpeechRecognitionPipeline(
model = model,
tokenizer = processor.tokenizer,
feature_extractor = processor.feature_extractor)
>>> print(pipeline(ds[0]["audio"]["array"]))
{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'}
``` | 10-06-2022 16:48:56 | 10-06-2022 16:48:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Using the `pipeline` wrapper also works : `pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium.en", device=0)`. |
transformers | 19,384 | closed | Download pretrained models from a new conda virtualenv with higher python version and higher transformers version | ### System Info
Old
python==3.7
transformers==3.5.0
New
python==3.9
transformers==4.22.2
### Who can help?
@LysandreJik, @NielsRogge
Hello! Nothing really critical, I think, but I stumbled upon this message so I'm sharing it here. I think this is some kind of legacy-handling path, and everything still works fine; it is just a warning message that I wanted to share.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download distilbert-base-uncased in an environment with python=3.7 & transformers==3.5
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

entk = AutoTokenizer.from_pretrained("distilbert-base-uncased")
enlm = AutoModelWithLMHead.from_pretrained("distilbert-base-uncased")
```
2. Create another virtualenv with python=3.9
3. install transformers==4.22.2
4. Download distilbert-base-uncased by using the same snippet above
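One possible workaround (an assumption based on the warning text below, not an official fix) is to upgrade both packages and then resume the interrupted migration manually, since the message says it can be resumed:

```python
# After upgrading, e.g. with: pip install -U transformers huggingface_hub
from transformers.utils import move_cache

move_cache()  # resumes the one-time cache migration mentioned in the warning
```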
### Expected behavior
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 114 files to the new cache system
0%| | 0/114 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 1128, in <module>
move_cache()
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 1071, in move_cache
hub_metadata[url] = get_hub_metadata(url, token=token)
File "/Users/sujoungbaeck/opt/anaconda3/envs/hubert-api-package__3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata
huggingface_hub.file_download._raise_for_status(r)
AttributeError: module 'huggingface_hub.file_download' has no attribute '_raise_for_status'
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help. | 10-06-2022 15:52:04 | 10-06-2022 15:52:04 | Thank you for the report @sujoung, looking into it.<|||||>Indeed, this will be fixed once https://github.com/huggingface/transformers/pull/19244 is in a release (the release will likely be done Monday or Tuesday)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,383 | closed | Run text-classification example with AdaHessian optimizer | ### System Info
torch 1.12.1+cu113
transformers 4.23.0.dev0
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I want to use the AdaHessian optimizer in the [text-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) example `run_glue_no_trainer.py`. To do so, I have modified the part of the code where the optimizer is selected. That is, instead of this
```python
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
```
and this,
```python
for epoch in range(starting_epoch, args.num_train_epochs):
    model.train()
    if args.with_tracking:
        total_loss = 0
    for step, batch in enumerate(train_dataloader):
        # We need to skip steps until we reach the resumed step
        if args.resume_from_checkpoint and epoch == starting_epoch:
            if resume_step is not None and step < resume_step:
                completed_steps += 1
                continue
        outputs = model(**batch)
        loss = outputs.loss
        # We keep track of the loss at each epoch
        if args.with_tracking:
            total_loss += loss.detach().float()
        loss = loss / args.gradient_accumulation_steps  # Do we need this? backwards does this calculation...
        accelerator.backward(loss)
        if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
            progress_bar.update(1)
            completed_steps += 1
```
I am using this
```python
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]

if args.optimizer == 'AdamW':
    optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
elif args.optimizer == 'AdaHessian':
    optimizer = AdaHessian(optimizer_grouped_parameters, lr=args.learning_rate)
```
and this
```python
for epoch in range(starting_epoch, args.num_train_epochs):
    model.train()
    if args.with_tracking:
        total_loss = 0
    for step, batch in enumerate(train_dataloader):
        # We need to skip steps until we reach the resumed step
        if args.resume_from_checkpoint and epoch == starting_epoch:
            if resume_step is not None and step < resume_step:
                completed_steps += 1
                continue

        # batch = Variable(**batch, requires_grad=True)
        def closure(backward=True):
            if backward:
                optimizer.zero_grad()
            outputs = model(**batch)
            loss = outputs.loss
            if backward:
                # loss = Variable(loss, requires_grad=True)  # Didn't help
                # create_graph=True is necessary for Hessian calculation
                accelerator.backward(loss, create_graph=True)
            return loss

        loss = closure(backward=False)
        # We keep track of the loss at each epoch
        if args.with_tracking:
            total_loss += loss.detach().float()
        if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
            optimizer.step(closure=closure)
            lr_scheduler.step()
            progress_bar.update(1)
            completed_steps += 1
```
respectively. The AdaHessian is given [here](https://github.com/davda54/ada-hessian/blob/master/ada_hessian.py).
### Expected behavior
Normally, it should continue training, but

```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```

is returned by

```python
h_zs = torch.autograd.grad(grads, params, grad_outputs=zs, only_inputs=True, retain_graph=i < self.n_samples - 1)
```

in the optimizer's function

```python
@torch.no_grad()
def set_hessian(self):
    """
    Computes the Hutchinson approximation of the hessian trace and accumulates it for each trainable parameter.
    """
    params = []
    for p in filter(lambda p: p.grad is not None, self.get_params()):
        if self.state[p]["hessian step"] % self.update_each == 0:  # compute the trace only each `update_each` step
            params.append(p)
        self.state[p]["hessian step"] += 1

    if len(params) == 0:
        return

    if self.generator.device != params[0].device:  # hackish way of casting the generator to the right device
        self.generator = torch.Generator(params[0].device).manual_seed(2147483647)

    grads = [p.grad for p in params]

    for i in range(self.n_samples):
        zs = [torch.randint(0, 2, p.size(), generator=self.generator, device=p.device) * 2.0 - 1.0 for p in params]  # Rademacher distribution {-1.0, 1.0}
        h_zs = torch.autograd.grad(grads, params, grad_outputs=zs, only_inputs=True, retain_graph=i < self.n_samples - 1)
        for h_z, z, p in zip(h_zs, zs, params):
            p.hess += h_z * z / self.n_samples  # approximate the expected values of z*(H@z)
```
The error is returned because the gradients in `grads`, built from the `params` list, do not carry a `grad_fn`. I suspect that the problem is related to the input of the optimizer (e.g. the loss in the backward function). According to this [post](https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074/43), I have tried, for example,

```python
loss = Variable(loss, requires_grad=True)
```

before `backward` in the closure, which makes the script start running, but the accuracy stays around 45% and does not improve. Could you please take a look at the problem and make a suggestion to overcome it?
I just noticed in the trace back that `lr_scheduler` is mentioned before the error in `torch.autograd.grad`.
```
Traceback (most recent call last):
File "some root/run_glue.py", line 730, in <module>
main()
File "some root/run_glue.py", line 621, in main
optimizer.step(closure=closure)
File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/accelerate/optimizer.py", line 140, in step
self.optimizer.step(closure)
File "some rootanaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "some rootanaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
return func(*args, **kwargs)
File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "some root/AdaHessian.py", line 105, in step
self.set_hessian()
File some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "some rootcubicReg/Code/Optimizers/AdaHessian.py", line 87, in set_hessian
retain_graph=i < self.n_samples - 1)
File "some root/anaconda3/envs/AdaCubic/lib/python3.7/site-packages/torch/autograd/__init__.py", line 278, in grad
allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
0%| | 0/6315 [00:02<?, ?it/s]
```
I suspected that something involving `grad_fn` was happening inside the accelerator. Thus, commenting out

```python
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
```

makes the optimization procedure start running, which indicates that `grad_fn` is somehow disabled inside the accelerator. Could someone please suggest a way to overcome this problem? | 10-06-2022 15:32:32 | 10-06-2022 15:32:32 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!
cc @sgugger <|||||>>
Sorry for my misplaced post. I think the problem is solved. To be honest, I just re-applied the modifications to the original code more carefully, and now it seems to be working. You can delete my post or move it to the forum if you find that more appropriate. Sorry again for the inconvenience.
transformers | 19,382 | closed | Added tokenize keyword arguments to feature extraction pipeline | # What does this PR do?
The PR adds keyword arguments for the tokenizer for the feature extraction pipeline. Fixes: https://github.com/huggingface/transformers/issues/19374
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
@Narsil
Also for quality you should be able to to
```
pip install -e .[dev] # or pip install transformers[dev]
make fixup
```
Cheers.<|||||>@Narsil I made the changes you indicated.<|||||>@sgugger I have moved the import to the top.<|||||>Thanks a lot!
transformers | 19,380 | closed | Added type hints for TF: TransfoXL | Based on Issue #16059
I have added type hints for Tensorflow TransfoXL Model.
| 10-06-2022 14:35:37 | 10-06-2022 14:35:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I removed these Optional types as your suggested.
Holp you may check it and then merge my request.
@Rocketknight1 <|||||>Looks good to me, thank you! |
transformers | 19,381 | closed | Tokenizer loading distilbert instead of bert | **Machine specs:**
MacBook Pro (13-inch, M1, 2020)
chip Apple M1
Memory 16 GB
Hello,
I am trying to use the Hugging Face pipelines, which work fine on Colab, but on my machine they behave absurdly:
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, FillMaskPipeline

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForMaskedLM.from_pretrained(name)
unmasker = FillMaskPipeline(model=model, tokenizer=tokenizer)
unmasker("[MASK] is the capital of France.", top_k=10)
```
<img width="1063" alt="image" src="https://user-images.githubusercontent.com/14794584/194323827-0f2942a3-94c5-4e7e-a3a7-421d7de8b391.png">
Then, if you try it on Colab, it works just fine:
<img width="1267" alt="image" src="https://user-images.githubusercontent.com/14794584/194324180-e17e7ffb-9f2c-436b-984f-c92755f5fb89.png">
Am I doing something really stupid, or is it genuinely a problem?
Thanks
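A quick diagnostic worth trying (this is an assumption, not a confirmed fix) is to pin TensorFlow to the CPU, which rules out the Metal/GPU backend as the source of the divergent predictions:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, FillMaskPipeline

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
with tf.device("/CPU:0"):  # bypass the Metal plugin entirely
    model = TFAutoModelForMaskedLM.from_pretrained(name)
    unmasker = FillMaskPipeline(model=model, tokenizer=tokenizer)
    print(unmasker("[MASK] is the capital of France.", top_k=10))
```

If the CPU run matches Colab, the Metal plugin is the likely culprit.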
| 10-06-2022 13:25:31 | 10-06-2022 13:25:31 | Hi this issue probably belongs in `transformers` not in `tokenizers` so I'll transfer the issue.
That being said if you could
> Please share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.
That would help, you're probably running a different version of the code.
also you mention it's running distilbert, how do you know ?
<|||||>From a quick look, the issue likely comes from PyTorch's `mps` support which seems to give different results for the same operations<|||||>> Hi this issue probably belongs in `transformers` not in `tokenizers` so I'll transfer the issue. That being said if you could
>
> > Please share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.
>
here is the system information from the command you have mentioned:
WARNING:tensorflow:From /opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
Metal device set to: Apple M1
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
2022-10-07 13:42:35.226096: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-10-07 13:42:35.226193: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.22.2
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
> That would help, you're probably running a different version of the code. also you mention it's running distilbert, how do you know ?
I just copy paste the same code on Google Colab and it gives a different result, I suppose it's using distillert because a warning message appears, though I am not sure of this .
<|||||>> From a quick look, the issue likely comes from PyTorch's `mps` support which seems to give different results for the same operations
I actually tried removing the PyTorch library, still the same problem. Any suggestions are welcome<|||||>Hmm it seems like `Tensforflow` with M1 is having an issue:
```
2022-10-07 13:42:35.226096: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-10-07 13:42:35.226193: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
```
Don't have an M1 handy to test it on though :(<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,379 | closed | fill-mask with roberta-base and --targets options | Dear @sgugger,
I was trying to use the fill-mask function starting from "roberta-base" and limiting the search to target words using the --targets option.
For numerous target words, however, I get the warning that the word does not exist in the vocabulary.
An example is followed:
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model="roberta-base", tokenizer="roberta-base", top_k=10)
filled = unmasker("When I am hungry, I eat a <mask>.", targets=["pizza", "banana", "pasta"])
for r in filled:
print(r['token_str'], "->", r['score'])
```
The output is:
```
The specified target token `pizza` does not exist in the model vocabulary. Replacing with `p`.
The specified target token `banana` does not exist in the model vocabulary. Replacing with `ban`.
The specified target token `pasta` does not exist in the model vocabulary. Replacing with `past`.
p -> 1.0362288094256655e-07
ban -> 1.0942345918252272e-09
past -> 5.667477598336745e-10
```
This problem does not exist with BERT:
```
pizza -> 0.0014738412573933601
banana -> 0.0009286535205319524
pasta -> 1.1728033314284403e-05
```
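A sketch of a likely workaround, assuming the difference comes from RoBERTa's byte-level BPE, which encodes word-initial tokens with a leading space (each space-prefixed word here is assumed to exist as a single vocabulary token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="roberta-base", top_k=10)
# Prepend a space so the targets match RoBERTa's word-initial tokens
filled = unmasker("When I am hungry, I eat a <mask>.", targets=[" pizza", " banana", " pasta"])
for r in filled:
    print(r["token_str"], "->", r["score"])
```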
Could you explain to me the reason for this behaviour? How can I fix it? | 10-06-2022 12:39:42 | 10-06-2022 12:39:42 | Hi there! Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. In this instance, it's because RoBERTa uses a different tokenization algorithm than BERT, which marks the beginning of each word with a special symbol.<|||||>Thanks!
transformers | 19,378 | closed | Add TF whisper | # What does this PR do?
Adds TF Whisper port of PyTorch implementation
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 10-06-2022 11:52:37 | 10-06-2022 11:52:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,377 | closed | Fix DETR docs example, add post_process_object_detection to DETR docs | # What does this PR do?
- Fixes DETR docs example
- Adds post_process_object_detection method to DETR docs
## Before submitting
- [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-06-2022 10:04:02 | 10-06-2022 10:04:02 | I'm re-running the documentation build to check that the doc is built correctly, will merge afterwards :+1: <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,376 | closed | fixed issue #19368 | Fixes #19368
Following the issue #19368, I've corrected the type hint as "Optional[Tuple[int, float]]".
Please merge this PR. | 10-06-2022 10:00:37 | 10-06-2022 10:00:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(merging -- the failed test is being tracked internally) |
transformers | 19,375 | closed | DeformableDetrForObjectDetection is not supported | ### System Info
when I use
`conda create -n hug numpy matplotlib transformers python=3.8` and activate the hug env, or just use `pip install transformers`, I get the same result:
```
(hug) root@e:/# python
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'DeformableDetrForObjectDetection' from 'transformers' (/opt/miniconda3/envs/hug/lib/python3.8/site-packages/transformers/__init__.py)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1.conda create -n hug numpy matplotlib transformers python=3.8
2.conda activate hug
3.from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection
### Expected behavior
Can anyone tell me why? | 10-06-2022 09:56:43 | 10-06-2022 09:56:43 | Hi,
Deformable DETR is not yet available in a PyPi release. For now, you have to install the library from source:
```
pip install -q git+https://github.com/huggingface/transformers.git
``` |
transformers | 19,374 | closed | Feature extraction pipeline not consider parameters | ### System Info
- transformers==4.22.2
- python==3.9.2
- Ubuntu 22.04
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from torch.utils.data import Dataset
from transformers import AutoModel, pipeline
from transformers import AutoTokenizer
model_name="anferico/bert-for-patents"
text = ["this is a pan"]
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModel.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True, model_max_length=512)
pipe_ = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=torch.cuda.current_device())
p = pipe_(text, padding=True, truncation=True, pad_to_max_length=True, return_tensors='np')
np.squeeze(p).shape
```
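For comparison, the padded `(512, 1024)` features can be obtained today by bypassing the pipeline and calling the tokenizer and model directly (a sketch reusing `model`, `tokenizer`, `text`, and `device` from the snippet above):

```python
import torch

enc = tokenizer(text, padding="max_length", truncation=True, max_length=512, return_tensors="pt").to(device)
with torch.no_grad():
    features = model(**enc).last_hidden_state  # shape: (1, 512, 1024)
print(features.squeeze(0).shape)
```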
### Expected behavior
There are several problems:
- It does not return a NumPy array, so I have to do the squeeze operation on my own.
- The bigger problem is that it does not respect the padding parameters: the expected return shape would be (512, 1024), but it now returns (6, 1024). I have tried every parameter setup at every level, but nothing worked. When I checked the source code, it only considers the truncation parameter.
`feature-extraction` pipeline is a bit of a beast tbh since there are MANY models and architectures behind.
That being said, the use case you describe seems very legit and interesting.
Do you want to open a PR for it ?
In order to prevent issues, the `tokenizer` part of the arguments should probably be sent as a group. For instance `max_length` is both an argument possible for tokenization and for `generate` function and they mean 2 very different things.
So doing :
```python
import torch
from torch.utils.data import Dataset
from transformers import AutoModel, pipeline
from transformers import AutoTokenizer
model_name="anferico/bert-for-patents"
text = ["this is a pan"]
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModel.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True, model_max_length=512)
pipe_ = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=torch.cuda.current_device())
p = pipe_(text, tokenize_kwargs = {"padding": True, "truncation": True, "pad_to_max_length":True}, return_tensors='np')
np.squeeze(p).shape
```
Might be more explicit.
All arguments need to be declared explicitly in `_sanitize_parameters`.
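For illustration, a rough sketch of how such a `tokenize_kwargs` group could be routed there (names and structure are assumptions; the merged PR may differ in details):

```python
def _sanitize_parameters(self, truncation=None, tokenize_kwargs=None, return_tensors=None, **kwargs):
    if tokenize_kwargs is None:
        tokenize_kwargs = {}
    if truncation is not None:
        # keep backward compatibility with the existing bare `truncation` argument
        tokenize_kwargs["truncation"] = truncation
    preprocess_params = tokenize_kwargs  # forwarded to the tokenizer in preprocess()
    postprocess_params = {}
    if return_tensors is not None:
        postprocess_params["return_tensors"] = return_tensors
    return preprocess_params, {}, postprocess_params
```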
Would you be willing to open a PR for it ?<|||||>@Narsil I will try to do that, do you have any suggestions for similar reference implementation?<|||||>Does this help ?
https://huggingface.co/docs/transformers/v4.22.2/en/add_new_pipeline
You can go and try at it, doesn't matter how far you go, you can ping me on the PR I'll try to provide some guidance.
Thanks a lot ! |
transformers | 19,373 | closed | remove `return_dict_in_generate` condition on storing scores. | # What does this PR do?
Fixes an issue in `generate` where the `output_scores` (or `output_attentions` or `output_hidden_states` ) cannot be obtained unless `return_dict_in_generate` is set to `True`. This is problematic because it's not what we want when we have a flag for each of these outputs. | 10-06-2022 08:57:03 | 10-06-2022 08:57:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19373). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,372 | closed | [wip: test doc-build] | null | 10-06-2022 08:42:06 | 10-06-2022 08:42:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,371 | closed | Make retribert tokenizers independent from BertTokenizer | Part of a series of commits to step towards resolving #19303
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-06-2022 07:05:19 | 10-06-2022 07:05:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Are these line length check errors coming from the comments? Those are the only lines that approach the cusp of 119, and my editor tells me they're right under the line, and some of those comment lines come straight from the copied-from file<|||||>Took me enough tries, but finally passing checks. Would have saved myself a hefty bit of trouble if I had started with the styling tools.
I couldn't help but notice while going through the tools that `python utils/check_copies.py` will "correct" many existing files. Is this something that's intentionally held back? Does appear to pass checks without the inclusion of all those changes<|||||>Cleared up those little oversights.
Seems like the black version specified in setup.py is 22.3 and the one on my global python install is 22.6, so both should still be within the black project's promise of a yearly standard. The diff tells me that all the changes are just removing blank lines after function signatures or the first line of control flow blocks, which seems to be the usual black policy so I guess they just fixed some edge case that made it add a few empty lines mid-2022.
Also that failed check in a part of code that isn't changed is a bit annoying. `test_run_swag_no_trainer` seems to be building and testing a model, which I guess just failed as part of a stars aligning variance thing. I could change something superficial and invoke another CI run to confirm that I didn't stealth break a different module in a PR to make a module more independent, but the path does seem pretty separate.<|||||>This is because we are using the `--preview` flag, which breaks their promise of compatibility between the versions of the same year. We really like the way it formats all strings (in docstrings/warnings/multi-line strings in general) so we activated it. In three months, we'll switch to the 2023 version and remove the `--preview` flag, which should solve this issue for next year :-)
The failure is flaky indeed. Thanks a lot for your work on this!<|||||>I notice a little pile of deprecation warnings in that same leg of the test suite. If those aren't something that's being intentionally held back for compatibility reasons, I could put together another PR just mopping those up after all of the items from 19303 are cleared off or claimed<|||||>By all means! We are ignoring those for now, but we do need to clean them up at some point! |
transformers | 19,370 | closed | Removed Bert dependency from BertGeneration code base. |
# What does this PR do?
- Related to #19303
- Removed the `Bert` dependency from the `BertGeneration` code base.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-06-2022 04:56:35 | 10-06-2022 04:56:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your contribution! |
transformers | 19,369 | closed | edit: cast attention_mask to long in DataCollatorCTCWithPadding | # What does this PR do?
Many `inf` values are generated when training `Wav2Vec2ForCTC` with the DeepSpeed library, following [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py).
This happens because `Wav2Vec2ForCTC`'s forward pass contains logic that sums the `attention_mask`; when training with DeepSpeed,
https://github.com/huggingface/transformers/blob/7e7f62bfa72ca03e9f16285dad182f7c57cd8cab/src/transformers/trainer.py#L2390
this method casts the `attention_mask` dtype from int32 to float16.
`Wav2Vec2FeatureExtractor` produces the `attention_mask` with dtype int32.
here is example
```
import torch
from transformers import Wav2Vec2FeatureExtractor
feature_extractor = Wav2Vec2FeatureExtractor(return_attention_mask=True)
data = [{'input_values':[0.1,0.1,0.1]},{'input_values':[0.2,0.2,0.2,0.2,0.2]}]
attn_mask = feature_extractor.pad(data,padding = "longest",return_tensors="pt")['attention_mask']
print(attn_mask.dtype)
-> torch.int32
```
So I added one line in `DataCollatorCTCWithPadding` that casts the `attention_mask` from int32 to long:
```
batch['attention_mask'] = batch['attention_mask'].to(torch.long)
```
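For reference, a minimal sketch of where the cast sits inside the collator's `__call__` (field names follow the linked example script; label padding is omitted, and the `in batch` guard from the follow-up discussion below is included):

```python
import torch

class DataCollatorCTCWithPadding:
    # self.processor and self.padding are set elsewhere, as in the example script
    def __call__(self, features):
        input_features = [{"input_values": f["input_values"]} for f in features]
        batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
        if "attention_mask" in batch:
            # keep the mask integral so downstream fp16 handling cannot turn it into half precision
            batch["attention_mask"] = batch["attention_mask"].to(torch.long)
        return batch
```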
Fixes # [18080](https://github.com/huggingface/transformers/issues/18080)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
-->
| 10-06-2022 01:58:02 | 10-06-2022 01:58:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I added one more line:
```
if "attention_mask" in batch:
```
because in some cases the feature extractor config has `"return_attention_mask": false`.
but is
```
if self.processor.feature_extractor.return_attention_mask:
```
better to read? If so, I'll change it.
transformers | 19,368 | closed | Incorrect type hint of "exponential_decay_length_penalty" in function "generate" | Hi,
Please check below line,
https://github.com/huggingface/transformers/blob/7e7f62bfa72ca03e9f16285dad182f7c57cd8cab/src/transformers/generation_utils.py#L956
According to doc string, "exponential_decay_length_penalty (`tuple(int, float)`, *optional*, defaults to `model.config.exponential_decay_length_penalty`):" (https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1114)
the correct type hint should be `Optional[Tuple[int, float]]`: the tuple must have exactly 2 elements, an `int` in position 0 and a `float` in position 1. | 10-06-2022 01:02:33 | 10-06-2022 01:02:33 | |
transformers | 19,367 | closed | Improve and fix ImageSegmentationPipeline | # What does this PR do?
- Fixes the image segmentation pipeline test failures caused by changes to the postprocessing methods of supported models
- Updates the ImageSegmentationPipeline tests
- Improves docs, adds 'task' argument to optionally perform semantic, instance or panoptic segmentation
Note: the `test_small_model_pt` test is skipped due to a random weight-initialization error when loading the `hf-internal-testing/tiny-detr-mobilenetsv3-panoptic` model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ X] Did you write any new necessary tests?
| 10-05-2022 22:10:33 | 10-05-2022 22:10:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts @sgugger thank you for the review! All comments are addressed, I'll merge the branch once all tests are passing.<|||||>> Thanks for making these changes and improving the pipeline β
>
> I think the PR is good to go as is π Would just like to see a bit more test coverage of the different segmentation tasks. Just one per task in the pipeline, possibly adapting existing ones to avoid making the test suite significantly slower. Have you visualised or counted the number of different pixels between the outputs on this branch and main as validation?
Thank you! Yes, I visualized the segments and they are the same. |