repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 19,366 | closed | Rework pipeline tests | # What does this PR do?
The test fetcher has been pretty good at identifying which tests to run and even if we're not testing everything on each commit anymore, we've mostly avoided bad surprises.
Except for pipeline tests.
This is because the pipeline tests are structured in a way that makes it hard for the test fetcher to guess it has to run them, as they don't seem to rely on anything other than the pipeline code.
Also, running pipeline tests is annoying because you have to remember to activate a special env variable. That made sense back in the day when we had all tests in one folder, but now that they are nicely structured, it is completely unnecessary.
Thus this PR proposes two things:
1. remove the special marker for pipeline tests and the corresponding env variable. We know they are all in `tests/pipelines`.
2. run all pipeline tests any time there is some code change warranting at least one test, like we do for the examples. They take roughly the same time, and since the pipelines are a good integration test, I think it actually makes more sense to always run those than the examples. | 10-05-2022 22:07:02 | 10-05-2022 22:07:02 | Note: Flax tests are currently failing because it tries to run the tests inside the pipelines folder; I will fix this tomorrow by having all non-pipeline jobs not run any of the tests in the pipelines folder. The `--ignore` flag from pytest does not work for some reason, but the test fetcher can probably fix that somehow ^^.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I love this PR !
> Note: Flax tests are currently failing because it tries to run the tests inside the pipelines folder
Can't we make the tests parsable even when having neither PT nor TF ?
Here this:
`from transformers import DetrForSegmentation` seems to be the culprit (in `tests/pipelines/test_pipelines_for_segmentation.py`).
Shouldn't we have dummy models when `torch` is not available ?<|||||>> Can't we make the tests parsable even when having neither PT nor TF ?
I can certainly do that too, but the pipeline tests are isolated to not be run by the other jobs, so they shouldn't even be run.<|||||>> I can certainly do that too, but the pipeline tests are isolated to not be run by the other jobs, so they shouldn't even be run.
I'm thinking about a regular user using JAX (or none of the libraries, actually) doing `pytest -sv tests/`. IMO it'd be nice if the command ran instead of crashing. |
transformers | 19,365 | closed | Fix pipeline tests for Roberta-like tokenizers | # What does this PR do?
The pipeline tests rely on an inheritance test for the (weird) behavior of Roberta-like models/tokenizers having a +2 / -2 on the embeddings. This was working fine until someone decided to encourage the community to uncouple all configs, and now it breaks.
This PR fixes it. | 10-05-2022 21:35:11 | 10-05-2022 21:35:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19365). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,364 | closed | Make `Camembert` TF version independent from `Roberta` | # What does this PR do?
related to #19303
Making the Camembert model (`TensorFlow` version) independent from Roberta
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Camembert should not depend on Roberta
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 10-05-2022 21:09:15 | 10-05-2022 21:09:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, I'm getting the following errors from circleci :
```python
FAILED tests/pipelines/test_pipelines_feature_extraction.py::FeatureExtractionPipelineTests::test_pt_LongformerConfig_LongformerModel_LongformerTokenizerFast_nofeature_extractor
FAILED tests/pipelines/test_pipelines_feature_extraction.py::FeatureExtractionPipelineTests::test_pt_LongformerConfig_LongformerModel_LongformerTokenizer_nofeature_extractor
```
I don't see why `Longformer` shows up in the error when I haven't touched it. |
transformers | 19,363 | closed | Fix DETR segmentation postprocessing output | # What does this PR do?
Ensures post_process_instance_segmentation and post_process_panoptic_segmentation methods return a tensor of shape (target_height, target_width) filled with -1 values if no segment with score > threshold is found.
Applies the same changes as the MaskFormer postprocessing fix [PR](https://github.com/huggingface/transformers/pull/19354).
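For illustration, here is a minimal sketch of the described fallback behaviour; the variable names and target size are assumptions for the example, not the PR's exact code:

```python
import torch

# When no predicted segment clears the score threshold, return an "all ignored" map
# of the requested size so callers always get a (target_height, target_width) tensor.
target_height, target_width = 480, 640  # example target size
segmentation = torch.full((target_height, target_width), -1, dtype=torch.long)
segments = []  # no segment metadata is returned in this case
```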
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-05-2022 20:36:36 | 10-05-2022 20:36:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,362 | closed | Misspelled docstring for ensure_valid_input function | ### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
def ensure_valid_input(model, tokens, input_names):
"""
Ensure input are presented in the correct order, without any Non
Args:
model: The model used to forward the input data
tokens: BatchEncoding holding the input data
input_names: The name of the inputs
Returns: Tuple
"""
```
### Expected behavior
It is a really tiny tiny detail but when I was going through `convert_graph_to_onnx.py` file I noticed that there is a misspelled docstring for `ensure_valid_input` function. Namely, I believe that `"Ensure input are ..."` should be `"Ensure inputs are ...`. | 10-05-2022 19:46:31 | 10-05-2022 19:46:31 | Looks wrong indeed. Would you like to open a PR to fix it?<|||||>@sgugger sure, I will take care of that tomorrow. |
transformers | 19,361 | closed | Moving examples in docstrings of RobertaTokenizer and LongformerTokenizer to doc source files | ### Motivation
When using the `# Copied from` mode of do-repeat-yourself, if there are interactive examples in the source file's docstring, e.g., [`[0, 31414, 232, 328, 2]` in RobertaTokenizer](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/roberta/tokenization_roberta.py#L118), then it is currently impossible for the destination file to have accurate interactive examples without having complicated replace patterns in the `copy-check` module. e.g., [`[0, 31414, 232, 2]` is the correct output of this example in `LongformerTokenizer`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/tokenization_longformer.py#L127).
For now, to pass the copy check we have copied over the output line from the Roberta model to Longformer.
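To make the clash concrete, here is a small sketch (not part of the original issue) that runs both tokenizers on the same text; the example text is illustrative, and the ids in the comments are the ones quoted above from the respective docstrings:

```python
from transformers import LongformerTokenizer, RobertaTokenizer

roberta_tok = RobertaTokenizer.from_pretrained("roberta-base")
longformer_tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

text = "Hello world"
print(roberta_tok(text)["input_ids"])     # Roberta docstring shows [0, 31414, 232, 328, 2]
print(longformer_tok(text)["input_ids"])  # Longformer docstring shows [0, 31414, 232, 2]
# Since the two docstring output lines differ, a verbatim `# Copied from` check cannot pass
# without either replace patterns or copying the Roberta output into the Longformer docstring.
```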
> The example should probably go in the doc source file instead, so we can copy without any worry. For now in this PR I'd leave it as the Roberta output as you did. Then we can do a follow-up PR where we change the doc source files for Roberta and Longformer and remove that example from the two docstrings (I can do it or you can, as you prefer!)
_Originally posted by @sgugger in https://github.com/huggingface/transformers/pull/19346#discussion_r988069210_
| 10-05-2022 18:38:12 | 10-05-2022 18:38:12 | #self-assign<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,360 | closed | Fix gather for metrics | # What does this PR do?
This PR fixes a failing test in the CI due to `gather_for_metrics` not receiving a tuple. This will soon be redundant with a change in Accelerate, but good to have this fix in now anyways with the right version imo
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 10-05-2022 18:37:16 | 10-05-2022 18:37:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,359 | closed | Make `XLMRoberta` model and config independent from `Roberta` | # What does this PR do?
Removes the Roberta dependencies from `XLMRobertaConfig` and from everything inside `modeling_xlm_roberta.py`. This is related to issue #19303.
I only did this for the PyTorch model as there were some models in the issue that had "PyTorch + TF" and this was not one of them. I can add for tensorflow or flax if needed!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-05-2022 17:14:38 | 10-05-2022 17:14:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger
I ran `make style` and fixed your recommendations, but now there's an error in the `run_tests_torch`. I believe it's [this](https://app.circleci.com/pipelines/github/huggingface/transformers/48816/workflows/882fa75f-7b2f-45af-861c-1beb9881aeea/jobs/583177?invite=true#step-112-2911):
```
OSError: Rocketknight1/esm-2-8m is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
I don't really know how to fix that or what caused it.. any tips? |
transformers | 19,358 | closed | setting max_new_tokens in text-generation pipeline with OPT produces error | ### System Info
python 3.7.12
transformers 4.22.2
Google Vertex AI platform
### Who can help?
@LysandreJik
(Feel free to tag whoever owns OPT if that's not you! – it's not specified in the list)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
test_generator = pipeline(
"text-generation",
model="facebook/opt-125m",
do_sample=True,
device=device
)
response = test_generator(
"Here's how this model responds to a test prompt:",
max_new_tokens=200,
num_return_sequences=1,
)
print(response[0]['generated_text'])
```
### Expected behavior
This should generate text, but it produces this error:
ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
Meanwhile, the official documentation specifically recommends setting 'max_new_tokens' rather than 'max_length':
**max_length** (int, optional, defaults to model.config.max_length) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. In general, prefer the use of max_new_tokens, which ignores the number of tokens in the prompt.
**max_new_tokens** (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
The problem can be worked around by manually setting max_length=None, but that should happen by default as it does with other autoregressive models. The same code runs without error if you swap out the OPT model for EleutherAI/gpt-neo-125M. | 10-05-2022 17:04:58 | 10-05-2022 17:04:58 | Hey @gqfiddler 👋 -- thank you for raising this issue 👀
@Narsil this seems to be a problem between how `.generate()` expects the max length to be defined, and how the `text-generation` pipeline prepares the inputs. When `max_new_tokens` is passed outside the initialization, [this line](https://github.com/huggingface/transformers/blob/4dd784c32f76fb8285f205b94e2a6ebde731a1cd/src/transformers/pipelines/base.py#L1038) merges the two sets of sanitized arguments (from the initialization we have `max_length`, from the new kwargs we have `max_new_tokens`).
To fix this, we can either remove the `ValueError` from generate (but expose ourselves to weird errors) or add more logic to the pipelines, e.g. to ignore `max_length` when `max_new_tokens` is passed (which is not very pretty). WDYT?<|||||>Hey, thanks for the quick pickup on this!
FWIW, in my opinion the existing error + message is exactly the right response for the case where the caller explicitly passes in a value for both ```max_length``` and ```max_new_tokens```. The problem I'm pointing out here is the case where the caller passes in a value for ```max_new_tokens``` and NOT for ```max_length``` (as recommended in the documentation) and the model still raises this error. It seems pretty unproblematic to me for the pipeline code to ignore ```max_length``` (i.e. set it to None) in this case, since the caller has made clear how they wish to limit the model output. It may not be especially pretty, but that sort of conditional default argument value is plenty common and easy to use, so long as it's documented (e.g., in the "Normalize" parameter for [sklearn's linear regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html), "This parameter is ignored when fit_intercept is set to False.")
At any rate, one way or another, the method recommended in the documentation should run without error. In theory the very most minimal fix would be just to clarify this in the documentation for ```max_new_tokens``` with a note like "To use this parameter, you must set ```max_length``` to 0"... but given that other autoregressive model pipelines handle this case without throwing an error, probably better to change the code here too.<|||||>Hi,
The first `max_length` comes from the use of `config.prefix` in `facebook/opt-125m`: https://huggingface.co/facebook/opt-125m/blob/main/config.json
I don't think `prefix` is correctly used in this configuration. `prefix` is meant for the XL variant models, which need a large text input prompt because the output quality of the model without it is bad (this is old code, and I'm referring to very old conversations).
Shouldn't the prefix be added directly in the `tokenizer` itself ? (Like prepending every input ids with EOS regardless ?).
This would seem the most canonical way to handle that IMO.
Regardless of this:
- This is indeed a bug: the user never passed `max_length`, so we shouldn't set it for them, but changing that means changing the `model.config` itself instead, which might also not be great, since it's modifying an object outside of the pipeline's control, which makes things extremely indirect. Culprit line: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L101
- We have to keep the prefix thing unfortunately because of backward compatibility, but it seems pretty bad to use it since it's highly shadowed behavior.
Easy fixes for the example:
- Define `max_new_tokens` in the instantiation instead of the call:
```python
from transformers import pipeline
test_generator = pipeline(
"text-generation",
model="facebook/opt-125m",
do_sample=True,
max_new_tokens=200,
)
response = test_generator(
"Here's how this model responds to a test prompt:",
num_return_sequences=1,
)
print(response[0]["generated_text"])
```
- Deactivate `max_length` manually:
```python
from transformers import pipeline
test_generator = pipeline(
"text-generation",
model="facebook/opt-125m",
do_sample=True,
)
response = test_generator(
"Here's how this model responds to a test prompt:",
num_return_sequences=1,
max_length=None,
max_new_tokens=200
)
print(response[0]["generated_text"])
```
<|||||>@Narsil I see! Actually, OPT's tokenizer already adds the `prefix` (`"</s>"`, token id = 2) at tokenization time.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
print(tokenizer.bos_token_id)
# 2
print(tokenizer(["This is a test"]))
# {'input_ids': [[2, 713, 16, 10, 1296]], 'attention_mask': [[1, 1, 1, 1, 1]]}
```
Looking at the [tokenizer configuration](https://huggingface.co/facebook/opt-125m/blob/main/tokenizer_config.json), we see a `"add_bos_token": true`.
Because it also has a `config.prefix`, does this mean that the pipelines add another `</s>`? I suppose it is harmless and, for pipeline reasons, removing `config.prefix` would be fine. The problem is all other uses of `config.prefix` outside the huggingface universe, which we can't control (and thus we shouldn't touch it).
Could we add an ad hoc pipeline exception, e.g [here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L59)? (OPT models would skip this `if` -> `prefix` is not set -> problem solved)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,357 | closed | List of models/tasks failing ONNX inference | ### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.12
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@lewtun
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue was surfaced in https://github.com/huggingface/transformers/pull/19255 (see also https://github.com/huggingface/transformers/issues/19320)
Export any of the following model/task/framework parameterizations to ONNX:
- [ ] ("deberta-v2", "question-answering", "pt"),
- [ ] ("deberta-v2", "multiple-choice", "pt"),
- [ ] ("roformer", "multiple-choice", "pt"),
- [ ] ("groupvit", "default", "pt"),
- [ ] ("perceiver", "masked-lm", "pt"),
- [ ] ("perceiver", "sequence-classification", "pt"),
- [ ] ("perceiver", "image-classification", "pt"),
- [ ] ("bert", "multiple-choice", "tf"),
- [ ] ("camembert", "multiple-choice", "tf"),
- [ ] ("roberta", "multiple-choice", "tf"),
Errors are currently not detected at export time (although see https://github.com/huggingface/transformers/pull/19255). However, running inference on these exported models with any shape other than what they were validated with will fail.
### Expected behavior
* Errors are raised during ONNX export
* Inference runs as expected | 10-05-2022 16:44:27 | 10-05-2022 16:44:27 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,356 | closed | Removed `Bert` interdependency in `tokenization_electra.py` | # What does this PR do?
Related to #19303
Removes `bert` dependency from `tokenization_electra.py`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 10-05-2022 16:35:23 | 10-05-2022 16:35:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot for working on this one! It's missing a few copied froms, I left suggestions. Once you're done, be sure to run `make style` on your branch (for code-formatting) and we should be good to merge!
`make style` changes a lot of files.<|||||>Make sure you install the specific versions of black that we use with `pip install -e .[quality]`, it's probably because you don't ahve the same version as the one we use. |
transformers | 19,355 | closed | Skip failing test while we resolve the issue. | # What does this PR do?
Skipping some Maskformer tests that are failing. | 10-05-2022 16:17:31 | 10-05-2022 16:17:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19355). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,354 | closed | Fix MaskFormer failing postprocess tests | # What does this PR do?
Ensures post_process_instance_segmentation and post_process_panoptic_segmentation methods return a tensor of shape (target_height, target_width) filled with -1 values if no segment with score > threshold is found.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-05-2022 15:41:42 | 10-05-2022 15:41:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks like there is still a problem with the test. |
transformers | 19,353 | closed | Very different results on inference between mps and cpu for same input | ### System Info
transformers version: 4.22.2
Platform: macOS-12.5-arm64-arm-64bit
Python version: 3.9.13
Huggingface_hub version: 0.10.0
PyTorch version (MPS?): 1.13.0.dev20221005 (True)
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am running the following code:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').to(torch.device('mps'))
inputs = tokenizer('the man who is tall',
return_tensors='pt').to(torch.device('mps'))
print(inputs)
outputs = model(**inputs).logits
print(outputs[0,0,:])
print()
print()
model = GPT2LMHeadModel.from_pretrained('gpt2').to(torch.device('cpu'))
inputs = tokenizer('the man who is tall',
return_tensors='pt').to(torch.device('cpu'))
print(inputs)
outputs = model(**inputs).logits
print(outputs[0,0,:])
```
What I see as output is:
```
{'input_ids': tensor([[1169, 582, 508, 318, 7331]], device='mps:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1]], device='mps:0')}
/opt/homebrew/Caskroom/miniforge/base/envs/mapi/lib/python3.9/site-packages/torch/_tensor_str.py:115: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
nonzero_finite_vals = torch.masked_select(
tensor([-14.9901, -14.6213, -17.5936, ..., -21.4744, -21.1240, -14.7532],
device='mps:0', grad_fn=<SliceBackward0>)
{'input_ids': tensor([[1169, 582, 508, 318, 7331]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])}
tensor([-33.1021, -31.8638, -35.0600, ..., -38.2193, -38.8318, -32.7428],
grad_fn=<SliceBackward0>)
```
### Expected behavior
I expect the output to be the same (or at least much closer). Am I missing something obvious here? | 10-05-2022 15:36:26 | 10-05-2022 15:36:26 | Interesting! @pcuenca could you take a look here? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,352 | closed | bug with inputs_embeds for bert (tensorflow) | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use the following code for using inputs_embeds for TFBertForTokenClassification:
(text embedding is a tensor with shape (1, SEQ_LEN,768))
```
from transformers import BertConfig, TFBertForTokenClassification

bert_conf = BertConfig(num_hidden_layers=2, num_labels=2)
bert_model = TFBertForTokenClassification(bert_conf)
# text_emb is a tensor with shape (1, SEQ_LEN, 768)
res = bert_model(inputs_embeds=text_emb, training=training)
```
but it throws the following exception
```
/lv_local/home/tomergur/convo_search_project/experiments/qpp/supervised/train_ragged_bert_qpp.py:71 call *
res = self.bert(inputs_embeds=text_emb, training=training)
/lv_local/home/tomergur/convo_search_project/csp_venv/lib/python3.8/site-packages/keras/engine/base_layer.py:967 __call__ **
inputs, args, kwargs = self._split_out_first_arg(args, kwargs)
/lv_local/home/tomergur/convo_search_project/csp_venv/lib/python3.8/site-packages/keras/engine/base_layer.py:3011 _split_out_first_arg
raise ValueError(
ValueError: The first argument to `Layer.call` must always be passed.
Process finished with exit code 1
```
while this snippet works:
```
bert_conf = BertConfig(num_hidden_layers=2, num_labels=2)
bert_model = TFBertForTokenClassification(bert_conf)
# text_emb is a tensor with shape (1, SEQ_LEN, 768)
inputs = {'inputs_embeds': text_emb}
res = bert_model(inputs, training=training)
```
So for some reason there is a bug when using keyword arguments.
### Expected behavior
Make it possible to use inputs_embeds with keyword arguments. | 10-05-2022 15:05:32 | 10-05-2022 15:05:32 | Hey @tomergur 👋
Sadly, we have no power to prevent this issue. If you look at the traceback from a script like
```python
from transformers import BertConfig, TFBertForTokenClassification
bert_conf = BertConfig(num_hidden_layers=2, num_labels=2)
bert_model=TFBertForTokenClassification(bert_conf)
bert_model(inputs_embeds=[[0, 1, 2]], training=True)
```
you'll see
```
Traceback (most recent call last):
File "/home/joao/transformers/../joao_scripts/dbg.py", line 4, in <module>
bert_model(inputs_embeds=[[0, 1, 2]], training=True)
File "/home/joao/hf/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/joao/hf/lib/python3.10/site-packages/keras/utils/layer_utils.py", line 812, in split_out_first_arg
raise ValueError(
ValueError: The first argument to `Layer.call` must always be passed.
```
In other words, it crashes on `Keras` code before reaching `transformers` code. Passing `input_ids=None` or wrapping in a dictionary (as you mentioned) are the workarounds for this issue -- `Keras` expects the first input defined in the signature of `call` to always be provided.
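For completeness, a quick sketch of those two workarounds, assuming the `bert_model` built above and an embeddings tensor `text_emb` of shape `(1, SEQ_LEN, 768)`:

```python
# 1) Name the first argument of `call` explicitly, even if it is None
res = bert_model(input_ids=None, inputs_embeds=text_emb, training=True)

# 2) Or wrap everything in a dict, which is then passed as the single first argument
res = bert_model({"inputs_embeds": text_emb}, training=True)
```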
_______________________________
I'm closing this issue, but feel free to reopen if you have other queries 🤗 |
transformers | 19,351 | closed | Make LayoutLM tokenizers independent from BertTokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Decoupling `LayoutLMTokenizer` and `LayoutLMTokenizerFast` from `BertTokenizer`.
Since only a few class constants change between Bert and LayoutLM, there's a copy flag for every single method in the class.
I wonder whether having prefixes for the class constants could help reduce the amount of code, for instance:
`VOCAB_FILES_NAMES` -> `BERT_VOCAB_FILES_NAMES`
so that we could simply use: `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast with Bert->LayoutLM` on the entire tokenizer class though I suspect there are good reasons not to do that.
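For context, a per-method copy flag looks roughly like this (an illustrative sketch, not this PR's exact code):

```python
from transformers import PreTrainedTokenizer

VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"}  # LayoutLM-specific class constant


class LayoutLMTokenizer(PreTrainedTokenizer):
    vocab_files_names = VOCAB_FILES_NAMES

    # Copied from transformers.models.bert.tokenization_bert.BertTokenizer.get_vocab
    def get_vocab(self):
        return dict(self.vocab, **self.added_tokens_encoder)
```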
Related to #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-05-2022 14:37:57 | 10-05-2022 14:37:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger The PR is ready for review :) |
transformers | 19,350 | closed | Adding type hints for TF TransfoXL | Based on Issue #16059
As the title suggests, this PR adds type hints to the TF TransfoXL model classes. | 10-05-2022 13:52:12 | 10-05-2022 13:52:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19350). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,349 | closed | Remove `Roberta` Interdependency from `tokenization_luke` | # What does this PR do?
Related to #19303
Removes `Roberta` dependency from `tokenization_luke.py`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-05-2022 13:41:54 | 10-05-2022 13:41:54 | > Thanks a lot for working on this one! Looking good for the properties added but the main init should retain the specificities of the Luke tokenizer, otherwise it will break it :-)
Hi,thanks for the super fast review. Will fix asap.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19349). All of your documentation changes will be reflected on that endpoint.<|||||>> I'm sorry if I was unclear in my previous comments. You still need to add the init of Roberta inside the init of Luke. It's just that you should not remove the extra code Luke adds in its init :-)
>
> Also be careful with the docstring changes, they should be reverted.
working on it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,348 | closed | Adding type hints for TFXLnet #19344 | null | 10-05-2022 12:59:13 | 10-05-2022 12:59:13 | |
transformers | 19,347 | closed | Making `ConvBert Tokenizer` independent from `bert Tokenizer` | # What does this PR do?
Fixes #19303
Added `BertTokenizer` class in tokenization_convbert.py and `BertTokenizerFast` in tokenization_convbert_fast.py
| 10-05-2022 12:56:18 | 10-05-2022 12:56:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Done! @sgugger do i need to do same for convbert_fast?<|||||>@sgugger Thanks for quick feeback,
For tokenization_convbert.py i have change the comment to `# Copied from transformers.models.bert.tokenization_bert.BertTokenizer with ConvBertTokenizer->BertTokenizer`
and for tokenization_convbert_fast.py i have change the comment to `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast with ConvBertTokenizerFast->ConvBertTokenizer
`
for convbert_fast `make repo-consistency` gives error : - src/transformers\models\convbert\tokenization_convbert_fast.py: copy does not match models.bert.tokenization_bert_fast.BertTokenizerFast at line 55<|||||>You'll need to add broader patterns than just the full name of the tokenizer as BERT is used in the docstrings for instance. To see what the copy utils wants to modify, you can run `make fix-copies` locally :-)<|||||>After running `make fix-copies` it changes `slow_tokenizer_class = BertTokenizer` but it should be `ConvBertTokenizer` in tokenization_convbert_fast.py<|||||>Yes, that's why I made the suggestions above.<|||||>So, do i need to copy `BertTokenizer` and `BertTokenizerFast` class to tokenization_convbert_fast.py as after adding those classes it passes all tests<|||||>Just accept the suggestions above.<|||||>Done @sgugger are there any more changes? |
transformers | 19,346 | closed | Frees LongformerTokenizer of the Roberta dependency | # What does this PR do?
@sgugger ,
Per the issue #19303, the Roberta tokenizer dependency has been removed from `LongformerTokenizer`.
Thanks for reviewing the PR :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 10-05-2022 12:41:58 | 10-05-2022 12:41:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19346). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,344 | closed | Adding type hints for TFXLnet | As the title suggests, this PR adds type hints to the TFXLnet model classes. | 10-05-2022 12:20:47 | 10-05-2022 12:20:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Based on Issue #16059<|||||>Looks good to me, thanks! |
transformers | 19,343 | closed | Removes Roberta and Bert config dependencies from Longformer | # What does this PR do?
@sgugger ,
Per the issue #19303, the Roberta and Bert config dependencies are removed from `LongformerConfig` and it now directly inherits from `PretrainedConfig`.
- `LongformerConfig` depends on `RobertaConfig` and `RobertaConfig` in turn inherits from `BertConfig`.
- So I've copied over the defaults (`pad_token_id`, `bos_token_id`, `eos_token_id`) from RobertaConfig and the rest of the defaults from BertConfig that are not conflicting with roberta.
- The docstrings from BertConfig are copied over as well.
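A hedged sketch of what the decoupled config then looks like; the default values below are assumptions based on the usual Roberta/Bert defaults mentioned in the list above, not necessarily the exact ones in this PR:

```python
from transformers import PretrainedConfig


class LongformerConfig(PretrainedConfig):
    model_type = "longformer"

    def __init__(
        self,
        vocab_size=30522,   # Bert-style default
        hidden_size=768,
        pad_token_id=1,     # Roberta-style special-token defaults
        bos_token_id=0,
        eos_token_id=2,
        **kwargs,
    ):
        super().__init__(
            pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs
        )
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
```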
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 10-05-2022 12:11:49 | 10-05-2022 12:11:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Done! Thanks for bearing with me @sgugger :) |
transformers | 19,341 | closed | 🚨 🚨 🚨 Fix ViT parameter initialization | # What does this PR do?
This PR aims to rectify the discrepancy between the training performances of HF and Timm ViT implementations.
- Initializes torch and flax ViT dense layer weights with trunc_normal instead of normal (consistent with the TF implementation); a short sketch of this change is shown below.
- Initializes cls_token and positional_embeddings with trunc_normal
- Updates DeiT copy
Partially fixes # ([19305](https://github.com/huggingface/transformers/issues/19305))
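A minimal sketch of the initializer change described in the list above (illustrative only; the exact module handling in the PR may differ):

```python
import torch.nn as nn


def _init_vit_weights(module, initializer_range=0.02):
    if isinstance(module, nn.Linear):
        # truncated normal instead of plain normal, matching the timm/TF initialisation
        nn.init.trunc_normal_(module.weight, std=initializer_range)
        if module.bias is not None:
            nn.init.zeros_(module.bias)
    elif isinstance(module, nn.LayerNorm):
        nn.init.zeros_(module.bias)
        nn.init.ones_(module.weight)
    # cls_token and position embeddings get the same trunc_normal_ treatment per the list above
```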
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
This issue was brought up [here](https://github.com/huggingface/transformers/issues/19305).
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 10-05-2022 11:41:21 | 10-05-2022 11:41:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR! For such PRs, please ping both @sgugger and @amyeroberts who are better suited to do a review than I am. Thank you!<|||||>@alaradirik I believe setting eps in layernorm to 1e-6 rather than 1e-12 is also important as mentioned in https://github.com/huggingface/transformers/issues/19305 by @rwightman<|||||>Yes, @alaradirik, but I think for newer users, or anyone new to ViT or just setting up ViT to train, it would be much better if the default was set to 1e-6 rather than 1e-12 so that they don't have to relook for bugs and most of the time the eps value will not be the first thing they look for, or worst case last thing.<|||||>Also, shouldn't Kaiming initialization be used for the nn.Conv2d rather than .normal_() initialization in the class ViTPreTrainedModel or any class that directly inherits from PretrainedModel? And the biases of the nn.Conv2d in ViT should be initialized the same way as PyTorch? (https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv2d) @alaradirik @LysandreJik @NielsRogge @sgugger |
transformers | 19,340 | closed | Roberta Gradient checkpointing to only layers, which requires grad | ### Feature request
Current gradient checkpointing in Roberta applies checkpointing to all layers without checking whether they require grad. In our use case we only train the last 3 layers of Roberta and want to use gradient checkpointing only for those last 3 layers.
### Motivation
Adding this would further decrease the memory usage on GPU.
### Your contribution
https://github.com/huggingface/transformers/blob/6268694e27f1fc0192ba24e4bec181061b4a9bf8/src/transformers/models/roberta/modeling_roberta.py#L497
at this line we can add another condition, then it would look like this:
`if self.gradient_checkpointing and self.training and layer_module.requires_grad`
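A self-contained toy sketch of the idea (my assumption: since `nn.Module` has no `requires_grad` attribute, the check is expressed over the layer's parameters; `use_reentrant=False` needs a recent PyTorch):

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(6)])
for layer in layers[:-3]:  # freeze everything except the last 3 layers
    for p in layer.parameters():
        p.requires_grad_(False)

hidden_states = torch.randn(2, 16)
gradient_checkpointing, training = True, True

for layer in layers:
    layer_needs_grad = any(p.requires_grad for p in layer.parameters())
    if gradient_checkpointing and training and layer_needs_grad:
        # only pay the recomputation cost for layers that will actually receive gradients
        hidden_states = checkpoint(layer, hidden_states, use_reentrant=False)
    else:
        hidden_states = layer(hidden_states)

hidden_states.sum().backward()
print(layers[-1].weight.grad is not None)  # True: trainable layers still get gradients
```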
| 10-05-2022 11:36:01 | 10-05-2022 11:36:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,339 | closed | TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend. | ### System Info
transformers version: 4.22.2
checkpoint: microsoft/deberta-v3-small
task: ner
error: `TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend.`

### Who can help?
@sgugger
`The following columns in the training set don't have a corresponding argument in `DebertaV2ForTokenClassification.forward` and have been ignored: ner_tags, tokens. If ner_tags, tokens are not expected by `DebertaV2ForTokenClassification.forward`, you can safely ignore this message.
***** Running training *****
Num examples = 43943
Num Epochs = 5
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 13735
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [44], in <cell line: 1>()
----> 1 trainer.train()
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:1521, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1516 self.model_wrapped = self.model
1518 inner_training_loop = find_executable_batch_size(
1519 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1520 )
-> 1521 return inner_training_loop(
1522 args=args,
1523 resume_from_checkpoint=resume_from_checkpoint,
1524 trial=trial,
1525 ignore_keys_for_eval=ignore_keys_for_eval,
1526 )
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:1763, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1761 tr_loss_step = self.training_step(model, inputs)
1762 else:
-> 1763 tr_loss_step = self.training_step(model, inputs)
1765 if (
1766 args.logging_nan_inf_filter
1767 and not is_torch_tpu_available()
1768 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1769 ):
1770 # if loss is nan or inf simply add the average of previous logged losses
1771 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:2499, in Trainer.training_step(self, model, inputs)
2496 return loss_mb.reduce_mean().detach().to(self.args.device)
2498 with self.compute_loss_context_manager():
-> 2499 loss = self.compute_loss(model, inputs)
2501 if self.args.n_gpu > 1:
2502 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:2531, in Trainer.compute_loss(self, model, inputs, return_outputs)
2529 else:
2530 labels = None
-> 2531 outputs = model(**inputs)
2532 # Save past state if it exists
2533 # TODO: this needs to be fixed and made cleaner later.
2534 if self.args.past_index >= 0:
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:1444, in DebertaV2ForTokenClassification.forward(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1438 r"""
1439 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1440 Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
1441 """
1442 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1444 outputs = self.deberta(
1445 input_ids,
1446 attention_mask=attention_mask,
1447 token_type_ids=token_type_ids,
1448 position_ids=position_ids,
1449 inputs_embeds=inputs_embeds,
1450 output_attentions=output_attentions,
1451 output_hidden_states=output_hidden_states,
1452 return_dict=return_dict,
1453 )
1455 sequence_output = outputs[0]
1457 sequence_output = self.dropout(sequence_output)
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:1101, in DebertaV2Model.forward(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
1091 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
1093 embedding_output = self.embeddings(
1094 input_ids=input_ids,
1095 token_type_ids=token_type_ids,
(...)
1098 inputs_embeds=inputs_embeds,
1099 )
-> 1101 encoder_outputs = self.encoder(
1102 embedding_output,
1103 attention_mask,
1104 output_hidden_states=True,
1105 output_attentions=output_attentions,
1106 return_dict=return_dict,
1107 )
1108 encoded_layers = encoder_outputs[1]
1110 if self.z_steps > 1:
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:542, in DebertaV2Encoder.forward(self, hidden_states, attention_mask, output_hidden_states, output_attentions, query_states, relative_pos, return_dict)
533 output_states = torch.utils.checkpoint.checkpoint(
534 create_custom_forward(layer_module),
535 next_kv,
(...)
539 rel_embeddings,
540 )
541 else:
--> 542 output_states = layer_module(
543 next_kv,
544 attention_mask,
545 query_states=query_states,
546 relative_pos=relative_pos,
547 rel_embeddings=rel_embeddings,
548 output_attentions=output_attentions,
549 )
551 if output_attentions:
552 output_states, att_m = output_states
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:386, in DebertaV2Layer.forward(self, hidden_states, attention_mask, query_states, relative_pos, rel_embeddings, output_attentions)
377 def forward(
378 self,
379 hidden_states,
(...)
384 output_attentions=False,
385 ):
--> 386 attention_output = self.attention(
387 hidden_states,
388 attention_mask,
389 output_attentions=output_attentions,
390 query_states=query_states,
391 relative_pos=relative_pos,
392 rel_embeddings=rel_embeddings,
393 )
394 if output_attentions:
395 attention_output, att_matrix = attention_output
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:317, in DebertaV2Attention.forward(self, hidden_states, attention_mask, output_attentions, query_states, relative_pos, rel_embeddings)
308 def forward(
309 self,
310 hidden_states,
(...)
315 rel_embeddings=None,
316 ):
--> 317 self_output = self.self(
318 hidden_states,
319 attention_mask,
320 output_attentions,
321 query_states=query_states,
322 relative_pos=relative_pos,
323 rel_embeddings=rel_embeddings,
324 )
325 if output_attentions:
326 self_output, att_matrix = self_output
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:750, in DisentangledSelfAttention.forward(self, hidden_states, attention_mask, output_attentions, query_states, relative_pos, rel_embeddings)
748 if self.relative_attention:
749 rel_embeddings = self.pos_dropout(rel_embeddings)
--> 750 rel_att = self.disentangled_attention_bias(
751 query_layer, key_layer, relative_pos, rel_embeddings, scale_factor
752 )
754 if rel_att is not None:
755 attention_scores = attention_scores + rel_att
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:845, in DisentangledSelfAttention.disentangled_attention_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
842 else:
843 r_pos = relative_pos
--> 845 p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)
846 p2c_att = torch.bmm(key_layer, pos_query_layer.transpose(-1, -2))
847 p2c_att = torch.gather(
848 p2c_att,
849 dim=-1,
850 index=p2c_pos.squeeze(0).expand([query_layer.size(0), key_layer.size(-2), key_layer.size(-2)]),
851 ).transpose(-1, -2)
TypeError: Operation 'neg_out_mps()' does not support i`
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
    # load_best_model_at_end=True,
    # metric_for_best_model="eval_f1",
    overwrite_output_dir=True,
    gradient_accumulation_steps=1,
    gradient_checkpointing=False,
    use_mps_device=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_test,
    tokenizer=tokenizer,
    data_collator=data_collator,
    # compute_metrics=compute_metrics1,
)
```
### Expected behavior
to be able to train using Apple Silicon Chip | 10-05-2022 11:22:52 | 10-05-2022 11:22:52 | As far as I can tell, this an issue in PyTorch not supporting an operation on MPS, so you should probably file your issue there.<|||||>I have overcome all the difficulties I have encountered so far with your solution suggestions about transformers. :) <|||||>Has this issue been resolved?
I'm running into the same problem.
@forrestfaraday, can you share your solution with me? |
transformers | 19,338 | closed | Attempting to enable chunking for CTC (might not be viable). | Co-Authored-By: Sam Waterbury <[email protected]>
# What does this PR do?
Attempts to recreate a different version of https://github.com/huggingface/transformers/pull/18949
I am under the impression that the overall approach is doomed not to work because of how MCTC models operate.
However, this PR also enables some nice-to-have features.
This attempts to create the conditions for MCTC models (like https://huggingface.co/speechbrain/m-ctc-t-large)
to work with the `chunk_length_s` argument.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-05-2022 11:13:31 | 10-05-2022 11:13:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19338). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,337 | closed | Making `Camembert` independent from `Roberta`, clean | This pull request is a clean version of #19312
# What does this PR do?
related to #19303
Making the Camembert model (pytorch version) independent from Roberta
I have changed all the CamemBERT classes by copy-pasting from RoBERTa, and made the few changes necessary for everything to work.
I'm still wondering how to change blocks like the one below; do I have to add a specific checkpoint for CamemBERT?
```python
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
checkpoint="deepset/roberta-base-squad2",
output_type=QuestionAnsweringModelOutput,
config_class=_CONFIG_FOR_DOC,
expected_output="' puppet'",
expected_loss=0.86,
)
```
For testing, the following test command works well:
```bash
$ RUN_SLOW=1 pytest tests/models/camembert/test_modeling_camembert.py
```
However, I have noticed the tests are only for the `CamembertModel` but not other classes like `CamembertForCausalLM` ...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger related to #19303 | 10-05-2022 10:33:10 | 10-05-2022 10:33:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,336 | closed | Change `BloomConfig` docstring | # What does this PR do?
This PR addresses small changes on the `BloomConfig` docstring that might be slightly confusing.
Original discussion from: https://huggingface.co/bigscience/bloom/discussions/120
Thanks!
cc @sgugger @VictorSanh @SaulLu | 10-05-2022 09:00:01 | 10-05-2022 09:00:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19336). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,335 | closed | [tokenizers] Cache all-special-ids | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-05-2022 08:49:29 | 10-05-2022 08:49:29 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19335). All of your documentation changes will be reflected on that endpoint.<|||||>@SaulLu sorry for the delay, here's my follow-up to https://github.com/huggingface/transformers/pull/19018 where I fixed the test failures :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,333 | closed | Added Type hints for XLM TF | Based on Issue #16059
I have added type hints for the TensorFlow XLM model.
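As an illustration of the style of annotations added (this is a standalone sketch, not the actual XLM diff):

```python
from typing import Optional, Tuple, Union

import numpy as np
import tensorflow as tf


# Standalone sketch: the kind of type hints such a PR adds to a TF model's `call`
# signature. The tiny model below is made up purely for illustration.
class TinyTFModel(tf.keras.Model):
    def call(
        self,
        input_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
        training: Optional[bool] = False,
    ) -> Tuple[tf.Tensor, ...]:
        hidden_states = tf.cast(input_ids, tf.float32)
        if attention_mask is not None:
            hidden_states = hidden_states * tf.cast(attention_mask, tf.float32)
        return (hidden_states,)


model = TinyTFModel()
print(model(tf.constant([[1, 2, 3]]), tf.constant([[1, 1, 0]])))
```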
@Rocketknight1 Could you kindly check if this is fine?
| 10-05-2022 07:52:21 | 10-05-2022 07:52:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Thanks for the feedback, I have updated the file. |
transformers | 19,332 | closed | Remove bert interdependency from clip tokenizer | # What does this PR do?
Part of https://github.com/huggingface/transformers/issues/19303
Removing `transformers.models.bert.tokenization_bert.BasicTokenizer` from `transformers.models.clip.tokenization_clip`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Tagging @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
The style + quality checks seemed to run smoothly, but let me know if I missed something!
| 10-05-2022 04:06:00 | 10-05-2022 04:06:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,331 | closed | Removed interdependency of BERT's Tokenizer in tokenization of prophetnet | # What does this PR do?
Removes BERT dependency from the ProphetNet tokenizer file.
Fixes a part of [#19303](https://github.com/huggingface/transformers/issues/19303)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 10-05-2022 02:41:00 | 10-05-2022 02:41:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,330 | closed | [WIP]remove XLMTokenizer inheritance from FlaubertTokenizer | # What does this PR do?
Related to #19303
Removes `XLMTokenizer` inheritance from `FlaubertTokenizer`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
# Who can review?
@sgugger
| 10-05-2022 01:22:25 | 10-05-2022 01:22:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,329 | closed | High CER when Fine Tuning TrOCR Transformers to make an OCR for arabic Language | Hello Everyone ,
I need help/guidance on creating an Arabic OCR using Transformers. I'm using ViT as the encoder and AraBERT as the decoder, with their pretrained weights, like this:
```python
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k", "aubmindlab/bert-base-arabertv02"
)
```
I followed this [Tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb) by [NielsRogge](https://github.com/NielsRogge) (thank you, Niels). My issue is that after training I get poor results: the character error rate is higher than 70%.
I've tried different encoders, different decoders, different numbers of epochs, and different learning rates, but I don't know what I'm missing to make it work.
Step | Training Loss | Validation Loss | Cer
-- | -- | -- | --
200 | 4.380500 | 4.674773 | 0.693091
400 | 4.213600 | 4.367142 | 0.777522
600 | 4.257300 | 4.247403 | 0.756554
800 | 3.277700 | 4.185711 | 0.712005
1000 | 2.831100 | 4.275863 | 0.748850
1200 | 2.250200 | 4.342288 | 0.788509
1400 | 2.237800 | 4.589494 | 0.768880
| 10-05-2022 00:40:08 | 10-05-2022 00:40:08 | cc @NielsRogge <|||||>Hi,
I would first try to overfit a single batch, as explained in [Karpathy's blog post](http://karpathy.github.io/2019/04/25/recipe/). This makes sure everything is set up properly.
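A rough sketch of that single-batch sanity check is below; the checkpoint name and the dummy batch are placeholders, so substitute your own encoder-decoder model and one real batch from your dataset:

```python
import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Placeholder setup: swap in your own ViT + AraBERT model and one real batch.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

pixel_values = torch.randn(1, 3, 384, 384)  # stand-in for one real image batch
labels = processor.tokenizer("example text", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(100):
    loss = model(pixel_values=pixel_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 20 == 0:
        print(step, loss.item())  # should approach ~0 if the setup is correct
```

If the loss does not go to near zero on a single batch, the problem is in the setup (tokenizer, special tokens, labels) rather than in the amount of training.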
Several folks have shown to successfully fine-tune TrOCR from pre-trained encoder + decoder checkpoints, e.g. for Japanese: https://huggingface.co/spaces/Detomo/Japanese-OCR (or this repo: https://github.com/kha-white/manga-ocr).
See also this thread for more info: https://github.com/microsoft/unilm/issues/627.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same problem with the Bengali language as the cer is around 87%-89%. @DjouadaFarouk did your issue got solved? |
transformers | 19,328 | closed | Code refactor. | # What does this PR do?
Fixes # (issue)
Replacing `elif` with `if` for readability, cleaner code, and PEP principles.
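For context on why this kind of swap needs care, here is a small hypothetical example (not taken from the PR diff) where replacing `elif` with `if` changes behavior:

```python
# Hypothetical illustration: with `elif`, at most one branch fires per input;
# with two independent `if`s, several branches can fire, so it is not a pure refactor.
def classify_elif(x):
    result = []
    if x > 10:
        result.append("large")
    elif x > 5:
        result.append("medium")
    return result


def classify_if(x):
    result = []
    if x > 10:
        result.append("large")
    if x > 5:
        result.append("medium")  # now also runs when x > 10
    return result


print(classify_elif(12))  # ['large']
print(classify_if(12))    # ['large', 'medium']
```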
## Before submitting
- [ x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@patrickvonplaten
@sgugger
| 10-04-2022 21:01:10 | 10-04-2022 21:01:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19328). All of your documentation changes will be reflected on that endpoint.<|||||>Huh? Changing the `elif` to `if` changes what the code does.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,327 | closed | Remove interdependency from OpenAI tokenizer | # What does this PR do?
Removes BERT dependency from the OpenAI tokenizer file.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Resolves OpenAI task in this issue](https://github.com/huggingface/transformers/issues/19303)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Pinging @sgugger as requested :)
Black and `make fix-copies` both seem happy with the changes, but I also saw few other issues coming up from those in other places in the repo I didn't touch, so might have not configured something the right way. Lemme know if it looks ok! | 10-04-2022 20:46:29 | 10-04-2022 20:46:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,326 | closed | removing XLMConfig inheritance from FlaubertConfig | # What does this PR do?
related to #19303
Removes `XLMConfig` dependency from `FlaubertConfig`
the `__init__` from `FlaubertConfig` differs from `XLMConfig` in the following ways:
- `pre_norm` and `layerdrop` are specific to `FlaubertConfig`. So, I have not added and `#Copied from ...`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-04-2022 20:32:35 | 10-04-2022 20:32:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you just put the PR as ready for review and remove WIP from the title? I can't merge draft PRs :-)<|||||>> Can you just put the PR as ready for review and remove WIP from the title? I can't merge draft PRs :-)
Done. Thanks:)<|||||>Thanks for your contribution! |
transformers | 19,325 | closed | Use a dynamic configuration for circleCI tests | # What does this PR do?
This PR entirely rewrites the circle CI setup for the tests run at each commit, to use a configuration generated on the fly depending on which tests should be run. It also offers a Python wrapper around the circleCI API via the new util `create_circleci_config.py`.
This way, when a commit does no modification in the source code/tests/examples, only the quality jobs and the test fetcher are run (see the job below this PR). When a modification only touches the examples, the quality jobs and the example tests are run (see this [job](https://app.circleci.com/pipelines/github/huggingface/transformers/48462)) and when a modification touches some code, all tests jobs and example tests are run (tests jobs running on impacted tests only) as seen in this [report](https://app.circleci.com/pipelines/github/huggingface/transformers/48461).
The generated config is stored as an artifact in the `fetch_tests` job (in txt format because yml would not be rendered). | 10-04-2022 19:45:48 | 10-04-2022 19:45:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Feel free to merge whenever ready |
transformers | 19,324 | closed | Better documentation for pipelines | ### Feature request
The [introduction to pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) documentation does not provide any details on how additional parameters can be passed to the tokenizer during the preprocessing step. After walking through all of the source code, I can see that when instantiating a pipeline via `transformers.pipeline(...)` one can simply pass these arguments in as keyword arguments, but this is not documented anywhere. It is also not included in any examples.
This request is to have the documentation updated so future users don't need to read the source code. This update should expand beyond tokenizing (as it also handles post_processing, etc...).
### Motivation
It's very often the case that a tokenizer is not called with the default arguments: padding, max length, etc... are often changed. The implementation for pipelines actually makes setting these arguments very simple, but it is not communicated so it is difficult to take advantage of.
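For concreteness, here is a minimal sketch of the currently undocumented behavior; the checkpoint name is just an example, and exactly which keyword arguments are forwarded depends on the pipeline:

```python
from transformers import pipeline

# Minimal sketch: call-time keyword arguments such as truncation/padding/max_length
# are forwarded to the underlying tokenizer by several pipelines.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
very_long_review = "This movie was great. " * 500
print(classifier(very_long_review, truncation=True, max_length=128, padding=True))
```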
### Your contribution
I can contribute to the documentation if needed. | 10-04-2022 18:07:26 | 10-04-2022 18:07:26 | could I get assigned to this?<|||||>ping @Narsil @stevhliu @sgugger <|||||>@Narsil @stevhliu @sgugger could I get assigned to this?<|||||>@rhelmeczi @DIvkov575
Thanks for this proposal, it would be a delight to give the documentation some love here!
`pipeline` is a bit of a magical object with MANY parameters; making them understandable/accessible is definitely a challenge (worth it though!).
Focusing on *some* parameters might be the most important approach (I don't know, I'm just giving ideas).
Adding an example in each pipeline docstring (e.g. https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ImageClassificationPipeline) could also be a good thing.
We could maybe also make it mandatory somehow, so that when new pipelines come in we're sure we're showing how to use them.
We also have Tasks: https://huggingface.co/tasks, which could be used/reused somehow. @merveenoyan
<|||||>Thanks for the feedback and feel free to take this on if you're interested!
If you have any questions about writing the docs, take a look at this guide [here](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation) :)<|||||>@Narsil Your suggestions are very helpful.
Adding separate documentation for each pipeline makes sense. For example, in the `TextClassificationPipeline` the keyword arguments are both keyword arguments for the tokenizer's call function, and keyword arguments for the `postprocess` function. I think even a brief statement along the lines of (but not necessarily identical to):
> keyword arguments passed to `TextClassificationPipeline.tokenizer.__call__` and `TextClassificationPipeline.postprocess`
where the function names are clickable would be extremely helpful. Simply pointing to the recipient functions also makes this a beginner friendly task. I'm assuming of course that for each pipeline, the keyword arguments are only ever passed along to other functions.
Getting caught up on the documentation should probably be done over several commits: adding one commit at a time for each of the specific pipelines will be much easier to review, that's just my two cents though.
@DIvkov575 Keeping in mind that I'm not a maintainer of this repository, and therefore keeping in mind that my above suggestions are not necessarily ones that will be accepted, you can feel free to add documentation if you feel up to it.<|||||>> Getting caught up on the documentation should probably be done over several commits: adding one commit at a time for each of the specific pipelines will be much easier to review, that's just my two cents though.
I'll go even further, 1 PR per change is the easiest course of action ! Makes it easier to review, and if for some reason one change is more debated is doesn't prevent the other from going in.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,323 | closed | Add Switch transformers | # What does this PR do?
This PR attempts to add Switch Transformers from t5x with @ArthurZucker & @thomwolf
The architecture seems to be similar to the T5 architecture (the modeling code is copied from T5), where the FF layer is slightly modified, introducing the first Mixture of Experts (MoE) architecture inside the `transformers` library.
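For readers unfamiliar with the FF change, here is a schematic PyTorch sketch of Switch-style top-1 routing; it is illustrative only and not the code added in this PR (all names and sizes are made up):

```python
import torch
import torch.nn as nn


# Schematic Switch-style feed-forward block: a router picks a single expert (top-1)
# per token, which is the main change relative to the dense T5 FF layer.
class SwitchFeedForwardSketch(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, hidden_states):
        tokens = hidden_states.reshape(-1, hidden_states.size(-1))
        router_probs = torch.softmax(self.router(tokens), dim=-1)
        expert_index = router_probs.argmax(dim=-1)            # top-1 routing
        gate = router_probs.gather(-1, expert_index[:, None])
        output = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_index == i
            if mask.any():
                output[mask] = expert(tokens[mask])
        return (gate * output).reshape(hidden_states.shape)


layer = SwitchFeedForwardSketch()
print(layer(torch.randn(2, 8, 512)).shape)  # torch.Size([2, 8, 512])
```

The real implementation also needs capacity limits and the auxiliary load-balancing loss described in the paper, among other things.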
paper: https://arxiv.org/abs/2101.03961
weights: https://github.com/google-research/t5x/blob/eb42c2524bf65c8a46624f1a9b9e034d9bc65b14/docs/models.md#converted-mesh-tensorflow-checkpoints
original modeling code: https://github.com/google/flaxformer/tree/b725bd2a51d70e866d819c92de166fbf24425e6a/flaxformer/architectures/moe
# TODOs:
- [x] Make the forward pass run
- [x] Convert the weights in Pytorch format and upload them on the Hub
- [x] Match the logits between the original implementation and ours | 10-04-2022 16:59:56 | 10-04-2022 16:59:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot @sgugger for your comments!
Would love to have another round of review as I added some modification for `accelerate` and `bnb` compatibility 🙏 <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.<|||||>Failing tests seems to be unrelated to this PR, merging! |
transformers | 19,322 | closed | LongT5ForConditionalGeneration NAN losses with bf16 | ### System Info
transformers version: 4.23.0.dev0
torch version: 1.12.1
OS: Ubuntu 20
Cuda: 11.6
The problem is that LongT5 is supposed to work with bf16=True, but it doesn't. It is known that fp16 fails here, and I have tried it and it indeed fails. However, LongT5 is supposed to be trained in bf16, so it would be expected that setting bf16 to True works.
My training arguments look like this:
```python
{
"evaluation_strategy": "epoch",
"num_train_epochs": 4,
"do_train": True,
"do_eval": False,
"eval_steps": 2,
"logging_strategy":"epoch",
"save_strategy": "epoch",
"save_total_limit": 4,
"seed": 69,
"bf16": True,
"dataloader_num_workers": 32,
"adam_epsilon": 1e-8,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"group_by_length": False,
"gradient_checkpointing": False,
"lr_scheduler_type": "linear",
"learning_rate": 1e-4,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
"gradient_accumulation_steps": 64,
"warmup_ratio": 0.08
}
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If you need a script for reproduction please let me know.
### Expected behavior
Longt5 (as I understand from the forum etc) should work with bf16.
| 10-04-2022 16:54:20 | 10-04-2022 16:54:20 | Agree that it should work with `bf16` - when you mean it fails, how does it fail? Just bad training results ? Could you define this here?
Also cc @ArthurZucker here<|||||>Sorry I did not specify this, you're right! What I meant is that losses are nans, therefore the model does not learn (it happens the same as when using fp16 with a bf16-pretrained model).<|||||>Okey cc @ArthurZucker and @stancld here<|||||>Thanks! If there is anything I can do to help please let me know, I need this kind of urgently so I'm very much interested in it working properly :) <|||||>Any update?<|||||>Hey! Sorry I will have a look tomorrow! 🤗<|||||>Okay! :) Thanks!<|||||>Could you give me a training script and the full error stack so that I can work on your issue? 🤗 Sorry for the delay <|||||>Yeah, I'll post it here as soon as I can so that you can reproduce it :) thank you very much!!<|||||>First run:
```bash
pip install transformers datasets rouge_score
```
Then, with this script you can replicate it. It is set with bf16=True.
```python
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments, AutoTokenizer, DataCollatorForSeq2Seq
from datasets import load_dataset, load_metric
import nltk
import numpy as np
nltk.download('punkt')
model_str = "google/long-t5-tglobal-base"
metric = load_metric("rouge")
tokenizer = AutoTokenizer.from_pretrained(model_str)
dataset = load_dataset("IIC/sqac_tests")
max_input_length = 128
max_target_length = 16
def preprocess_function(examples):
inputs = [doc for doc in examples["context"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples["title"], max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_str)
batch_size = 1
args = Seq2SeqTrainingArguments(
"fail_test",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=20,
predict_with_generate=True,
bf16=True, # change to fp16 in no Ampere GPU available.
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
result["gen_len"] = np.mean(prediction_lens)
return {k: round(v, 4) for k, v in result.items()}
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
```
With this you can observe that both training and validation losses are Nans... Hope it helps to figure out what happens, if I can provide any more help please let me know :) @ArthurZucker <|||||>Did you have the time to try the script? @ArthurZucker @stancld <|||||>Hi, any updates? @ArthurZucker @stancld @patrickvonplaten I did not receive any answer after sending the reproduction script....<|||||>Hey sorry not yet ! <|||||>Okay thanks! Let me know if I can help in some way... :) @ArthurZucker <|||||>Hey! Just tested your script and both losses are not Nan. Since you seem to be using a dev version, where you on main?
It seems that with most recent versions it works perfectly well.
```python
{'eval_loss': 2.1923375129699707, 'eval_rouge1': 26.7045, 'eval_rouge2': 12.0, 'eval_rougeL': 25.4545, 'eval_rougeLsum': 25.8144, 'eval_gen_len': 19.0, 'eval_runtime': 5.0999, 'eval_samples_per_second': 1.961, 'eval_steps_per_second': 1.961, 'epoch': 20.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [02:37<00:00, 3.10it/s]
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 157.0429, 'train_samples_per_second': 1.274, 'train_steps_per_second': 1.274, 'train_loss': 3.485249328613281, 'epoch': 20.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [02:37<00:00, 1.27it/s]
```
I used both `transformers==4.22` and `4.23`. <|||||>Thank you so much for trying it out! Let me try it again with latest version (4.25.0.dev) to check if it is working now also in my machine. I was trying to 4.23.0.dev, so maybe if it doesn't work also in 4.25.0.dev I'll have to turn to 4.22 or 4.23 :) <|||||>Great, with the last version it does work!! Thank you very much for helping me ! @ArthurZucker |
transformers | 19,321 | closed | Call _set_save_spec() when creating TF models | Much like confused ducklings, subclassed Keras models tend to imprint on the first concrete input shapes they see unless we explicitly `build()` them with more general shapes. However, the default `tf.keras.Model.build()` makes several restrictive assumptions and doesn't work for us.
The solution is to directly call `model._set_save_spec()` with the shapes we want. We do this in the `__init__` to make sure that it happens before the model is built or called with any inputs. We can also now remove the override on `model.save()`, which is no longer necessary now that we're fixing this properly.
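For readers who want to see the general idea, here is a small public-API analogue; the PR itself relies on the private `_set_save_spec()` hook, so this is only an illustration of declaring dynamic shapes up front rather than letting the first concrete call fix them:

```python
import tensorflow as tf


# Illustrative only: declare dynamic batch/sequence dimensions in the exported
# signature so saving does not imprint on the first concrete input shape seen.
class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, input_ids):
        return self.dense(tf.cast(input_ids, tf.float32))

    @tf.function(
        input_signature=[tf.TensorSpec(shape=(None, None), dtype=tf.int32, name="input_ids")]
    )
    def serving(self, input_ids):
        return self.call(input_ids)


model = TinyModel()
model(tf.constant([[1, 2, 3]]))  # builds the weights with one concrete shape
tf.saved_model.save(model, "tiny_saved_model", signatures=model.serving)
```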
Fixes #19231 | 10-04-2022 16:35:47 | 10-04-2022 16:35:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,320 | closed | ONNX conversion of deberta_v2 models | ### System Info
transformers: 4.22.2
platform: Ubuntu 20.04.2
python: 3.8.10
### Who can help?
DeBERTa-v2: @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following command fails:
python -m transformers.onnx --model=microsoft/deberta-v3-small onnx/
The model can be microsoft/{mdeberta-v3-base, deberta-v3-small, deberta-v3-base, etc.}.
The main problem lies in the `symbolic(...)` method of the `XSoftmax` class, which is called while tracing the model. However, there are several other warnings while converting the model (are they important?).
### Expected behavior
Model saved as ONNX format. | 10-04-2022 15:09:40 | 10-04-2022 15:09:40 | I believe I solved it (rather 'I forced it to work')
(line: 159)
```python
t = self.type().dtype() if hasattr(self.type(), 'dtype') else self.type().scalarType()
TYPE_CAST = {
    'float': torch.float64,
    'int': torch.int64,
}
t = TYPE_CAST[t.lower()]
output = masked_fill(
    g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(t).min))
)
```
This is not tested with all models, but at least works for now. Hopefully you'll find a nicer solution.<|||||>cc @michaelbenayoun @lewtun <|||||>[UPDATE]
Remember when I said that there were warnings while tracing the model? Well, I know now they are important.
I cannot use the traced model with an input different from the one I used to trace the model.
This is the error I get:
```python
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_389' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:35 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) size != 0 && (input_shape.Size() % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{12,15,15}, requested shape:{-1,12,136,136}
```
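A hypothetical way to reproduce this shape mismatch, assuming the export command from the issue description wrote `onnx/model.onnx`:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Assumed path from the export command above; the only point is to run the exported
# model with a different sequence length than the one used during tracing.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small")
session = ort.InferenceSession("onnx/model.onnx")
onnx_input_names = {i.name for i in session.get_inputs()}
inputs = tokenizer("a much longer sentence than the dummy input used for tracing", return_tensors="np")
outputs = session.run(None, {k: v for k, v in inputs.items() if k in onnx_input_names})
print(outputs[0].shape)
```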
Please can you fix this?<|||||>Hey @kobiche thanks for reporting this error, we also noticed this in #19255 when validating the `deberta` models with different batch size / seq len compared to the one used to trace the model.
We'll aim to patch this ASAP<|||||>https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L577
And ONNX Runtime doesn't support the torch.where op at the moment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,319 | closed | convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py does not work on fairseq wav2vec2-xls-r weights | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
on bash,
1. wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
2. wget https://huggingface.co/facebook/wav2vec2-xls-r-300m/raw/main/config.json
3. python3 convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path output --checkpoint_path xlsr2_300m.pt --config_path config.json --not_finetuned
Then this error arises:
```
Traceback (most recent call last):
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 273, in <module>
convert_wav2vec2_checkpoint(
File "/home/heatz123/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 254, in convert_wav2vec2_checkpoint
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_path])
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/checkpoint_utils.py", line 436, in load_model_ensemble_and_task
task = tasks.setup_task(cfg.task)
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/tasks/__init__.py", line 39, in setup_task
cfg = merge_with_parent(dc(), cfg)
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/dataclass/utils.py", line 500, in merge_with_parent
merged_cfg = OmegaConf.merge(dc, cfg)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/omegaconf.py", line 321, in merge
target.merge_with(*others[1:])
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 331, in merge_with
self._format_and_raise(key=None, value=None, cause=e)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 329, in merge_with
self._merge_with(*others)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 347, in _merge_with
BaseContainer._map_merge(self, other)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 314, in _map_merge
dest[key] = src._get_node(key)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 258, in __setitem__
self._format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigKeyError: Key 'multiple_train_files' not in 'AudioPretrainingConfig'
full_key: multiple_train_files
reference_type=Optional[AudioPretrainingConfig]
object_type=AudioPretrainingConfig
```
note that I am using fairseq 0.12.2 (which was installed by default using `pip install fairseq`)
### Expected behavior
converting fairseq -> pytorch(transformers) weights should work without any errors. | 10-04-2022 14:36:33 | 10-04-2022 14:36:33 | And after some inspections on fairseq library, I found that this change can make the conversion work also on wav2vec2-xls-r weights:
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py#L249
to
```python
task_arg = argparse.Namespace(task='audio_pretraining')
task = fairseq.tasks.setup_task(task_arg)
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_path], task=task)
```
Can I make a PR on this change?<|||||>Yes please! Would you like to open a PR for this change? You can also update the conversion script for Wav2Vec2 Conformer - it's exactly the same logic in `convert_wav2vec2_conformer_checkpoint` 🤗<|||||>I see, thank you. Will open a PR within a day. |
transformers | 19,318 | closed | where can I find document about BertPreTrainedModel? | hello,
I am trying to find documentation about `BertPreTrainedModel`, which is used in ColBERTv1.
I only found the source code at the site below (4.11.3, bert_modeling):
https://huggingface.co/transformers/v3.5.1/_modules/transformers/modeling_bert.html
but I didn't find any documentation for it on the official site for the latest version:
https://huggingface.co/
So, my question is: **where can I find documentation about BertPreTrainedModel? Has it been removed?** | 10-04-2022 13:38:13 | 10-04-2022 13:38:13 | What are you trying to do with `BertPreTrainedModel`? This is an abstract class that isn't really public facing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
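As a footnote for readers who land here: `BertPreTrainedModel` is typically used as a base class when writing a custom head, roughly as in the sketch below (class and checkpoint names are illustrative, and depending on the transformers version `self.init_weights()` may be needed instead of `self.post_init()`):
```python
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class BertWithRegressionHead(BertPreTrainedModel):
    """Illustrative subclass: the base class provides config handling,
    weight initialization and from_pretrained()/save_pretrained()."""

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)                 # backbone
        self.head = nn.Linear(config.hidden_size, 1)  # custom head
        self.post_init()                              # run weight init

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        return self.head(outputs.last_hidden_state[:, 0])  # use the [CLS] vector

model = BertWithRegressionHead.from_pretrained("bert-base-uncased")
```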
transformers | 19,317 | closed | HF <-> megatron checkpoint reshaping and conversion for GPT | # What does this PR do?
With respect to the GPT model,
1. Users are now able to convert a Megatron-LM GPT model with different tensor-parallel and pipeline-parallel sizes to a universal transformers checkpoint for the `gpt2` model. The checkpoint is also sharded, at 10GB per shard by default or at the value the user provides. Sample command to convert from a Megatron-LM to a Transformers checkpoint (the command below is run for a checkpoint with tp-size 2 and pp-size 1):
```
python checkpoint_reshaping_and_interoperability.py \
--convert_checkpoint_from_megatron_to_transformers \
--load_path "megatron_lm_gpt/iter_0005000" \
--save_path "hf_checkpoint" \
--max_shard_size "200MB" \
--tokenizer_name "/home/sourab/code-parrot-minimal" \
--print-checkpoint-structure
```
Output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-megatron_to_trfs-log
2. Reverse conversion from a transformers checkpoint to a Megatron checkpoint with variable TP and PP sizes is also supported. A sample command is given below (converting to a checkpoint with `target_tensor_model_parallel_size`=2 and `target_pipeline_model_parallel_size`=1; `target_data_parallel_size` is used when `--use_distributed_optimizer` is passed):
```
python checkpoint_reshaping_and_interoperability.py \
--load_path "hf_checkpoint" \
--save_path "megatron_lm_checkpoint" \
--target_tensor_model_parallel_size 2 \
--target_pipeline_model_parallel_size 1 \
--target_data_parallel_size 2 \
--target_params_dtype "bf16" \
--make_vocab_size_divisible_by 128 \
--print-checkpoint-structure
```
Output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-trfs_to_megatron-log
A quick test to make sure everything is working properly:
Code as well as output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-gistfile1-txt
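For illustration, a minimal version of such a sanity check could look like the sketch below (the path and the prompt are placeholders; the full test is in the gist above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the sharded checkpoint produced by the Megatron-LM -> Transformers conversion.
tokenizer = AutoTokenizer.from_pretrained("hf_checkpoint")
model = AutoModelForCausalLM.from_pretrained("hf_checkpoint")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```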
 | 10-04-2022 13:21:44 | 10-04-2022 13:21:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I can't use this transformers-to-Megatron conversion with the accelerate Megatron plugin to continue fine-tuning the checkpoint. Is there anything I am missing? When I load the converted checkpoint with the resume_from_checkpoint argument, it seems that the GPUs are re-initialized and only one GPU is at 100% utilization.<|||||>Hello, thank you for the conversion tools.
By the way, is there any plan to support a conversion script between T5 HF models and Megatron models, and if so, when?
transformers | 19,316 | closed | Fix for sequence regression fit() in TF | Fixes #19308
Keras really doesn't like 1-dimensional label tensors. We've caught most of the cases where this causes problems with the dummy loss, but sequence regression slipped through and is now fixed! | 10-04-2022 13:15:38 | 10-04-2022 13:15:38 | _The documentation is not available anymore as the PR was closed or merged._ |
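For anyone hitting this before upgrading, a hedged sketch of the workaround (give the regression labels an explicit trailing dimension before calling `fit()`; the model name and data below are placeholders):
```python
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
model.compile(optimizer="adam")  # no loss passed -> the model's internal loss is used

texts = ["a sentence", "another sentence"]
scores = np.array([2.5, 4.0], dtype=np.float32)[:, None]  # shape (batch, 1), not (batch,)

enc = tokenizer(texts, padding=True, return_tensors="np")
ds = tf.data.Dataset.from_tensor_slices((dict(enc), scores)).batch(2)
model.fit(ds, epochs=1)
```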
transformers | 19,315 | closed | Added Type hints for LED TF | Based on Issue #16059
I have added type hints for Tensorflow LED model.
@Rocketknight1 Could you kindly check if this is fine? | 10-04-2022 12:49:10 | 10-04-2022 12:49:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,314 | closed | Added Type hints for LED TF | Based on Issue #16059
I have added type hints for Tensorflow LED model. | 10-04-2022 12:44:18 | 10-04-2022 12:44:18 | |
transformers | 19,313 | closed | [Docs] Fix link | # What does this PR do?
Fixes link to `accelerate`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-04-2022 12:44:16 | 10-04-2022 12:44:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,312 | closed | [WIP] Making `Camembert` independent from Roberta | # What does this PR do?
related to #19303
Making the Camembert model (pytorch version) independent from Roberta
I have changed all the Camembert classes by copy-pasting from Roberta, and made the few necessary changes for everything to work.
I'm still wondering how to change blocks like the one below (do I have to add a specific checkpoint for Camembert?):
```python
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
checkpoint="deepset/roberta-base-squad2",
output_type=QuestionAnsweringModelOutput,
config_class=_CONFIG_FOR_DOC,
expected_output="' puppet'",
expected_loss=0.86,
)
```
For testing, the following command works well:
```bash
$ RUN_SLOW=1 pytest tests/models/camembert/test_modeling_camembert.py
```
However, I have noticed that the tests only cover `CamembertModel` and not other classes like `CamembertForCausalLM`...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger related to #19303 | 10-04-2022 12:29:45 | 10-04-2022 12:29:45 | In the `CamembertPreTrainedModel` I left `base_model_prefix = "roberta"` to be equal to "roberta". If I change it into "camembert" the test fails. This is normal ?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your answers @sgugger. I took some time to understand the concepts of `flake8` and `repo-consistency-<|||||>I have done one thing without being very sure about it (that was the only way to make `make fixup` happy). I added the `CamembertPreTrainedModel` class into the `__init__` and `dummy_pt_objects` (as in the PR). I don't know if this is the correct way to do it ?<|||||>I don't understand why some torch tests failed just by replacing `self.camembert` with `self.roberta`. <|||||>Hi @sgugger, I have done git rebase from the huggingface:main, this brought all the other commit over my branch, this might be very difficult to read. If you want I can close this pull request and open a clean one ?<|||||>Arg, yes we would need a clean PR: even if I can see the changes are not related, it will mess up authorship of this commit when we merge your PR (it will show all the other random commits of your rebase) and if somehow this PR introduces a bug, when we come look at it later, it will be hard to see what changed the error.
Tip: for GitHub you need to force-push after a rebase when your PR is already open (with `git push -u origin branch --force`) |
transformers | 19,311 | open | NER Training on custom Data | https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/examples/pytorch/token-classification/run_ner.py#L200
[README.md](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification#readme) mentions providing text files for training and validation; however, run_ner.py expects CSV or JSON files only. | 10-04-2022 09:13:31 | 10-04-2022 09:13:31 | I have the same question! Can anyone tell me how to design my custom data file format? Thank u!!<|||||>Please use the [forums](https://discuss.huggingface.co/) to discuss questions like this as we keep the issues for bugs and feature requests only.<|||||>Hello, I have received your email~<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>你好,我已经收到您的邮件~<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>你好,我已经收到您的邮件~<|||||>@sgugger @WeiShi-9 @akshaydhok07 @vanpelt @pvl
I have the same problem. How do I set up custom data for training with [run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py)?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>你好,我已经收到您的邮件~<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>你好,我已经收到您的邮件~<|||||>@sgugger
A similar question has been posted in the [forum](https://discuss.huggingface.co/t/custom-files-for-run-ner-py/9156), but no one has handled it.
Also, the similar issue https://github.com/huggingface/transformers/issues/8698 hasn't actually been solved.
If you can provide a tiny example for csv or json format in README, that should be very helpful. 😃 <|||||>你好,我已经收到您的邮件~<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, I have received your email~
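For readers looking for a concrete file layout: a hedged sketch of a format that, to my understanding, `run_ner.py` accepts (column names may need to match `--text_column_name` / `--label_column_name` for your version of the script):
```python
import json

# Two toy examples in JSON-lines form: one object per line, with a list of
# tokens and a parallel list of string labels.
examples = [
    {"tokens": ["John", "lives", "in", "Paris"], "ner_tags": ["B-PER", "O", "O", "B-LOC"]},
    {"tokens": ["He", "works", "at", "Hugging", "Face"], "ner_tags": ["O", "O", "O", "B-ORG", "I-ORG"]},
]
with open("train.json", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```
A file written this way can then be passed to the script as `--train_file train.json` (and similarly for `--validation_file`).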
transformers | 19,310 | closed | Add `BloomForQuestionAnswering` | # What does this PR do?
This PR adds the class `BloomForQuestionAnswering`, inspired by [`GPTJForQuestionAnswering`](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/models/gptj/modeling_gptj.py#L1024), as the community asked for the release of this class (see the discussion here: https://huggingface.co/bigscience/bloom/discussions/46#633b35f21fd49ee0b64e29d2)
cc @sgugger @ydshieh @LysandreJik @ArthurZucker | 10-04-2022 09:05:12 | 10-04-2022 09:05:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I have just fixed your suggestions @sgugger 💪 Gently pinging you here, and let me know if you need me to open a PR for GPTJ too ;) !<|||||>Thanks a lot!! |
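For anyone who wants to try the new class once it is available, a rough usage sketch (the checkpoint is a placeholder and its QA head is untrained until fine-tuned on a QA dataset):
```python
import torch
from transformers import AutoTokenizer, BloomForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForQuestionAnswering.from_pretrained("bigscience/bloom-560m")

question = "Who trained BLOOM?"
context = "BLOOM was trained by the BigScience workshop."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```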
transformers | 19,309 | closed | Update README.md | # Fixed link in the main README.md
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 10-04-2022 08:35:54 | 10-04-2022 08:35:54 | > # Fixed link in the main README.md
- Hello @sgugger , I was reading the documentation and following the installation steps. I encountered a bug with a link in the documentation, which I have shown in the screenshot below.

- The link https://huggingface.co/docs/transformers/examples redirects to a page which does not exist in the website. Below:

- It also does not redirect to the page but throws 404 error. Below:

- I have added a valid link (as per the docs) instead of the existing one. Please check out this issue and confirm whether the link I added is valid. Thank you.
- My added link: https://github.com/huggingface/transformers/tree/main/examples
- If there is an issue or an addition needed in this version (as the webpage says), please do look into that.
<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,308 | closed | TFSequenceClassifierOutput can't return loss batch_size when num_labels=1 | ### System Info
[**Colab Env**](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1 @sgugger @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1lTxUPTa0XPRXJRV1NDYzwvzcMOY-fqJf?usp=sharing
### Expected behavior
`STS-B` task can train with `model.fit()` | 10-04-2022 07:28:08 | 10-04-2022 07:28:08 | Verified the issue, making a patch now!<|||||>@goreng2 I've submitted a fix at #19316. I'll let you know when this is merged to main.<|||||>@goreng2 The fix should now be merged, but you'll have to install from `main` to use it until the next official release. To do that, replace `pip install transformers` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. After our next release you can change your code back to just `pip install transformers`.
If this doesn't resolve your problem, feel free to comment and re-open this issue!<|||||>@Rocketknight1 It works! Thanks :) you save me! |
transformers | 19,307 | closed | Removing BertConfig inheritance from LayoutLMConfig | # What does this PR do?
Related to #19303
Remove LayoutLMConfig dependence on BertConfig.
The `__init__` from `LayoutLMConfig` diverges from `BertConfig` in the following way:
- `max_2d_position_embeddings` is an argument specific to LayoutLM
For that reason, I didn't add any `# Copied from ...`
Other change:
Previously, the arguments `position_embedding_type`, `use_cache`, `classifier_dropout` were not part of the `__init__` function for LayoutLM and were always set to the BertConfig default (through the `super().__init___()` call). Because there is no inheritance anymore, I added back those arguments to `__init__`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-03-2022 22:34:19 | 10-03-2022 22:34:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Welcome to the repo, Arnaud :)
Pinging @sgugger for review |
transformers | 19,306 | closed | T5 vocab size discrepancy between config and tokenizer | ### System Info

transformers version == '4.18.0'
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy the code in image
### Expected behavior
same vocab size | 10-03-2022 22:03:24 | 10-03-2022 22:03:24 | I found the same issue for deberta-v3 #19301. It should be fine as long as the model's vocab is larger than the tokenizer's. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,305 | closed | Huge discrepancy between HuggingFace and Timm for ViT and other vision transformers | ### Feature request
Differences between the HuggingFace and timm implementations of Vision Transformers can be listed as below:
- Missing stochastic depth (https://arxiv.org/abs/2012.12877)
- Using m.weight.data.normal_(mean=0.0, std=0.02) instead of trunc_normal_()
- Missing trunc_normal_() init for the position embedding and cls_token
My DeiT started training properly once I started using the trunc_normal_() init and stochastic depth in my HuggingFace ViT model. Also, I removed the head-pruning functionality and no longer inherit the HuggingFace ViT model class from `PreTrainedModel`, but I'm not sure whether that actually contributed to making my training work properly.
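For reference, a minimal sketch of the stochastic depth / drop-path operation mentioned above (illustrative only; not the timm or transformers implementation):
```python
import torch

def drop_path(x: torch.Tensor, drop_prob: float, training: bool) -> torch.Tensor:
    """Randomly drop the whole residual branch for some samples in the batch."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)   # one Bernoulli draw per sample
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    return x * mask / keep_prob                   # rescale to keep the expectation unchanged

# Inside a transformer block this would wrap the residual branches, e.g.:
# hidden_states = hidden_states + drop_path(attention_output, p, self.training)
# hidden_states = hidden_states + drop_path(mlp_output, p, self.training)
```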
### Motivation
These things could mean the difference between getting NaN or not during DeiT training using the process from https://arxiv.org/abs/2012.12877
### Your contribution
Would love to share my code but I can't. I refer you to read the code (https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py) | 10-03-2022 20:56:31 | 10-03-2022 20:56:31 | cc @NielsRogge @amyeroberts @alaradirik <|||||>@CharlesLeeeee thank you for bringing this up! We are aware of the discrepancy and aim to rectify it soon. We will fix the parameter initialization issue shortly and open a separate PR to add stochastic depth.
cc @LysandreJik @NielsRogge @amyeroberts <|||||>I believe setting eps in layernorm to 1e-6 rather than 1e-12 is also important.<|||||>FWIW there is an issue related to this on the timm side as well https://github.com/rwightman/pytorch-image-models/issues/1477
As per my comments, the init issue should be minor / non consequential as it would not result in a significant difference given that std == .02. I've trained from scratch with much more significantly different inits and the end results aren't too far off.
The layer norm eps is likely an issue though, that was not mentioned on the timm side. For float16, 0 + 1e-12 = 0, not so for 1e-6 or 1e-5, which are defaults for all vision models I'm aware of that use LN. It looks like there are possibly other models that incorrectly use 1e-12 such as convnext? This could cause stability issues at reduced precision and will change the validation results for weights pretrained with 1e-5 or 1e-6. Generally 1e-12 should only be used as eps if you're sticking with float32 (or all uses of that eps are guaranteed to be upcast to float32).
<|||||>Kaiming initialization should be used for the nn.Conv2d rather than .normal_() initialization in the class ViTPreTrainedModel or any class that directly inherits from PretrainedModel. And the biases of the nn.Conv2d in ViT should be initialized the same way as PyTorch. (https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv2d) @LysandreJik @NielsRogge @amyeroberts @alaradirik<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@CharlesLeeeee you are partially right, it seems that ViT uses PyTorch's default initialization scheme for `nn.conv2d`, at least in [timm](https://github.com/rwightman/pytorch-image-models/blob/7c4ed4d5a43f46084cc9b6f20a5edb8839bbeb14/timm/models/vision_transformer.py#L395). The JAX init however uses a LeCun normal as seen [here](https://github.com/rwightman/pytorch-image-models/blob/7c4ed4d5a43f46084cc9b6f20a5edb8839bbeb14/timm/models/vision_transformer.py#L415-L418).
I'm working on this in #19449 |
transformers | 19,304 | closed | Correct typos and fix a broken link in docs | # What does this PR do?
Correct typos in [docs](https://github.com/huggingface/transformers/tree/main/docs) and update the trainer doc link from https://github.com/huggingface/transformers/blob/main/docs/source/main_classes/trainer.mdx to https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.mdx
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-03-2022 20:21:19 | 10-03-2022 20:21:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,303 | closed | Make all models folder independent from each other | Transformers has a Do Repeat Yourself policy in the sense that it does not provide building blocks that we then mix and match, but we strive to have each model be self-contained in terms of code, at the price of code duplication. You can find more about this philosophy in [this blog post](https://huggingface.co/blog/transformers-design-philosophy).
There are instances in the library (mostly with older models) where this is not respected. This issue will serve as a tracker for all those instances, so that the library is cleaner and each model/tokenizer/config is easier to tweak by itself. This will also make it easier for us to test individual models in autonomy.
If you wish to make a contribution to Transformers, you can help! Pick a config/model/tokenizer in the list below (double-check someone is not working on it already by searching this page!) and indicate with a comment that wish to work on it. Read our [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) as well as the section below, and once you are ready, open a PR and tag @sgugger on it.
## How to remove a dependency from another model
There are two different types of dependencies: either a configuration/model/tokenizer uses an intermediate object from another model (for example, some tokenizer uses the `BasicTokenizer` defined in the `tokenization_bert` module), or it subclasses another configuration/model/tokenizer.
In the first case, the object code should just be copied inside the file, with a "Copied from" statement. This will make sure that code is always kept up to date even if the basic object is modified. For instance, if a tokenizer is using `BasicTokenizer`, go copy the code in `tokenization_bert` for that class, then paste it in the tokenizer module you are treating and add the following copied from comment:
```py
# Copied from transformers.models.bert.tokenization_bert.BasicTokenizer
class BasicTokenizer(object):
...
```
In the second case, the code of the class (and all its building blocks) should be copied and renamed to be prefixed by the model: for instance if you are copying code from the modeling_bert module to build Roberta, you replace all `BertLayer`, `BertOutput` etc... by `RobertaLayer`, `RobertaOutput`...
You should then add a copied from statement (when the copy is without any modification) like this one:
```py
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
class RobertaAttention(nn.Module):
...
```
Note the replacement pattern that will adapt all names used. Note that:
- you can add more of those patterns, separated by a comma like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py#L1388).
- you can ask to replace all possible casings like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/mobilebert/modeling_mobilebert.py#L1549)
- you can just copy one method and not the whole class like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/roberta/modeling_roberta.py#L741)
**NB:** No need for copied from statements in the config (the defaults are probably different anyway).
## Objects to cover
### Configurations
- [x] Flaubert config (should not use XLM)
- [x] LayoutLM config (should not use Bert)
- [x] LongformerConfig (should not use Roberta)
- [x] MarkupLMConfig (should not Roberta)
- [x] RobertaConfig (should not use Bert)
- [x] XLM-ProphetNet config (should not use ProphetNet)
- [x] XLM-Roberta config (should not use Roberta)
### Models
- [x] BertGeneration (should not use BertEncoder)
- [x] Camembert (should not use Roberta) (PyTorch + TF)
- [x] Flaubert (should not use XLM) (PyTorch + TF)
- [ ] mT5: ~PyTorch~, TensorFlow, Flax (should not use T5)
- [x] XLM-ProphetNet (should not use ProphetNet)
- [ ] Xlm-Roberta: ~PyTorch~, TensorFlow, Flax (should not use Roberta)
### Tokenizers
- [x] BertJapanese (should not use any imports from tokenization bert)
- [x] Blenderbot (should not use Roberta) (slow/fast)
- [x] Clip (should not use BasicTokenizer from Bert)
- [x] ConvBERT (should not use Bert) (slow/fast)
- [x] Cpm tokenizer (should not use XLNet) (slow/fast)
- [x] Derberta tokenizer (should not use GPT2) (slow/fast)
- [x] DistilBert (should not use Bert) (slow/fast)
- [x] Electra (should not use Bert) (fast)
- [x] Flaubert (should not use XLM)
- [x] Funnel (should not use Bert) (slow/fast)
- [x] Herbert (should not BasicTokenizer from Bert and XLM)
- [x] LayoutLM (should not use Bert) (slow/fast)
- [x] LED (should not use BART) (slow/fast)
- [x] Longformer (should not use Roberta) (fast tokenizer)
- [x] Luke (should not use Roberta)
- [x] Lxmert (should not use Bert) (slow/fast)
- [x] MobileBert (should not use Bert) (slow/fast)
- [x] Openai-GPT (should not use BasicTokenizer from Bert)
- [x] ProphetNet (should not use BasicTokenzier and WordPieceTokenizer from Bert)
- [x] Retribert tokenizer (should not use Bert) (slow/fast)
- [x] Roformer tokenizer (should not use any imports from tokenization bert)
- [x] Squeezebert tokenizer (should not use Bert) (slow/fast) | 10-03-2022 16:38:01 | 10-03-2022 16:38:01 | Hello! Happy to take LayoutLM Config and Tokenizer :) <|||||>Hi, I would like to work on this. thanks.<|||||>Hi @OtherHorizon , as explained in the issue, please pick a model/config and/or tokenizer so that others know not to pick the same one :-)<|||||>Hi @sgugger, I like this philosophy, and I can work on DoRYing `LongformerConfig` and `Longformer Tokenizer` :) <|||||>Hi @sgugger, I would like to work on RobertaConfig config and DistilBert tokenizer! <|||||>Hi @sgugger, I would love to work on `Camembert` model.<|||||>Hello @sgugger I can work on the `Xlm-Roberta` model and config<|||||>Heya! I'd like to grab the `OpenAI-GPT` tokenizer :)<|||||>Hi! I'd Like to work on `Flaubert` config and model<|||||>Hi, I would like to work on mT5 and ProphetNet.<|||||>Hey! I can take a look at ClipTokenizer!<|||||>Hi, i would like to work on `Electra` and `Luke` Tokenizer<|||||>Hi, I would like to work on `BertGeneration`.
<|||||>Hi! I'd Like to work on `ConvBERT` and `Lxmert` tokenizer.<|||||>Hi! I would like to work on `MobileBert` tokenizer.<|||||>Hello! I can also take up refactoring XLM-ProphetNet model and config. <|||||>Hello, commenting to mark retribert<|||||>Hello! I would like to work on `Herbert` tokenizer.<|||||>Hello, I would like to work on Roformer tokenizer<|||||>I can work on LED <|||||>Hi, I'd would gladly like to work on the Funnel tokenizer.<|||||>Hi all - I'd like to work on `Blenderbot` and `Squeezebert tokenizer` 😄<|||||>Hi, I'd like to work on `MarkupLM` config. <|||||>Coming in to mark BertJapanese and Cpm now that I have some idea of how this goes<|||||>I would like to work on `Derberta tokenizer` maybe a typo . Just to confirm, task is to remove `GPT2Tokenizer` from https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/deberta/tokenization_deberta.py#L66<|||||>I can work on `mt5 model` if free! @divyanshugit Can I take the issue, if you haven't already started?<|||||>hi @sgugger i would love to work on MarkupLMConfig<|||||>@sgugger Would like to work on the Roberta config.Please assign me<|||||>hi @sgugger i can try working on the ~~XLM-ProphetNet model~~
edit: just saw there was a pr for this, I can take whatever is left (if any)<|||||>The fast tokenizers for ELECTRA and Longformer are still available FYI :-)<|||||>@sgugger I would like to contribute fast tokenizers `ELECTRA` and `Longformer`.
Edit :
@sirmammingtonham I missed your message. I can take `ELECTRA`, you can take `Longformer` ?<|||||>@sgugger I would like to work on `Funnel Tokenizer`<|||||>@Threepointone4 yep I can take the longformer fast tokenizer thanks!<|||||>@sgugger , Are there any tasks available further. I'd like to work on this
<|||||>I think everything is taken as of now. If someone reports back they can't complete the model they picked, I will let you know!<|||||>@sgugger what if the code is changed in the source after it has been copied to the target? Do you have to manually copy again the new version?
<|||||>No, that's what the `Copied from` statements are for, they will make sure the code always stay up to date.<|||||>Hi @sgugger I would like to work on RobertaConfig <|||||>Hi @sgugger , I would like to work on `BertJapanese` and `Cpm` tokenizer.<|||||>Hi @sgugger, while browsing through the different models, I realized that CamembertConfig is dependent on RobertaConfig. But, it seems the config is exactly the same as RobertaConfig. So essentially CamembertConfig is actually dependent on BertConfig, since RobertaConfig is dependent on BertConfig.
However, I could not find CamembertConfig on the list above, so I am unsure if there is a need for seperating the RobertaConfig dependecy from CamembertConfig. Just incase it is, I will start a PR and maybe I can get some feedback from there.<|||||>Hi @sgugger, I would like to work on RobertaConfig next. Unsure if someone is working on it currently or not, since the last [pull request](https://github.com/huggingface/transformers/pull/19856) submitted for RobertaConfig was 14 days ago. That pull request was later closed, and no new PR was opened since then. I will submit a PR just incase, no one is working on it.<|||||>Hi @sgugger ! I would like to work on `Luke` tokenizer<|||||>Hey @sgugger, Can I take the `mT5` model and `DistilBert` tokenizer? Thanks,<|||||>Sure<|||||>mt5 is the last one on the list!<|||||>@sgugger I'll give mt5 a shot<|||||>Ah, @SD-13 are you still working on this?<|||||>> Ah, @SD-13 are you still working on this?
Yes, I am on it.<|||||>@SD-13 No worries! If you haven't started already, I've already been poking around at it for a bit and I'd like to keep going with it. Totally up to you, though :D<|||||>@SamuelzXu , I already started! Please feel free to take some other issue, thanks for asking though :)<|||||>Hi @SD-13 , I noticed for your PR you didn't make any modifications to the mt5 tokenizers, which have dependencies on T5Tokenizer. Mind if I take care of that part if you're not already?<|||||>Hey @SamuelzXu, I don't think it's there.
<|||||>As @ydshieh discovered, a couple of models slipped through the cracks, check the list at the top if you still want to contribute :-) <|||||>Hey @sgugger , Can I take the `Xlm-Roberta` and `mt5` model. Thanks,<|||||>@SD-13 You are very welcome to take them. But it would be a good idea to work on one model type at a time :-) and take another one if it's still available ❤️ when you finish the selected one.<|||||>@ydshieh , yeah very true, thanks for reminding that. I would take `mt5`<|||||>@SD-13 I was referring to [this line](https://github.com/huggingface/transformers/blob/4f1c9d162ee53004be4d63b76627bd0c2f5a31a9/src/transformers/models/mt5/__init__.py#L37) in init.py which defines MT5Tokenizer as T5Tokenizer<|||||>@SD-13 [And also](https://github.com/huggingface/transformers/blob/4f1c9d162ee53004be4d63b76627bd0c2f5a31a9/src/transformers/models/mt5/__init__.py#L44) MT5TokenizerFast defined as T5TokenizerFast<|||||>@sgugger I'll take a look at the conversion for XLM-Roberta. Could you add [MT5Tokenizer](https://github.com/huggingface/transformers/blob/4f1c9d162ee53004be4d63b76627bd0c2f5a31a9/src/transformers/models/mt5/__init__.py#L37) and [MT5TokenizerFast](https://github.com/huggingface/transformers/blob/4f1c9d162ee53004be4d63b76627bd0c2f5a31a9/src/transformers/models/mt5/__init__.py#L44) to the list please? Looks like they're defined as the T5Tokenizer and T5TokenizerFast, respectively. |
transformers | 19,302 | closed | Don't automatically add bug label | # What does this PR do?
This PR removes the auto-added "bug" label in issues created with the Bug template. It's better if we add said label ourselves, since those issues range from questions to bugs in the user's code and do not always correspond to a bug in the library. | 10-03-2022 16:03:42 | 10-03-2022 16:03:42 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 19,301 | closed | deberta-v3 has 100 more vocabs than its tokenizer | ### System Info
- `transformers` version: 4.22.1
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.8.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
Hi @LysandreJik @SaulLu , I think this issue needs both of you to help or confirm:
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```Python
model_type = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_type)
print(tokenizer.vocab_size) # output: 128000
print(len(tokenizer.vocab)) # output: 128001, the extra one is padding?
config = AutoConfig.from_pretrained(model_type)
print(config.vocab_size) # output: 128100
model = AutoModel.from_pretrained(model_type, config=config)
print(len(model.embeddings.word_embeddings.weight))  # output: 128100, which is consistent with the config
```
### Expected behavior
The deberta model should have the same vocab_size as its tokenizer. | 10-03-2022 14:40:38 | 10-03-2022 14:40:38 | Maybe of interest to @ArthurZucker as well<|||||>> Maybe of interest to @ArthurZucker as well
Thanks Lysandre!<|||||>Hi @wenmin-wu,
It seems that the difference between the size of the tokenizer and the size of the `word_embeddings` matrix is voluntary on the part of the deberta-v3's authors as you can see in the issue https://github.com/microsoft/DeBERTa/issues/103.
By experience, it can happen that the size of `word_embeddings` is bigger than the total number of known tokens, for constraints on the size of the matrix (which should be a multiple of a certain number for example) or by precautions because the shape of the model was decided before having the final form of the tokenizer or to have "available" tokens which could be added later (with a little fine-tuning anyway).<|||||>Hi @SaulLu , thanks for your explanation. I'm also confused with the tokenizer:
1. The tokenizer doesn't directly return the index of that token, e.g.
```Python
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base", use_fast=True)
tokenizer("test")
# outputs: {'input_ids': [1, 1010, 2], 'token_type_ids': [0, 0, 0], 'attention_mask': [1, 1, 1]}
tokenizer.vocab["test"]
# outputs: 9982
# 9982 != 1010
```
2. How does the tokenizer handle the space before/after the special character? e.g.:
```Python
tokenizer("you (are) nice", return_attention_mask=False, return_token_type_ids=False)
# outputs: {'input_ids': [1, 274, 287, 6614, 285, 1085, 2]}
tokenizer("you(are)nice", return_attention_mask=False, return_token_type_ids=False)
# outputs: {'input_ids': [1, 274, 555, 6614, 285, 22184, 2]}
```
seems the difference is ` (` -> `287` and `(` -> `555`, `) ` -> `1085` and `)` -> `22184`
but actually, their decoded value is the same (special character without space):
```Python
tokenizer.decode(287) == tokenizer.decode(555), tokenizer.decode(1085) == tokenizer.decode(22184)
# Outputs: True, True
```<|||||>Hi @wenmin-wu,
> The tokenizer doesn't directly return the index of that token, [...]
By default, the `"microsoft/deberta-v3-base"` model adds a prefix space. You can see it this by running (knowing that the `▁` symbol corresponds to a space):
```python
print(tokenizer.convert_ids_to_tokens(tokenizer("test").input_ids))
# outputs: ['[CLS]', '▁test', '[SEP]']
```
> How does the tokenizer handle the space before/after the special character?
To see the token to which each id corresponds I advise you to use the `convert_ids_to_tokens` method instead of `decode`. On the example you shared, you will see that spaces are well included in the following token when they exist:
```python
print(tokenizer.convert_ids_to_tokens(tokenizer("you (are) nice", return_attention_mask=False, return_token_type_ids=False).input_ids))
# outputs: ['[CLS]', '▁you', '▁(', 'are', ')', '▁nice', '[SEP]']
print(tokenizer.convert_ids_to_tokens(tokenizer("you(are)nice", return_attention_mask=False, return_token_type_ids=False).input_ids))
# outputs: ['[CLS]', '▁you', '(', 'are', ')', 'nice', '[SEP]']
```
For information, the `decode` method is a method that does its best to reconstitute a text from a sequence of token ids produced by a generative model. This task being very complicated I suggest you try to use this method only when it is really necessary (i.e. only for a sequence of token ids produced by a generative model).
I hope this helped you<|||||>@SaulLu Got it, thanks a lot for your detailed explanation<|||||>I'm so glad I could help you! I'm closing this issue as I feel like it answers all your questions. :hugs: <|||||>Yep @SaulLu, your answers helped me a lot with the [Feedback Prize - English Language Learning](https://www.kaggle.com/competitions/feedback-prize-english-language-learning/leaderboard) Kaggle competition. I'm in the gold zone now. Many thanks!<|||||>@SaulLu Got another improvement again. I'm a Prize Contender now! Many thanks! |
transformers | 19,300 | closed | Trainer with save_total_limit=1 keeps 2 checkpoints when EarlyStoppingCallback is active | ### System Info
Transformers: 4.21.3
Python: 3.10.4
OS: Linux 5.4.0-91-generic #102-Ubuntu SMP x86_64
Torch: 1.12.1+cu113
GPU: A100 40G
### Who can help?
@sgugger
### Information
As the title says. See below for details.
### Tasks
I trained BERT-based sequence classifiers.
### Reproduction
1. Run a training with `EarlyStoppingCallback` active, `save_total_limit` set to 1 and `early_stopping` set to something (e.g. 3)
2. Run it for a number of iterations greater than `early_stopping`
3. Check the number of checkpoints.
What happens is that if the best performing checkpoint is not the last, both the last and the best checkpoints are kept.
### Expected behavior
With `save_total_limit=1`, I would expect that only the best checkpoint is kept. | 10-03-2022 13:48:09 | 10-03-2022 13:48:09 | Thanks for the writeup. That's not a bug, but very much intended. This is the only instance in which the `save_total_limit` argument is not fully respected, because:
- we need to keep the best model
- we also need to keep the last model to be able to resume training if a crash happens<|||||>@sgugger "If a crash happens". But what about when the training has ended? The checkpoint is not really needed anymore then, is it?
In any case, I could not find anything about this exception in the documentation. If it is not because of my inaptitude at looking up information (which it well may be), it might be worth adding a sentence to the description of the `save_total_limit`.<|||||>Sure, do you want to make a PR for this?<|||||>Just to clarify: a PR for what? :smile: <|||||>To add the exception to the documentation and/or delete the checkpoint at the end of training.<|||||>@sgugger I would like to fix this issue, can help on where this fix needs to go ? <|||||>@sgugger I think we can close this ticket<|||||>Thank you guys for taking care of this! |
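For illustration, a sketch of the setup being discussed plus an optional post-training cleanup (the model and datasets are placeholders; `trainer.state.best_model_checkpoint` points at the best checkpoint after training):
```python
import pathlib
import shutil

from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,            # best + last may still both be kept, as explained above
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model,                   # placeholder model
    args=args,
    train_dataset=train_dataset,   # placeholder datasets
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()

# Optional cleanup: once training is over, drop every checkpoint except the best one.
best = pathlib.Path(trainer.state.best_model_checkpoint or "")
for ckpt in pathlib.Path(args.output_dir).glob("checkpoint-*"):
    if ckpt.resolve() != best.resolve():
        shutil.rmtree(ckpt)
```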
transformers | 19,299 | closed | Model's internal loss is different than calculated loss at tf.keras.metric.Metric | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.5
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using the tf.keras.Model version of gpt2, and training it as causal language modelling.
```
gpt2_model.compile(..., metrics=[PerplexityMetric()]) #using inner loss (hf_compute_loss)
gpt2_model.fit(...)
```
The problem occurs when the metric is calculated. I did not find a way to reuse the internal transformers loss inside a tf.keras.metrics.Metric (if anyone knows how, it would be good to get rid of the duplicate calculation). So I am calculating the loss as written in `hf_compute_loss` in `TFCausalLanguageModelingLoss`. PerplexityMetric is defined as:
```
class PerplexityMetric(tf.keras.metrics.Metric):
    def __init__(self, name='perplexity', **kwargs):
        super(PerplexityMetric, self).__init__(name=name, **kwargs)
        self.cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True, reduction=tf.keras.losses.Reduction.NONE
        )
        self.perplexity = self.add_weight(name='tp', initializer='zeros')

    def _calculate_perplexity(self, real, pred):
        unmasked_loss = self.cross_entropy(tf.nn.relu(real), pred)
        # make sure only labels that are not equal to -100 affect the loss
        loss_mask = tf.cast(real != -100, dtype=unmasked_loss.dtype)
        masked_loss = unmasked_loss * loss_mask
        reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(
            loss_mask
        )
        loss_ = tf.reshape(reduced_masked_loss, (1,))
        perplexity = tf.math.exp(loss_[-1])
        return perplexity

    def update_state(self, y_true, y_pred, sample_weight=None):
        perplexity = self._calculate_perplexity(y_true, y_pred)
        self.perplexity.assign(perplexity)

    def result(self):
        return self.perplexity

    def reset_state(self):
        # reset at the start of each epoch.
        self.perplexity.assign(0.0)
```
### Expected behavior
Both inner loss calculation and the loss calculation in the metric are same. What I expect is that calculated losses are the same in training output. Last float within brackets is loss calculated in PerplexityMetric() (_loss__).
```
1/7 [===>..........................] - ETA: 2s - loss: 8.0234 - perplexity: 1708.3318[7.44036245]
2/7 [=======>......................] - ETA: 2s - loss: 8.1277 - perplexity: 1703.3676[7.78889942]
3/7 [===========>..................] - ETA: 1s - loss: 8.2354 - perplexity: 2413.6597[7.75492764]
4/7 [================>.............] - ETA: 1s - loss: 8.3068 - perplexity: 2333.0405[7.91714859]
5/7 [====================>.........] - ETA: 0s - loss: 8.3837 - perplexity: 2743.9358[7.48942709]
6/7 [========================>.....] - ETA: 0s - loss: 8.3680 - perplexity: 1789.0269[7.8069911]
```
I did not understand the difference between loss values. Is there any wrong calculation in PerplexityMetric or is this a buggy behaviour? | 10-03-2022 10:24:26 | 10-03-2022 10:24:26 | Hi @uunal, I don't think this is a bug in `transformers`. The main issue is that Keras displays the loss as the average for the whole epoch so far, but your Metric only displays the loss for the most recent batch. This is because you overwrite it with each batch. A better approach might be to store `self.total_loss` and `self.num_batches` and then in `result()` you could return `tf.math.exp(self.total_loss / self.num_batches)`.
I'm going to close this issue for now, but if you believe you've identified a bug in `transformers`, feel free to re-open it!<|||||>Thank you for the quick response and your suggestion. I understood the difference in losses.
If anyone is interested later on, the updated metric code is below:
```
class PerplexityMetric(tf.keras.metrics.Metric):
    def __init__(self, name="perplexity", **kwargs):
        super(PerplexityMetric, self).__init__(name=name, **kwargs)
        self.cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True, reduction=tf.keras.losses.Reduction.NONE
        )
        self.total_loss = self.add_weight(name="pl", shape=(), initializer="zeros")
        self.num_batches = self.add_weight(
            name="nb", shape=(), initializer="zeros"
        )

    def _calculate_loss(self, real, pred):
        unmasked_loss = self.cross_entropy(tf.nn.relu(real), pred)
        # make sure only labels that are not equal to -100 affect the loss
        loss_mask = tf.cast(real != -100, dtype=unmasked_loss.dtype)
        masked_loss = unmasked_loss * loss_mask
        reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask)
        loss_ = tf.reshape(reduced_masked_loss, (1,))
        return loss_[-1]

    def update_state(self, y_true, y_pred, sample_weight=None):
        loss_ = self._calculate_loss(y_true, y_pred)
        # update state variables
        self.num_batches.assign_add(1)
        self.total_loss.assign_add(loss_)

    def result(self):
        # perplexity: subword-level
        return tf.math.exp(self.total_loss / self.num_batches)

    def reset_state(self):
        # reset at the start of each epoch.
        self.total_loss.assign(0.0)
        self.num_batches.assign(0)
``` |
transformers | 19,298 | closed | Bug with Training T5 Tokenizers on New Data | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.135-122.509.amzn2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
# Train T5's tokenizer on some new data.
training_corpus = ["12rdpo2rkfp", "$##@sdfag", "ja23m d@#"]
tokenizer = AutoTokenizer.from_pretrained("t5-large")
new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=110)
# Print the vocabulary sequentially.
for i in range(110):
    print(new_tokenizer.convert_ids_to_tokens([i])[0])
# You'll see sentinel tokens such as `<extra_id_1>` are NOT at the end of the vocabulary.
```
### Expected behavior
The sentinel tokens in T5 must be at the end of the vocabulary. This constraint is stated in the documentation (e.g., [here](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer.extra_ids)), and official examples are relying on it. The code below is trying to find sentinel tokens from the back of the vocabulary (`len(self.tokenizer) - sentinel_ids`).
https://github.com/huggingface/transformers/blob/5cd16f01db3b5499d4665e8624801ed30ba87bdd/examples/flax/language-modeling/run_t5_mlm_flax.py#L378
However, when I follow [Hugging Face Course](https://huggingface.co/course/en/chapter6/2?fw=pt) to train T5's tokenizer on new data. The new tokenizer does not conform to this constraint. | 10-03-2022 06:53:46 | 10-03-2022 06:53:46 | Hi @yangky11 ,
Thank you very much for pointing this out! It is indeed a problem!
To fix it, we'll have to look into whether we need to pay special attention to the `train_new_from_iterator` method of the `T5TokenizerFast` tokenizer or (even better) whether we really need the sentinel ids to be at the end.<|||||>@SaulLu Thanks for your response!
I'm not familiar with every detail of T5. But to my knowledge, the only reason we need them to be at the end is that user code sometimes relies on this feature (as in the example I mentioned). It's also possible that you add APIs to `T5TokenizerFast` to allow the user to query the ids of sentinel tokens. Then the user code can handle them on their side.<|||||>Also cc @ArthurZucker here FYI<|||||>@patrickvonplaten @SaulLu I can pick this issue. What should be the approach we need to take ?
<|||||>Also cc @LysandreJik @sgugger here <|||||>Any luck on this?<|||||>@patrickvonplaten @sgugger Can we decide what approach to take to fix this ? <|||||>I think choosing how to resolve this issue requires some discussion.
I think we should change the example scripts that look for sentinel tokens based on the fact that they are the last tokens in the vocabulary. Rather, it should be based on the fact that they are the tokens in the `additional_special_tokens` attribute (respectively `additional_special_tokens_ids` for their ids).
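For illustration, a sketch of what that lookup could look like (assuming the sentinel tokens all follow the `<extra_id_*>` naming convention):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
# Collect sentinel tokens from the special-tokens list instead of assuming
# they sit at the end of the vocabulary.
sentinel_tokens = [t for t in tokenizer.additional_special_tokens if t.startswith("<extra_id_")]
sentinel_ids = tokenizer.convert_tokens_to_ids(sentinel_tokens)
```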
However, an important thing to know with this suggestion is that even if for T5 tokenizers there is no reason to have any additional special tokens other than sentinel ids, there is still a risk that a user adds a new additional special token that is not a sentinel token and this would lead to a (silent) error.
Which leads me to a second opinion, I think this issue shows here that a `breaking change `would be very beneficial to the user experience. Indeed, for T5, sentinel ids have a very important meaning for the model (as much as bos, eos, or sep tokens for bert type models for example) and I think it would be justified to have a dedicated attribute for them (such as `sentinel_tokens`) rather than being in the additional tokens list of `SpecialTokensMixin`. Or, another possibility would be to be able to name the tokens in the list of additional tokens so that we can rely on this naming to retrieve them from the list rather than their position in the list. <|||||>@SaulLu I was also thinking along same lines, as a simpler fix remove the pointers of sentinel tokens as the last tokens and then use an additional attribute to get those token.
@patrickvonplaten @sgugger what do you folks think of this approach ? <|||||>As long as it's done in examples that are focused on T5-only (e.g. not the generic `run_summarization`), no problem with me!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Has this issue been resolved? I see at least the `run_t5_mlm_flax.py` example is still relying on the old behavior. |
transformers | 19,297 | closed | Floating point exception (core dumped) when using `transformers.onnx` | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Code
```bash
python -m transformers.onnx --model=google/long-t5-tglobal-base --feature=seq2seq-lm onnx
```
#### Error Message
```bash
Validating ONNX model...
Floating point exception (core dumped)
```
#### Another Trial
Although the model isn't an official one, it appears in the [document](https://huggingface.co/docs/transformers/model_doc/longt5#transformers.LongT5ForConditionalGeneration.forward.example). It showed the same message as above.
```bash
python -m transformers.onnx --model=Stancld/longt5-tglobal-large-16384-pubmed-3k_steps --feature=seq2seq-lm onnx
```
### Expected behavior
It shouldn't raise `Floating point exception (core dumped)`. | 10-03-2022 06:30:41 | 10-03-2022 06:30:41 | cc @lewtun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,296 | closed | AssertionError: Padding_idx must be within num_embeddings MarianModel | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code is taken from official huggingface documentation [here](https://huggingface.co/docs/transformers/model_doc/marian#transformers.MarianConfig.example)
```python
from transformers import MarianModel, MarianConfig

# Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration
configuration = MarianConfig()

# Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration
model = MarianModel(configuration)

# Accessing the model configuration
configuration = model.config
```
### Expected behavior
I am trying to initialize MarianModel without using pretrained weights. So instead of this code
```
model_name = "penpen/novel-zh-en"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
I want to use it without a pretrained model. Instead I want it to be trained from scratch. So I used MarianConfig, but it throws the error.
```
AssertionError: Padding_idx must be within num_embeddings
```
| 10-03-2022 04:19:39 | 10-03-2022 04:19:39 | Hi @talhaanwarch 👋 This seems to be an error in our default value for the size of the embeddings. Thank you for raising it!
Meanwhile, you should be able to run
```python
from transformers import MarianModel, MarianConfig
# Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration
configuration = MarianConfig()
configuration.vocab_size = configuration.pad_token_id + 1
# Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration
model = MarianModel(configuration)
``` |
transformers | 19,295 | closed | I think group_by_length feature can be faster using a namely length list | ### Feature request
Current group_by_length requires ```lengths``` list that can be fully-loaded from dataset:
```
class Trainer:
    ...
    def _get_train_sampler(self) -> Optional[torch.utils.data.Sampler]:
        ...
        if self.args.group_by_length:
            if is_datasets_available() and isinstance(self.train_dataset, datasets.Dataset):
                lengths = (
                    self.train_dataset[self.args.length_column_name]
                    if self.args.length_column_name in self.train_dataset.column_names
                    else None
                )
```
For example, if I want to make the lengths list with the 'attention_mask' column, the ```lengths``` list would look like:
```
[[1,1,1,1,1],[1,1,1,1,1,1,1,1,1],[1,1,1] ...]
```
But if I want to use the length information only, why can't I make this list like:
```
[5,9,3...] # length of each row
```
If I make this 'length file' before training with the group_by_length feature, then the ```lengths``` list can easily be made by:
```
def appender(length):
    temp = []
    temp.append([1 for _ in range(length)])
    return temp

class customized_trainer(Trainer):
    def handle_group_by_length(self, filepath=''):
        with open(filepath,'r') as f:
            lengthsfile = json.load(f)  # json or whatsoever
        import multiprocessing as mp
        pool = mp.Pool(6)
        lengths = pool.map(appender, lengthsfile)
        lengths = [ent for sublist in lengths for ent in sublist]
        self.lengths = lengths
```
```
trainer.handle_group_by_length('file.json') # this would load the length file
```
This reduced the loading time to about 1/10: from 30 minutes to 3 minutes for me.
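For comparison, the approach suggested at the end of this thread is to precompute a length column once with `datasets.Dataset.map`, so the trainer can pick it up via `length_column_name`. A minimal sketch (the dataset, tokenizer and column names are illustrative):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train")  # placeholder dataset

def add_length(batch):
    # store only the token count per row, not the full attention mask
    return {"length": [len(ids) for ids in tokenizer(batch["text"])["input_ids"]]}

dataset = dataset.map(add_length, batched=True, num_proc=4)  # cached after the first run
```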
### Motivation
While trying to train with a ~25GB text dataset, this code required about 30 minutes to load the ```lengths``` list.
I found that loading full column in the group_by_length feature is the problem.
So I changed the code to load a much smaller dataset - the column length list - and it worked.
### Your contribution
This would require users to make a 'row length list' file before training, so I'm not that insistent on this proposal. | 10-03-2022 04:14:57 | 10-03-2022 04:14:57 | cc @sgugger <|||||>As explained in the documentation, you should provide the `lengths` column in the dataset. If you use the `Dataset.map` method to build it, you won't need to implement any multiprocessing since it will do it for you. It will also cache the result so you only need to do the computation once. |
transformers | 19,294 | closed | Cant run same int8 example on local machine that works in Colab | ### System Info
This also looks like a bug:
(but consumer hardware Dell workstation with 64GBs RAM and RTX 3060 12GBs )
```
Traceback (most recent call last):
File "/home/user/.local/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
### Who can help?
I dont see the Bloom model above, but Im trying to follow this example:
https://huggingface.co/docs/transformers/perf_infer_gpu_one
using the colab for "BLOOM-3B"
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying to run the colab: https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing
Which runs fine using the "load_in_8bit=True" arg, but gives this error locally with any of the models. I've tried downloading with huggingface_hub, git lfs clone and using the normal cache (with the smaller model).
"TypeError: BloomForCausalLM.__init__() got an unexpected keyword argument 'load_in_8bit'"
Somehow AutoModelForCausalLM is passing off to BloomForCausalLM which is not finding load_in_8bit.. I'm going to try installing the latest transformers package..
```
#name = "/home/user/bloom-mnt/huggingface2/models--bigscience--bloom-7b1/snapshots/fdd9eac0805a9fa2d0641982eceda25885251975"
#name = "/home/user/bloom-mnt/huggingface2/models--bigscience--bloom-3b/snapshots/515ae965cc83b9ebbf0054de106c434bd4ec35dc"
name = "/home/user/bloom-mnt/huggingface2/bloom-560m"
#name = "bigscience/bloom-1b7"
text = "Hello my name is"
max_new_tokens = 20
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#device = ("cpu")
model_8bit = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model_8bit = model_8bit.to(device)
def generate_from_model(model, tokenizer):
    encoded_input = tokenizer(text, return_tensors='pt')
    output_sequences = model.generate(input_ids=encoded_input['input_ids'].to(device))
    return tokenizer.decode(output_sequences[0], skip_special_tokens=True)
print(generate_from_model(model_8bit, tokenizer))
```
### Expected behavior
success | 10-03-2022 03:47:23 | 10-03-2022 03:47:23 | I solved this by using "pip install git+https://github.com/huggingface/transformers.git" for the latest transformers package. I installed the one locally in the last month, but things must change that fast in this space. I leaving the issue for others, but closing.<|||||>I'm getting `"AttributeError: /home/user/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats"` now, and dont see any search results for this error.<|||||>If someone knows about this error above, that would be great. <|||||>This error turned out to be from not having Cuda installed properly. PyTorch comes with its own many times, but bitsandbytes doesnt use it.<|||||>Hello,
I stumbled over the same error: "undefined symbol: cget_col_row_stats", but I have cuda installed (Stable Diffusion works)
Can you tell me what you mean by "installing Cuda properly"?<|||||>@OWKenobi Well, on my Ubuntu 2204 system, with RTX3060, the highest Cuda available was 11.7, however the "compatible" Cuda version is 11.6 (look for cuda116 in the Torch package name). 11.6 however requires a slightly older driver 515.65.01. So I had to remove the one from `ubuntu-drivers autoinstall` and manually install 515. Then had to install Cuda 11.6. Hope that helps.
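One quick way to see which CUDA runtime the installed PyTorch wheel was built against (a small sketch; `torch.version.cuda` is the standard attribute, the printed value is just an example):
```python
import torch

print(torch.version.cuda)        # e.g. "11.6" for a cu116 wheel
print(torch.cuda.is_available())
```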
There might be one other AskUbuntu post about this:
https://askubuntu.com/questions/1392998/cuda-installation-uncomprehensible-conflicts |
transformers | 19,293 | closed | T5 Tokenizer Prepends Space after Each Added (Extra) Token | ### System Info
```
$ transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.22.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.10.7
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-base')
tokenizer.add_tokens(['<']) # '>' is already in the vocab
tokenizer.decode(tokenizer('a>=5').input_ids)
# prints 'a>=5</s>' as expected (no space after >)
tokenizer.decode(tokenizer('a<=5').input_ids)
# prints 'a< =5</s>'
```
### Expected behavior
There shouldn't be a space after the `<` character. | 10-03-2022 03:40:47 | 10-03-2022 03:40:47 | In case it helps with debugging, it returns a different result with `use_fast=False`:
```
tokenizer = AutoTokenizer.from_pretrained('t5-base', use_fast=False)
tokenizer.add_tokens(['<'])
tokenizer.decode(tokenizer('a<=5').input_ids)
# 'a < =5</s>' (notice the space both before and after the `<`)
```
<|||||>Maybe also of interest to @ArthurZucker <|||||>I did some digging, and I believe the culprit (in the non-fast, but potentially fast, tokenizer) is this:
https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L433
```
# Make sure we don't split on any special tokens (even they were already in the vocab before e.g. for Albert)
if special_tokens:
    if len(new_tokens) == 1:
        _insert_one_token_to_ordered_list(self.unique_no_split_tokens, new_tokens[0])
    else:
        self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens)))
else:
    # Or on the newly added tokens
    if len(tokens_to_add) == 1:
        _insert_one_token_to_ordered_list(self.unique_no_split_tokens, tokens_to_add[0])
    else:
        self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add)))

self._create_trie(self.unique_no_split_tokens)
```
Do you know if that's a fundamental restriction with respect to the models? Or if not, could we potentially expose a flag that disables this behavior?<|||||>@ankrgyl What is the expected behaviour of decode should be ? <|||||>My desired behavior would be that `a<=5` round trips (i.e. encodes and then decodes) to `a<=5`, not `a < =5`<|||||>@LysandreJik @ArthurZucker I debugged this issue.
In case the input is a>=5
calling self.tokens_trie.split() returns ['a>=5']
In case the input is a<=5
calling self.tokens_trie.split() returns ['a', '<', '=5']
Here '<' is not part of the original vocab is added. <|||||>Hi @raghavanone just to clarify, that has been known the whole time. The specific issue here is that added tokens (in this case `<`) _cannot_ split words (see the code snippet I pasted above).<|||||>@ankrgyl Did some more digging, this is being done intentionally when we add new token to tokenizer. Look at _decode function in tokenization_utils.py . I do not think this a bug.
@ArthurZucker Please validate this understanding.<|||||>Hey! @raghavanone you are right, this is not a bug!
@ankrgyl you can use the following snippet :
```python
>>> tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)
>>> 'a<=5</s>'
```
(this works for the slow tokenizer, not for the fast, will have a look) tell me if that fixes your issue! 😄 <|||||>@ArthurZucker Just pondering why is slow and fast tokenizer functionally not equal ?<|||||>Well.. this is not really intended ^^ But mostly the `fast` is an entire library mostly implemented in `rust`, so we must have forgotten to update this argument when adding it to the `transformers` tokenizers. cc @LysandreJik and @SaulLu FYI 🤗 <|||||>@ArthurZucker confirmed with both the slow and fast tokenizers (`build_tokenizer()` below is a wrapper function in my code that simply adds `<` as a special token):
```
In [2]: tokenizer = model.build_tokenizer()
In [3]: tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)
Out[3]: 'a< =5</s>'
In [4]: tokenizer = model.build_tokenizer(use_fast=False)
In [5]: tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)
Out[5]: 'a<=5</s>'
```<|||||>Awesome, closing this issue, will open a PR in tokenizers when I have bandwith to try to match the outputs. <|||||>@ArthurZucker happy to help with this |
transformers | 19,291 | closed | make more clear fail on numpy tensor in marian | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-02-2022 18:04:29 | 10-02-2022 18:04:29 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19291). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,290 | closed | ValueError: The following `model_kwargs` are not used by the model: ['length'] | ### System Info
4.22.2
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
prompt = tokenizer(user_input, return_tensors='pt', return_length=True)
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, ...)
```
When using "return_length=True" with the tokenizer, the error is given. This is from a change in a recent version and did not happen in older versions.
`ValueError: The following `model_kwargs` are not used by the model: ['length'] (note: typos in the generate arguments will also show up in this list)`
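A workaround that keeps the length information without passing it to `generate` is sketched below (illustrative only, not from the original report; `max_new_tokens` replaces the length-derived `max_length`):
```python
prompt = tokenizer(user_input, return_tensors='pt', return_length=True)
length = prompt.pop('length')  # keep it for bookkeeping; generate() must not receive it
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, max_new_tokens=50, do_sample=True)
```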
### Expected behavior
Model should not produce an error when "return_length" is set to True
Downgrade to 4.21.0 fixes the problem and according to my googling this is what people are doing | 10-02-2022 15:37:16 | 10-02-2022 15:37:16 | Hi @CrackerHax,
Thanks for the issue.
Could I ask you to share also the lines of code that you used to initialize the model, tokenizer and user_input which work with the `4.21.0` version of transformers and not the newer versions?<|||||>FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:
``` py
import torch
from lavis.models import load_model_and_preprocess
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
# this also loads the associated image processors
model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
# preprocess the image
# vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
# generate caption
model.generate({"image": image})
# ['a large fountain spewing water into the air']
```
In my case I was getting the following error:
```
The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)
```<|||||>> Hi @CrackerHax,
>
> Thanks for the issue.
>
> Could I ask you to share also the lines of code that you used to initialize the model, tokenizer and user_input which work with the `4.21.0` version of transformers and not the newer versions?
config = transformers.GPTJConfig.from_pretrained("./gpt-j-6B-8bit")
tokenizer = AutoTokenizer.from_pretrained("./gpt-j-6B-8bit",add_prefix_space=True)
vocab = tokenizer.get_vocab()
gpt = GPTJForCausalLM.from_pretrained("./gpt-j-6B-8bit",low_cpu_mem_usage=True)
prompt = tokenizer(user_input, return_tensors='pt', return_length=True)
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, max_length=max_length, do_sample=True, temperature=temperature, repetition_penalty=2.0)
out = tokenizer.decode(out[0], skip_special_tokens=True)<|||||>> FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:
>
> ```python
> import torch
> from lavis.models import load_model_and_preprocess
> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
> # loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
> # this also loads the associated image processors
> model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
> # preprocess the image
> # vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
> image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
> # generate caption
> model.generate({"image": image})
> # ['a large fountain spewing water into the air']
> ```
>
> In my case I was getting the following error:
>
> ```
> The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)
> ```
Same here. Can anyone help fix this?<|||||>@zzxslp
Change these line at https://github.com/salesforce/BLIP/blob/main/models/med.py#L932 as following:
from
```python
def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs):
    input_shape = input_ids.shape
    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_shape)
    # cut decoder_input_ids if past is used
    if past is not None:
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past,
        "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None),
        "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None),
        "is_decoder": True,
    }
```
to
```python
def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **model_kwargs):
    input_shape = input_ids.shape
    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = input_ids.new_ones(input_shape)
    # cut decoder_input_ids if past is used
    if past is not None:
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past,
        "encoder_hidden_states": encoder_hidden_states,
        "encoder_attention_mask": encoder_attention_mask,
        "is_decoder": True,
    }
```
Why: https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/generation_utils.py#L899<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still needs to be addressed<|||||>Would you like to take a look at this @ArthurZucker?<|||||>Sure!
<|||||>@ArthurZucker I can look at this issue, but I am stuck setting up a reproduction due to the size of the model; how do you folks set up to reproduce bugs for larger models? <|||||>Well, in that case I just downloaded the model. You can usually initialize a random tiny model using a different configuration.
Could you tell me why you need to set `return_length=True`. This has the effect of adding `length` to the list of inputs, and is not useful when generating. This argument is mostly used when training a fast tokenizer.<|||||>> Well, in that case I just downloaded the model. You can usually initialize a random tiny model using a different configuration. Could you tell me why you need to set `return_length=True`. This has the effect of adding `length` to the list of inputs, and is not useful when generating. This argument is mostly used when training a fast tokenizer.
Yes, in my use case I need the length when generating. Unless you have a really good reason, it shouldn't be removed but fixed.<|||||>The thing is that the `generate` function does not take the `length` argument, and there is no reason to add it as it is not used. Which is why I don't understand why you would use it? 😉 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> The thing is that the `generate` function does not take the `length` argument, and there is no reason to add it as it is not used. Which is why I don't understand why you would use it? 😉
Yes it does, it takes min and max length as arguments. The only other option is to use an older version of huggingface. Why would it work in earlier versions and not now? This is useful for adjusting max_length dynamically such as:
```
max_t=max_length+prompt['length'][0]
if(max_length<min_length):
    max_length=min_length
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, min_length=min_length, max_length=max_t, do_sample=do_sample)
```<|||||>> > FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:
> > ```python
> > import torch
> > from lavis.models import load_model_and_preprocess
> > device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
> > # loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
> > # this also loads the associated image processors
> > model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
> > # preprocess the image
> > # vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
> > image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
> > # generate caption
> > model.generate({"image": image})
> > # ['a large fountain spewing water into the air']
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > In my case I was getting the following error:
> > ```
> > The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)
> > ```
>
> Same here. Can anyone help fix this?
transformers==4.25.1 will be ok?<|||||>No, the `generate` function take a `max_length` argument, and almost always has.
We don't adapt the library to external ones like `lavis`, so this will not be addressed<|||||>I feel like you did not read my reply. I KNOW the generate function uses max_length.<|||||>Oh sorry, I was just responding to the question of whether we will change this or not!
The `max_new_tokens` argument should do what you want, it takes into account the length of the input |
transformers | 19,289 | closed | Call to pipeline.predict() fails | ### System Info
- `transformers` version: 4.21.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Execute the following piece of code resulted in an exception that is pasted below.
```python
from transformers import pipeline
pipe = pipeline("text-classification")
print(pipe.predict(["This restaurant is awesome"]))
```
Exception:
```
Traceback (most recent call last):
File "pipeline_test.py", line 5, in <module>
print(pipe.predict(["This restaurant is awesome"]))
File "miniconda3/envs/mlflow-py3.9/lib/python3.9/site-packages/transformers/pipelines/base.py", line 840, in predict
return self(X=X)
File "miniconda3/envs/mlflow-py3.9/lib/python3.9/site-packages/transformers/pipelines/text_classification.py", line 138, in __call__
result = super().__call__(*args, **kwargs)
TypeError: __call__() missing 1 required positional argument: 'inputs'
```
### Expected behavior
Successful predictions as shown below
```
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
### Proposed fix
I dig a bit deeper into the implementation based on the exception and found out that this [change](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) fixes the issue. If this indeed a fix, I am happy to create a PR. | 10-02-2022 15:08:31 | 10-02-2022 15:08:31 | Hi @s-udhaya ,
The code you're referring to is very old in the codebase and was created for compat with stuff like scikit-learn, over which I have very little knowledge.
The recommended way to call the pipeline is to do;
```python
from transformers import pipeline
pipe = pipeline("text-classification")
print(pipe("This restaurant is awesome"))
```
```python
from transformers import pipeline
def dataset():
    for i in range(1000):
        # Load from somewhere, a dataset, some file etc..
        yield "This restaurant is awesome"

pipe = pipeline("text-classification")
for out in pipe(dataset()):
    print(out)
```
Ofc you can send a list if you want, but it's not going to be used for batching any way (batching is an orthogonal concept in pipelines which you activate by using the parameter `pipeline(..., batch_size=n)`)<|||||>Hi @Narsil ,
Many thanks for the detailed response. Actually I am in the process of building a dedicated [Mlflow flavor for transformers](https://github.com/s-udhaya/mlflow/tree/add-transformers-flavor/mlflow/transformers) and I am heavily using this awesome pipeline abstraction as this drastically reduces the complexity of my implementation.
As you could see [here](https://github.com/s-udhaya/mlflow/blob/add-transformers-flavor/examples/transformers/finetune_trainer_xlm_autolog.py), I use the recommended way to call the pipeline so far. However in order to have a unified interface across all mlflow model flavours, It is essential that the predict function to work. As far as I understand, this [fix](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) should resolve the issue. Would you be kind enough to have a look at it, if you have time or redirect me to someone who can assist me in here?
Thanks again.<|||||>> As far as I understand, this [fix](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) should resolve the issue.
The problem is that because the purpose of this code as since been lost, we would be very hesitant to change it because of our commitment to never make breaking changes.
> However in order to have a unified interface across all mlflow model flavours, It is essential that the predict function to work.
If that is the core of the issue (which sounds odd) can't you just wrap the pipeline in another dummy object that just uses `predict` to use `__call__` ?
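A minimal sketch of such a wrapper (illustrative only, not part of the original discussion):
```python
from transformers import pipeline

class PredictablePipeline:
    def __init__(self, *args, **kwargs):
        self._pipe = pipeline(*args, **kwargs)

    def predict(self, inputs):
        # forward to __call__ so the object exposes an sklearn/mlflow-style API
        return self._pipe(inputs)

wrapped = PredictablePipeline("text-classification")
print(wrapped.predict(["This restaurant is awesome"]))
```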
You can ofc open a PR with your proposed change and motivations. We will study it, and let others chime in if there is a good reason for the current code.<|||||>Hi @Narsil ,
> If that is the core of the issue (which sounds odd) can't you just wrap the pipeline in another dummy object that just uses `predict` to use `__call__` ?
This is exactly what happens in Pipeline's predict method. I would rather fix the issue at the source than create a workaround.
> The problem is that because the purpose of this code as since been lost, we would be very hesitant to change it because of our commitment to never make breaking changes.
I understand your concern regarding breaking changes. however I believe, this change is not going to be a breaking change, and the predict function is not usable in its current state.
I will go ahead and create a PR. Prior to that I will run all the tests to make sure, the fix does not introduce any breaking changes. Let us see how it rolls :)
Thanks for the response.<|||||>> I will go ahead and create a PR. Prior to that I will run all the tests to make sure, the fix does not introduce any breaking changes. Let us see how it rolls :)
Thanks ! Again as I said, this is very old code maybe it broke a long time ago already and your PR is good (and we probably need to add a test to make sure we don't break again).<|||||>@Narsil Thanks for your input. I have created a PR with the fix and added few basic tests to make sure we don't break again. Please have a look at it whenever you have time and feel free to propose changes if necessary. |
transformers | 19,288 | closed | docker-build: Update actions/checkout to v3 | For hacktoberfest 2022 :) | 10-02-2022 11:43:03 | 10-02-2022 11:43:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR @Sushrut1101! I'll ping Yih-Dar @ydshieh for review but he's off for the week; he'll review when he's back! Thanks :)<|||||>> Thanks for your PR @Sushrut1101! I'll ping Yih-Dar @ydshieh for review but he's off for the week; he'll review when he's back! Thanks :)
> Actually checked and it's fine for me; thanks for your PR @Sushrut1101!
You're welcome! 👍😃 |
transformers | 19,287 | closed | fix marianMT convertion to onnx | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19283
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-02-2022 11:14:55 | 10-02-2022 11:14:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Can you please confirm that the slow tests pass with your change on convert.py, ie run this
6 passed, 426 deselected, 35 warnings |
transformers | 19,286 | closed | make small fixes in logging | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix some typing errors in logging file
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-02-2022 10:15:13 | 10-02-2022 10:15:13 | > Hello! What problem does that solve? :)
Fix style of imports and froze constant dict to save it unmodified and preserve from errors.<|||||>@LysandreJik, can you help me please with imports problems in tests?<|||||>I am not sure we want to accept this PR, unfortunately I do not understand what problem it fixes. It seems to me that this already works well, do you have an example of where something would fail to do its intended purpose? Thanks!<|||||>> I am not sure we want to accept this PR, unfortunately I do not understand what problem it fixes. It seems to me that this already works well, do you have an example of where something would fail to do its intended purpose? Thanks!
Frozendict is a useful Python coding constraint that disallows changing a dict, so it can help to avoid bugs in the future.
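A tiny illustration of the general idea (assuming `types.MappingProxyType` as the mechanism; this is not the code from the PR):
```python
from types import MappingProxyType

LOG_LEVELS = MappingProxyType({"debug": 10, "info": 20})
LOG_LEVELS["debug"] = 0  # raises TypeError: 'mappingproxy' object does not support item assignment
```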
And some minor code style changes to rid from NOQA<|||||>@LysandreJik, ping<|||||>Thanks for your PR @kventinel, but I don't think this fixes any real issue so we will not be merging this PR as it is. Thanks. |
transformers | 19,285 | closed | A developer environment, for faster and efficient contributions. | ### Feature request
Hello maintainers.
I would like to add gitpod to your repo to help beginners contribute.
- Gitpod is an online IDE which can be launched from any GitHub page. Within seconds, Gitpod provides a fully working development environment, including a VS Code-powered IDE and a cloud-based Linux container explicitly configured for the project
- Gitpod is highly contextual, such that it opens the IDE in the correct mode depending on the context:
- If you are looking at a particular file of a certain commit on GitHub, starting a Gitpod workspace will check out the right version and open the file you’ve been looking at in the IDE.
- Starting a Gitpod workspace from an issue will automatically create a branch and preconfigure the commit message.
- Once you are in the IDE, you can interact with GitHub in various ways. Besides the obvious Git integration, you can do things like commenting inline in editors, approving and even merging PRs.
### Motivation
I have seen many repos adopting gitpod to help new contributors in their open-source journey. It's hassle-free. You don't need to worry about dependencies not being present, since gitpod does all the heavy lifting.
### Your contribution
I can add a badge to your repo in the [CONTRIBUTING.md](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) file similar to this one:
<a href="https://gitpod.io/#<your-repository-url>">
<img
src="https://img.shields.io/badge/Contribute%20with-Gitpod-908a85?logo=gitpod"
alt="Contribute with Gitpod"
/>
</a> | 10-02-2022 07:46:34 | 10-02-2022 07:46:34 | Hey @AnirudhDaya, what difference is there with https://github.dev/huggingface/transformers ?
Thank you!<|||||>Sorry my bad. It didn't catch my eye.<|||||>No worries :) |
transformers | 19,284 | closed | Added type hints for TF: rag model | Based on Issue https://github.com/huggingface/transformers/issues/16059
Type hints for the [TFRagModel](https://huggingface.co/docs/transformers/model_doc/rag) have been added.
@Rocketknight1 Could you please check the changes and merge if it's fine?
Thanks a lot. | 10-02-2022 04:20:44 | 10-02-2022 04:20:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 can you take a look at this? I just added type hints and fixed a syntax error in one previous type. Still it shows it failed these 2 tests. I have no idea what these tests do, can you shed some light on this issue?<|||||>Thanks a lot @Rocketknight1, I have made the suggested changes!<|||||>Looks perfect now, thank you! |
transformers | 19,283 | closed | python -m transformers.onnx --model=Helsinki-NLP/opus-mt-en-zh onnx/ | ### System Info
transformers:4.22.2
python3.8.4
win10
raise ValueError(
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of:
2.8133392333984375e-05
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m transformers.onnx --model=Helsinki-NLP/opus-mt-en-zh onnx/
### Expected behavior
Export onnx and translate through onnx
https://www.kaggle.com/code/catchlife/translate-opt
Custom export has incorrect translation results | 10-02-2022 03:43:36 | 10-02-2022 03:43:36 | Also reproduced on mac<|||||>Fast fix: run with `--atol 1e-4`.<|||||>The --atol 1e-4 method can be run, but the running result seems incorrect<|||||>I think it's not a big difference for such big NN. In ouputs you have values much greater than 1e-4, so problem probably in some other place.<|||||>After exporting, the data dimension is incorrect. Can you give me a code? Thank you very much<|||||>> Can you give me a code?
All code is here:)
Why do you think, that problem in dimensions? They realy checked [here](https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/convert.py#L430)
<|||||>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
model_ckpt = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)
# Export model
feature = "seq2seq-lm"
onnx_path = f"onnx/{model_ckpt}-{feature}/"
# Run this from a Jupyter notebook
!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}
# Test export with inputs
batch_size = 4
encoder_inputs = tokenizer(
["Studies have been shown that owning a dog is good for you"] * batch_size,
return_tensors="np",
)
decoder_inputs = tokenizer(
["Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen"]
* batch_size,
return_tensors="np",
)
all_inputs = {
"input_ids": encoder_inputs["input_ids"],
"attention_mask": encoder_inputs["attention_mask"],
"decoder_input_ids": decoder_inputs["input_ids"],
"decoder_attention_mask": decoder_inputs["attention_mask"],
}
# Generate ONNX outputs
ort_session = ort.InferenceSession(f"{onnx_path}model.onnx")
onnx_config = MarianOnnxConfig(ref_model.config, task=feature)
onnx_named_outputs = list(onnx_config.outputs.keys())
onnx_outputs = ort_session.run(onnx_named_outputs, all_inputs)<|||||>How to get results<|||||>So... And what your problem here?<|||||>How to get text<|||||>The code is not exactly the same, but the problem is the same. The other is Marian<|||||>The result is a four-dimensional tensor, but no matter how it is processed, it cannot be decoded to get the correct translation, so I think the result is incorrect<|||||>4 dims = number_of_outputs x batch_size x n_words x embedding<|||||>The dimensions can still be solved, but the decoder input is incredible, it is obvious that this code knows the result in advance, it is impossible for me to predict the result in advance when I translate<|||||>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from onnxruntime import InferenceSession
tokenizer=AutoTokenizer.from_pretrained("opus-mt-en-zh")
session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx")
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))<|||||>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xieyouxi/anaconda3/envs/HuggingFace-torch-gpu/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 196, in run
raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs))
ValueError: Model requires 4 inputs. Input Feed contains 2<|||||>https://github.com/huggingface/transformers/issues/18518<|||||>@CatchDr You did not provide decoder inputs, hence the error message. Have you tried to do what is suggested in #18518?<|||||>>
He and I have the same problem, and that issue did not solve it either<|||||>> >
>
> He and I have the same problem, and that issue did not solve it either
The code snippet you shared and that fails does not do what is suggested in #18518. Could you try the following?
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from onnxruntime import InferenceSession
tokenizer=AutoTokenizer.from_pretrained("opus-mt-en-zh")
session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx")
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt")
inputs["decoder_input_ids"] = torch.tensor([0], dtype=torch.long)
inputs["decoder_attention_mask"] = torch.tensor([1], dtype=torch.long)
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```<|||||>Please wait, about 10 minutes.<|||||>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
import onnxruntime as ort
model_ckpt = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)
# Export model
feature = "seq2seq-lm"
onnx_path = f"onnx/{model_ckpt}-{feature}/"
# Run this from a Jupyter notebook
!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from onnxruntime import InferenceSession
tokenizer=AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
session = InferenceSession("onnx/Helsinki-NLP/opus-mt-en-zh/model.onnx")
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt")
inputs["decoder_input_ids"] = torch.tensor([0], dtype=torch.long)
inputs["decoder_attention_mask"] = torch.tensor([1], dtype=torch.long)
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
outputs

Very sorry, as a rookie I have tried many times to describe this clearly. Here is all the code, please run it.<|||||>
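The snippets above stop after a single forward pass, so at best they return one step of raw scores rather than a translation. Below is a rough sketch of the greedy decoding loop that turns the exported model into text. It assumes the model was exported with the `seq2seq-lm` feature (so the session exposes a `logits` output), that decoding starts from the pad token as is conventional for Marian models, and that the export lives at `onnx/model.onnx`; all three are assumptions to adapt, not the thread's official answer.

```python
import numpy as np
from transformers import AutoTokenizer
from onnxruntime import InferenceSession

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
session = InferenceSession("onnx/model.onnx")  # assumed path to the seq2seq-lm export

enc = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
# Assumption: Marian-style models start decoding from the pad token.
decoder_ids = np.array([[tokenizer.pad_token_id]], dtype=np.int64)

for _ in range(128):  # hard cap on the generated length
    logits = session.run(
        ["logits"],  # assumed output name of the seq2seq-lm export
        {
            "input_ids": enc["input_ids"].astype(np.int64),
            "attention_mask": enc["attention_mask"].astype(np.int64),
            "decoder_input_ids": decoder_ids,
            "decoder_attention_mask": np.ones_like(decoder_ids),
        },
    )[0]
    next_id = int(logits[0, -1].argmax())  # greedy choice for the newest position
    decoder_ids = np.concatenate([decoder_ids, np.array([[next_id]], dtype=np.int64)], axis=1)
    if next_id == tokenizer.eos_token_id:
        break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```

The Optimum route suggested later in this thread performs exactly this loop (plus beam search and caching) for you, so it is usually the better choice.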
I tried to make some changes, but the dimensions seem to be incorrect again<|||||>@CatchDr The result you get is correct. Some post-processing is necessary to generate the whole sentence. If you just want to convert your model to the ONNX format and translate sentences, I suggest you to use Optimum. It will do all the generation work for you. For example:
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM
model = ORTModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-zh", from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-zh")
onnx_translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer)
result = onnx_translation("Using DistilBERT with ONNX Runtime!")
```<|||||>I would like to know how to post-process to get the right result, because I have tried this solution you mentioned and the result is very poor<|||||>text="Vehicle detection technology is of great significance for realizing automatic monitoring and AI-assisted driving systems. The state-of-the-art object detection method, namely, a class of YOLOv5, has often been used to detect vehicles. However, it suffers some challenges, such as a high computational load and undesirable detection rate. To address these issues, an improved lightweight YOLOv5 method is proposed for vehicle detection in this paper. In the presented method, C3Ghost and Ghost modules are introduced into the YOLOv5 neck network to reduce the floating-point operations (FLOPs) in the feature channel fusion process and enhance the feature expression performance. A convolutional block attention module (CBAM) is introduced to the YOLOv5 backbone network to select the information critical to the vehicle detection task and suppress uncritical information, thus improving the detection accuracy of the algorithm. Furthermore, CIoU_Loss is considered the bounding box regression loss function to accelerate the bounding box regression rate and improve the localization accuracy of the algorithm. To verify the performance of the proposed approach, we tested our model via two case studies, i.e., the PASCAL VOC dataset and MS COCO dataset. The results show that the detection precision of the proposed model increased 3.2%, the FLOPs decreased 15.24%, and the number of model parameters decreased 19.37% compared with those of the existing YOLOv5. Through case studies and comparisons, the effectiveness and superiority of the presented approach are demonstrated."
You can try to translate this text for comparison, the result is very poor<|||||>@CatchDr You can take a look at [this example](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForCausalLM.forward.example) and change the arguments of the `generate` method if you want to decode your outputs in a different way (see [here](https://huggingface.co/blog/how-to-generate) for the possible decoding strategies). But maybe this model is simply not good enough for what you are trying to achieve. <|||||>outputs = session.run(output_names=["logits"], input_feed=dict(inputs))
There should be some errors here<|||||>I've tried every decoding method I can think of and can't get the results I want, expecting something completely different, so I'm asking for help here |
transformers | 19,282 | closed | Fix the error message in run_t5_mlm_flax.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-02-2022 03:12:43 | 10-02-2022 03:12:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,281 | closed | ci(stale.yml): upgrade actions/setup-python to v4 |
# What does this PR do?
Update actions/setup-python to v4 in stale.yml
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
**No but I can create one and reference it here if it's necessary.**
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
**I understand no documentation changes are necessary for this change.**
- [x] Did you write any new necessary tests?
**I understand no tests are necessary for this change.**
## Who can review?
🤷🏽, no human committer checking Git Blame | 10-01-2022 17:11:43 | 10-01-2022 17:11:43 | |
transformers | 19,280 | closed | ci(stale.yml): update actions/checkout to v3 |
# What does this PR do?
Update actions/checkout to v3 in stale.yml
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
**No but I can create one and reference it here if it's necessary.**
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
**I understand no documentation changes are necessary for this change.**
- [x] Did you write any new necessary tests?
**I understand no tests are necessary for this change.**
## Who can review?
🤷🏽, no human committer checking Git Blame | 10-01-2022 17:08:42 | 10-01-2022 17:08:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,279 | closed | Wrap Deit integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in Deit integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-01-2022 14:59:29 | 10-01-2022 14:59:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,278 | closed | Wrap DebertaV2 integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in DeberatV2 integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :-) | 10-01-2022 14:49:20 | 10-01-2022 14:49:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,277 | closed | Update no_trainer script for summarization | # What does this PR do?
Update `run_summarization_no_trainer.py` in the example to include `accelerator.gather_metrics`
Fixes a part of #18437
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr , @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-01-2022 14:30:58 | 10-01-2022 14:30:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot for your PR! You also need to remove the initialization of `samples_seen` since that variable is not used anymore. This will fix the quality check.
Done; thanks for pointing it out. |
transformers | 19,276 | closed | M2M100 generates repetitive text as <eos> token not produced while decoding | ### System Info
- `transformers` version: 4.11.3
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (one V100 GPU)
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi. I am using M2M100_418M to translate Spanish and French samples to English. However, I observe that for some samples output contains repetitive text.
While debugging, I observe that `<eos>` token is not being produced for these samples early on as expected and same token gets predicted. The model totally ignores the beginning or ending of the input sentence as well in a few scenarios.
Note: max char limit for any input is less than 256 characters.
Code:
```
def translate_m2m(text, src_lang, tgt_lg):
tokenizer.src_lang = src_lang
encoded_src = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
generated_tokens = model.generate(**encoded_src, forced_bos_token_id=tokenizer.get_lang_id(tgt_lg))
translated = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
return translated[0]
# load model and tokenizer from checkpoints available on HF
model = M2M100ForConditionalGeneration.from_pretrained(m2m_hf_ckpt)
tokenizer = M2M100Tokenizer.from_pretrained(m2m_hf_ckpt)
model = model.to(device)
```
Adding all such samples with their outputs below:
| Source Language | Target Language | Input | Output |
| ----------- | ----------- |----------- | ----------- |
| es | en | no ya ya tengo ya tengo nueva contraseña ya la ya ya tengo nueva contraseña pero no no entrar mire lo que pasa es que me sale donde voy a voy a decir como me sale ya ya con nueva contraseña mire pongo mi contraseña ingreso con mi y | I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password I have a new password |
| es | en |sea en esta la tarjeta de doble click doble click aparecer confirmar tarjeta | Double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click double-click |
| es | en | que lopues ya yo recibími servicio ya nosotros lo libra vamos o si no es posible pues usted | I’m going to get you, I’m going to get you, I’m going to get you, I’m going to get you, I’m going to get you, I’m going to get you, I’m going to get you. |
| fr | en | non ca va ca des choses qui mais je me rappelle moi j'avais pas que mon compte assez on donc c'est vous a pas de problème par rapport | I don't know what I'm doing, but I don't know what I'm doing, I don't know what I'm doing, I don't know what I'm doing |
| en | es | hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there hey there | Hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay hay |
**Additional Information:**
- issue goes away while randomly removing any words or even adding spaces etc. but there is no consistent pattern or token where always holds true, hence, making it difficult to identify the root cause of this problem.
- same issue occurs when no. of beams = [1,2,3,4,5]
- can be reproduced for different languages (not only true for the ones listed above)
### Expected behavior
I expect M2M100 model to not produce repetitive text and that too randomly on any token for almost all major languages. This is a major problem as there no pattern or token that can be assumed to be causing this. | 10-01-2022 12:25:33 | 10-01-2022 12:25:33 | Hello Reyha (My answer doesn’t represent Hugging Face)
I think there is no bug here, it’s just the output of the raw model. I think you may have to do some post-processing, or you should implement some frequency penalties for each generated token.
However, I recommend looking at this article https://huggingface.co/blog/how-to-generate by Patrick von Platen, in the conclusion he states:
> "As ad-hoc decoding methods, top-p and top-K sampling seem to produce more fluent text than traditional greedy - and beam search on open-ended language generation. Recently, there has been more evidence though that the apparent flaws of greedy and beam search - mainly generating repetitive word sequences - are caused by the model (especially the way the model is trained), rather than the decoding method, cf. [Welleck et al. (2019)](https://arxiv.org/pdf/1908.04319.pdf). Also, as demonstrated in [Welleck et al. (2020)](https://arxiv.org/abs/2002.02492), it looks as top-K and top-p sampling also suffer from generating repetitive word sequences."<|||||>Thanks a lot for the in-detail answer @Mustapha-AJEGHRIR - can only agree :-) <|||||> @Mustapha-AJEGHRIR @patrickvonplaten - Thanks for confirming, really appreciate it. |
transformers | 19,275 | closed | Fix `ViTMSNForImageClassification` doctest | # What does this PR do?
`>>> torch.manual_seed` requires `# doctest: +IGNORE_RESULT`, otherwise we get
```
Expected nothing
Got:
<torch._C.Generator object at 0x7fe914fdbcd0>
```
I missed this detail when reviewing #19183
Confirmed it works now (on GCP CPU VM) | 10-01-2022 07:45:45 | 10-01-2022 07:45:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,274 | closed | Wrap ConvBert integration test forward passes with torch.no_grad() | # What does this PR do?
This PR wraps forward passes in ConvBert integration tests with `torch.no_grad()`, as proposed in issue #14642. This avoids the computation of unnecessary gradients during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please check it?
Thanks :) | 10-01-2022 07:15:06 | 10-01-2022 07:15:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,273 | closed | Wrap BigBird integration test forward passes with torch.no_grad() | # What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in BigBird integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :) | 10-01-2022 06:16:54 | 10-01-2022 06:16:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,272 | closed | Autotokenizer/ LED/BARTTokenizer won't cast to CUDA | ### System Info
Hello,
I'm running into the following issue when trying to run an LED model. It appears that the tokenizer won't cast into CUDA. I've seen this work in the past, but apparently something has gone amiss. I'm not entirely sure why this behavior is being exhibited.
`from transformers import (
AutoTokenizer,
LEDForConditionalGeneration,
)
import torch
import pickle
device = "cuda:0" if torch.cuda.is_available() else "cpu"
primer_path = "allenai/PRIMERA-multixscience"
tokenizer = AutoTokenizer.from_pretrained(primer_path, return_tensors="pt").to(device)
model = LEDForConditionalGeneration.from_pretrained(primer_path).to(device)
model.gradient_checkpointing_enable()
PAD_TOKEN_ID = tokenizer.pad_token_id
DOCSEP_TOKEN_ID = tokenizer.convert_tokens_to_ids("<doc-sep>")`
`2022-09-30 05:10:51 : Traceback (most recent call last):
2022-09-30 05:10:51 : File "/summarization.py", line 17, in <module>
2022-09-30 05:10:51 : tokenizer = AutoTokenizer.from_pretrained(primer_path, return_tensors="pt").to(device)
2022-09-30 05:10:51 : AttributeError: 'LEDTokenizerFast' object has no attribute 'to'`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Simply try to import PRIMERA and use a tokenizer with CUDA
### Expected behavior
I would expect the tokenizer to cast onto the GPU, not give an exception that to is not an attribute. | 09-30-2022 22:42:00 | 09-30-2022 22:42:00 | Hello @M-Chimiste Please assign me and btw I'm new to the contribution so guide accordingly
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,271 | closed | Breakup export guide | This PR breaks up `serialization.mdx` into separate ONNX and TorchScript docs. Currently, this doc is quite long which makes it difficult for users to quickly find what they're looking for and it also discusses two different topics. I think separating these topics into their own docs makes it easier for users to focus on just one thing at a time, and it's also easier to skim (I also included some light edits for clarity).
Let me know what you think! :) | 09-30-2022 22:14:58 | 09-30-2022 22:14:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,270 | closed | Bump joblib from 1.1.0 to 1.2.0 in /examples/research_projects/decision_transformer | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [joblib](https://github.com/joblib/joblib) from 1.1.0 to 1.2.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/joblib/joblib/blob/master/CHANGES.rst">joblib's changelog</a>.</em></p>
<blockquote>
<h2>Release 1.2.0</h2>
<ul>
<li>
<p>Fix a security issue where <code>eval(pre_dispatch)</code> could potentially run
arbitrary code. Now only basic numerics are supported.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1327">joblib/joblib#1327</a></p>
</li>
<li>
<p>Make sure that joblib works even when multiprocessing is not available,
for instance with Pyodide
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1256">joblib/joblib#1256</a></p>
</li>
<li>
<p>Avoid unnecessary warnings when workers and main process delete
the temporary memmap folder contents concurrently.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1263">joblib/joblib#1263</a></p>
</li>
<li>
<p>Fix memory alignment bug for pickles containing numpy arrays.
This is especially important when loading the pickle with
<code>mmap_mode != None</code> as the resulting <code>numpy.memmap</code> object
would not be able to correct the misalignment without performing
a memory copy.
This bug would cause invalid computation and segmentation faults
with native code that would directly access the underlying data
buffer of a numpy array, for instance C/C++/Cython code compiled
with older GCC versions or some old OpenBLAS written in platform
specific assembly.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1254">joblib/joblib#1254</a></p>
</li>
<li>
<p>Vendor cloudpickle 2.2.0 which adds support for PyPy 3.8+.</p>
</li>
<li>
<p>Vendor loky 3.3.0 which fixes several bugs including:</p>
<ul>
<li>
<p>robustly forcibly terminating worker processes in case of a crash
(<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1269">joblib/joblib#1269</a>);</p>
</li>
<li>
<p>avoiding leaking worker processes in case of nested loky parallel
calls;</p>
</li>
<li>
<p>reliability spawn the correct number of reusable workers.</p>
</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/joblib/joblib/commit/5991350e03493fbf27bb596429a935e0c40fb536"><code>5991350</code></a> Release 1.2.0</li>
<li><a href="https://github.com/joblib/joblib/commit/3fa218887770467695573e37e1c7179fd1b5065d"><code>3fa2188</code></a> MAINT cleanup numpy warnings related to np.matrix in tests (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1340">#1340</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/cea26ff2080dc4e9b51957e57994f48351086193"><code>cea26ff</code></a> CI test the future loky-3.3.0 branch (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1338">#1338</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/8aca6f4fc29c36e011201bbfe2da227b58da55e3"><code>8aca6f4</code></a> MAINT: remove pytest.warns(None) warnings in pytest 7 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1264">#1264</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/067ed4f7cc88aef0f4160d6ef7155d40767fee08"><code>067ed4f</code></a> XFAIL test_child_raises_parent_exits_cleanly with multiprocessing (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1339">#1339</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac4ebd540840f92f2c12f47ad001b555d2bb1ce2"><code>ac4ebd5</code></a> MAINT add back pytest warnings plugin (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1337">#1337</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/a23427d1700e32d4fc5d49c16d72e3f3c24f65f9"><code>a23427d</code></a> Test child raises parent exits cleanly more reliable on macos (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1335">#1335</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac0969194aea9c9282a7532cfcda9746bc3b379b"><code>ac09691</code></a> [MAINT] various test updates (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1334">#1334</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/4a314b152fe0b71b53b6092ed67be528ec81392e"><code>4a314b1</code></a> Vendor loky 3.2.0 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1333">#1333</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/bdf47e95c7204499397f0cd9ef6b3198c71976ce"><code>bdf47e9</code></a> Make test_parallel_with_interactively_defined_functions_default_backend timeo...</li>
<li>Additional commits viewable in <a href="https://github.com/joblib/joblib/compare/1.1.0...1.2.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-30-2022 20:24:38 | 09-30-2022 20:24:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,269 | closed | Bump joblib from 0.16.0 to 1.2.0 in /examples/research_projects/visual_bert | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [joblib](https://github.com/joblib/joblib) from 0.16.0 to 1.2.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/joblib/joblib/blob/master/CHANGES.rst">joblib's changelog</a>.</em></p>
<blockquote>
<h2>Release 1.2.0</h2>
<ul>
<li>
<p>Fix a security issue where <code>eval(pre_dispatch)</code> could potentially run
arbitrary code. Now only basic numerics are supported.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1327">joblib/joblib#1327</a></p>
</li>
<li>
<p>Make sure that joblib works even when multiprocessing is not available,
for instance with Pyodide
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1256">joblib/joblib#1256</a></p>
</li>
<li>
<p>Avoid unnecessary warnings when workers and main process delete
the temporary memmap folder contents concurrently.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1263">joblib/joblib#1263</a></p>
</li>
<li>
<p>Fix memory alignment bug for pickles containing numpy arrays.
This is especially important when loading the pickle with
<code>mmap_mode != None</code> as the resulting <code>numpy.memmap</code> object
would not be able to correct the misalignment without performing
a memory copy.
This bug would cause invalid computation and segmentation faults
with native code that would directly access the underlying data
buffer of a numpy array, for instance C/C++/Cython code compiled
with older GCC versions or some old OpenBLAS written in platform
specific assembly.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1254">joblib/joblib#1254</a></p>
</li>
<li>
<p>Vendor cloudpickle 2.2.0 which adds support for PyPy 3.8+.</p>
</li>
<li>
<p>Vendor loky 3.3.0 which fixes several bugs including:</p>
<ul>
<li>
<p>robustly forcibly terminating worker processes in case of a crash
(<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1269">joblib/joblib#1269</a>);</p>
</li>
<li>
<p>avoiding leaking worker processes in case of nested loky parallel
calls;</p>
</li>
<li>
<p>reliability spawn the correct number of reusable workers.</p>
</li>
</ul>
</li>
</ul>
<h2>Release 1.1.0</h2>
<ul>
<li>
<p>Fix byte order inconsistency issue during deserialization using joblib.load
in cross-endian environment: the numpy arrays are now always loaded to
use the system byte order, independently of the byte order of the system
that serialized the pickle.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1181">joblib/joblib#1181</a></p>
</li>
<li>
<p>Fix joblib.Memory bug with the <code>ignore</code> parameter when the cached function
is a decorated function.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/joblib/joblib/commit/5991350e03493fbf27bb596429a935e0c40fb536"><code>5991350</code></a> Release 1.2.0</li>
<li><a href="https://github.com/joblib/joblib/commit/3fa218887770467695573e37e1c7179fd1b5065d"><code>3fa2188</code></a> MAINT cleanup numpy warnings related to np.matrix in tests (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1340">#1340</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/cea26ff2080dc4e9b51957e57994f48351086193"><code>cea26ff</code></a> CI test the future loky-3.3.0 branch (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1338">#1338</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/8aca6f4fc29c36e011201bbfe2da227b58da55e3"><code>8aca6f4</code></a> MAINT: remove pytest.warns(None) warnings in pytest 7 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1264">#1264</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/067ed4f7cc88aef0f4160d6ef7155d40767fee08"><code>067ed4f</code></a> XFAIL test_child_raises_parent_exits_cleanly with multiprocessing (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1339">#1339</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac4ebd540840f92f2c12f47ad001b555d2bb1ce2"><code>ac4ebd5</code></a> MAINT add back pytest warnings plugin (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1337">#1337</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/a23427d1700e32d4fc5d49c16d72e3f3c24f65f9"><code>a23427d</code></a> Test child raises parent exits cleanly more reliable on macos (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1335">#1335</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac0969194aea9c9282a7532cfcda9746bc3b379b"><code>ac09691</code></a> [MAINT] various test updates (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1334">#1334</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/4a314b152fe0b71b53b6092ed67be528ec81392e"><code>4a314b1</code></a> Vendor loky 3.2.0 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1333">#1333</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/bdf47e95c7204499397f0cd9ef6b3198c71976ce"><code>bdf47e9</code></a> Make test_parallel_with_interactively_defined_functions_default_backend timeo...</li>
<li>Additional commits viewable in <a href="https://github.com/joblib/joblib/compare/0.16.0...1.2.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-30-2022 19:50:43 | 09-30-2022 19:50:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,268 | closed | Bump joblib from 0.16.0 to 1.2.0 in /examples/research_projects/lxmert | Bumps [joblib](https://github.com/joblib/joblib) from 0.16.0 to 1.2.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/joblib/joblib/blob/master/CHANGES.rst">joblib's changelog</a>.</em></p>
<blockquote>
<h2>Release 1.2.0</h2>
<ul>
<li>
<p>Fix a security issue where <code>eval(pre_dispatch)</code> could potentially run
arbitrary code. Now only basic numerics are supported.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1327">joblib/joblib#1327</a></p>
</li>
<li>
<p>Make sure that joblib works even when multiprocessing is not available,
for instance with Pyodide
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1256">joblib/joblib#1256</a></p>
</li>
<li>
<p>Avoid unnecessary warnings when workers and main process delete
the temporary memmap folder contents concurrently.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1263">joblib/joblib#1263</a></p>
</li>
<li>
<p>Fix memory alignment bug for pickles containing numpy arrays.
This is especially important when loading the pickle with
<code>mmap_mode != None</code> as the resulting <code>numpy.memmap</code> object
would not be able to correct the misalignment without performing
a memory copy.
This bug would cause invalid computation and segmentation faults
with native code that would directly access the underlying data
buffer of a numpy array, for instance C/C++/Cython code compiled
with older GCC versions or some old OpenBLAS written in platform
specific assembly.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1254">joblib/joblib#1254</a></p>
</li>
<li>
<p>Vendor cloudpickle 2.2.0 which adds support for PyPy 3.8+.</p>
</li>
<li>
<p>Vendor loky 3.3.0 which fixes several bugs including:</p>
<ul>
<li>
<p>robustly forcibly terminating worker processes in case of a crash
(<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1269">joblib/joblib#1269</a>);</p>
</li>
<li>
<p>avoiding leaking worker processes in case of nested loky parallel
calls;</p>
</li>
<li>
<p>reliability spawn the correct number of reusable workers.</p>
</li>
</ul>
</li>
</ul>
<h2>Release 1.1.0</h2>
<ul>
<li>
<p>Fix byte order inconsistency issue during deserialization using joblib.load
in cross-endian environment: the numpy arrays are now always loaded to
use the system byte order, independently of the byte order of the system
that serialized the pickle.
<a href="https://github-redirect.dependabot.com/joblib/joblib/pull/1181">joblib/joblib#1181</a></p>
</li>
<li>
<p>Fix joblib.Memory bug with the <code>ignore</code> parameter when the cached function
is a decorated function.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/joblib/joblib/commit/5991350e03493fbf27bb596429a935e0c40fb536"><code>5991350</code></a> Release 1.2.0</li>
<li><a href="https://github.com/joblib/joblib/commit/3fa218887770467695573e37e1c7179fd1b5065d"><code>3fa2188</code></a> MAINT cleanup numpy warnings related to np.matrix in tests (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1340">#1340</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/cea26ff2080dc4e9b51957e57994f48351086193"><code>cea26ff</code></a> CI test the future loky-3.3.0 branch (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1338">#1338</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/8aca6f4fc29c36e011201bbfe2da227b58da55e3"><code>8aca6f4</code></a> MAINT: remove pytest.warns(None) warnings in pytest 7 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1264">#1264</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/067ed4f7cc88aef0f4160d6ef7155d40767fee08"><code>067ed4f</code></a> XFAIL test_child_raises_parent_exits_cleanly with multiprocessing (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1339">#1339</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac4ebd540840f92f2c12f47ad001b555d2bb1ce2"><code>ac4ebd5</code></a> MAINT add back pytest warnings plugin (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1337">#1337</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/a23427d1700e32d4fc5d49c16d72e3f3c24f65f9"><code>a23427d</code></a> Test child raises parent exits cleanly more reliable on macos (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1335">#1335</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/ac0969194aea9c9282a7532cfcda9746bc3b379b"><code>ac09691</code></a> [MAINT] various test updates (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1334">#1334</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/4a314b152fe0b71b53b6092ed67be528ec81392e"><code>4a314b1</code></a> Vendor loky 3.2.0 (<a href="https://github-redirect.dependabot.com/joblib/joblib/issues/1333">#1333</a>)</li>
<li><a href="https://github.com/joblib/joblib/commit/bdf47e95c7204499397f0cd9ef6b3198c71976ce"><code>bdf47e9</code></a> Make test_parallel_with_interactively_defined_functions_default_backend timeo...</li>
<li>Additional commits viewable in <a href="https://github.com/joblib/joblib/compare/0.16.0...1.2.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-30-2022 19:25:55 | 09-30-2022 19:25:55 | |
transformers | 19,267 | closed | Fix formatting in DataCollator docs | Noticed the `x_mask_tokens` functions in the data collator docs look a bit strange with the numbered bullets so this PR removes them:
 | 09-30-2022 19:08:02 | 09-30-2022 19:08:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19267). All of your documentation changes will be reflected on that endpoint.<|||||>Hmm looks like the bullet points are still there even though we didn't include any in Markdown. I wonder if maybe the `doc-builder` automatically adds them before any list elements? cc @mishig25 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,266 | closed | M2M100 training does not improve model's performance | ### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Linux-5.4.0-1072-aws-x86_64-with-glibc2.27
- Python version: 3.8.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hello,
I’m trying to fine-tune M2M100 using the example script run_translation.py and it seems that the model is not improving.
I am using the following command:
```
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path facebook/m2m100_418M \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--output_dir output_dir --overwrite_output_dir \
--fp16 \
--do_train --do_eval --do_predict \
--max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \
--num_train_epochs 0.001 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--predict_with_generate --forced_bos_token ro
```
Just to give you an example, if I train for 1 epoch I can get 20 BLEU points in the test set, but if I train for 3 epochs I get around 10 BLEU points.
Am I doing anything wrong? Does M2M100 requires any specific hyperparameter/hyperparameter configuration?
Thanks
### Expected behavior
I was expecting the model performance to improve for this specific language-pair (en-ro) | 09-30-2022 17:14:36 | 09-30-2022 17:14:36 | @patil-suraj could you help me with this problem?<|||||>Hi @joao-alves97, you should really ask those questions on the [forums](https://discuss.huggingface.co/) where the whole community will be able to help. We keep GitHub issues for bugs (with a reproducer) or feature request only :-)<|||||>I already did it @sgugger and no one answered me: https://discuss.huggingface.co/t/m2m100-training-does-not-improve-model-performance/23787 . And it seems to be a bug in my opinion. It is not normal that the loss continues decreasing but the performance does not improve at all. Something must be wrong in the M2M100 implementation. I did an exhaustive hyperparameter search and it did not solve the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,265 | closed | Position_ids misuses self.padding_idx | ### System Info
issue is platform-independent
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1ZuL1pc0fMBOnJahF18xArcFihfAmLwDo?usp=sharing
### Expected behavior
I'm self-assigning this and will file a PR - I'm just creating this issue so there's a record of it and I don't lose track! The issue is that the `position_ids` in several models (`RoBERTa` and any models that copied code from it) misuse `self.padding_idx` in at least two ways. Firstly, `self.padding_idx` is passed as the padding_idx to self.position_ids. This means that the position embedding in that position is never updated, even though there is no such thing as a padding token in position embeddings.
Secondly, `create_position_ids_from_inputs_embeds`, which was also written for `RoBERTa` before being copied to several other models initializes the position IDs starting from `self.padding_idx`, which does not make sense. This can cause a crash (see Colab above) when the model's padding_idx is not a low integer value, but even when it doesn't crash the model it creates incorrect position IDs whenever the user passes input embeds instead of input IDs. | 09-30-2022 16:50:05 | 09-30-2022 16:50:05 | Update: I think this is actually reproducing behaviour in the original RoBERTa, and isn't a bug after all. It's probably unnecessary when copied to all the other models out there, but it's not the end of the world as long as everyone keeps their padding_idx low.<|||||>@Rocketknight1 I ran through the same issue while training `XLMRobertaForMaskedLM`.
Here are the issue details (a short sketch of the position-id computation follows the list):
1. The default padding token id is `1`
2. `create_position_ids_from_input_ids` will set the position id of padding tokens to `1`, and the actual sequence starts at `2`
3. For input sequences truncated to the maximum length (defaults to `512`), the position id of the last token will be `513`
4. While creating the positional embeddings, a token with position id `513` (should be `512`) will throw an indexing exception
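For illustration, the helper's position-id computation is roughly equivalent to the following (paraphrased sketch, not the exact library source):
```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx):
    # Non-padding positions get a cumulative count starting at 1; padding stays at 0...
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = torch.cumsum(mask, dim=1) * mask
    # ...then everything is shifted by padding_idx, so real tokens start at padding_idx + 1.
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[5, 6, 7, 1, 1]])  # 1 is the padding id
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 1, 1]]) -> a 512-token sequence ends at position id 513
```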
I believe this needs to be addressed either by:
- Updating XLMPreTrainedTokenizer to set the padding token's id to `0` instead of `1`
- Changing the `create_position_ids_from_input_ids` implementation
<|||||>@YazanShannak Thank you for the report - this is similar to what I observed as well. Unfortunately, I don't think we can change the padding token ID for RoBERTa and RoBERTa-based models without breaking compatibility with the original implementation.
My suspicion is that the code for `create_position_ids_from_input_ids` is actually correct for RoBERTa, but was copied to other models and causes issues there. Can you check if you get the same error when using RoBERTa instead of XLMRoBERTa? |
transformers | 19,264 | closed | [WIP] Fix ImageSegmentationPipeline | # What does this PR do?
- Fixes the ImageSegmentationPipeline test errors caused by a [recent PR](https://github.com/huggingface/transformers/pull/19205) that adds new post-processing methods to DETR.
The following PRs will need to be merged first to ensure DETR and MaskFormer have consistent `post_process_panoptic_segmentation` and `post_process_semantic_segmentation` outputs:
- https://github.com/huggingface/transformers/pull/19172
- https://github.com/huggingface/transformers/pull/19262
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
This PR fixes a pipeline error caused by recent changes to DETR's image segmentation post-processing methods.
https://github.com/huggingface/transformers/pull/19248
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-30-2022 16:29:54 | 09-30-2022 16:29:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19264). All of your documentation changes will be reflected on that endpoint.<|||||>The issues addressed by this PR are fixed in this [PR](https://github.com/huggingface/transformers/pull/19367). |
transformers | 19,263 | closed | 🚨🚨🚨 TF: Remove `TFWrappedEmbeddings` (breaking: TF embedding initialization updated for encoder-decoder models) | # What does this PR do?
Banishes `TFWrappedEmbeddings` from our codebase, and updates all models that depended on it to use `tf.keras.layers.Embedding` instead. This corresponds to ~1/3 of the models that were using `TFSharedEmbeddings`.
All models, except for T5, had the same changes as TFBart -- the changes are pretty much copy-paste from each other, so you can review the PR rather quickly. Give T5 the appropriate attention, though.
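For a rough idea of what the swap looks like in each model (the vocabulary size, embedding dimension and initializer below are illustrative placeholders, not the values of any specific model):
```python
import tensorflow as tf

# Before: the shared input/output embeddings were wrapped in TFWrappedEmbeddings to
# work around variable-name scoping. After this PR, a plain Keras layer is used instead:
shared = tf.keras.layers.Embedding(
    input_dim=50265,  # vocab_size -- illustrative value
    output_dim=1024,  # d_model -- illustrative value
    embeddings_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02),
    name="model.shared",
)
```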
| 09-30-2022 16:27:52 | 09-30-2022 16:27:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @Rocketknight1 plz check the question in the PR header 🙏 <|||||>> If I'm right about that (and I'm not sure I am!), would it make more sense to either pass names to the layers in __init__, and/or to override model.build() to do the weight creation in there, rather than putting it all in call()?
As far as I've read/tried, sadly it is not possible. The root issue is that the name of a layer is nested in the first layer it is called from by default. If the first call using layer `foo` happens inside a layer `bar`, a variable in `foo` will always be `bar/foo/variable:0`. We need the exact same names to cross-load weights, so we either override this nested-name behavior (`tf.name_scope` with a scope ending in `/` is the only way AFAIK), or we add some per-model cross-loading logic to match the two names 😞
I mean, yeah, we might be able to edit build, but we would have to set a name scope there as well to get the correct variable names.
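For illustration, here is a minimal sketch of the trailing-slash trick (the layer and variable names are made up, not the actual model names):
```python
import tensorflow as tf

embedding = tf.keras.layers.Embedding(10, 4, name="shared")

# A tf.name_scope that ends with "/" is treated as an absolute scope, so even when this
# runs inside another layer's scope the variable lands at the root namespace
# ("shared/embeddings:0") instead of being nested under the calling layer's name.
with tf.name_scope("shared/"):
    embedding.build(None)

print(embedding.embeddings.name)  # expected: shared/embeddings:0 (when run eagerly)
```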
<|||||>Ugh, right. Moving to `build()` might be a little tidier, but it would require a lot of model-specific coding that isn't really worth it for something like that, so I'm fine leaving this where it is!<|||||>@Rocketknight1 The ideal solution would be to be able to have some flag to not inherit the nested name at build time, or to be able to inject the right named scope somehow.
Now that I'm looking again at how weights are initialized, it might work if I override [this line](https://github.com/keras-team/keras/blob/7f3aa8aaa55f087544b500286ae360489d5eba8e/keras/engine/base_layer.py#L474) 🤔 I will give it a go, since it would lead to much clearer code!<|||||>@Rocketknight1 no good -- upon further digging on how the names are created inside a Keras layer, the only useful tool/variables I found was [this internal function](https://github.com/keras-team/keras/blob/7f3aa8aaa55f087544b500286ae360489d5eba8e/keras/engine/base_layer.py#L3661) which **in theory** enables manual scope naming at layer definition time. The catch is that it sets a global variable, and would remove the correct nested names for the other layers (unless we decide to update `name` everywhere).
In practice, I wasn't able to get it to work, but I don't think it's worth diving into.<|||||>@gante Agreed, and thanks for taking the time to check it out!<|||||>(embedding weight initialization updated, à la #19460 )<|||||>@sgugger now with the updated embedding initialization (same as in the PR you just reviewed for TF Bart #19460).
As discussed, this fixes an incorrect behavior but is technically a breaking change. Should I add 🚨 to the PR title? |