repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 17,154 | closed | Extend Transformers Trainer Class to Enable PyTorch SGD/Adagrad Optimizers for Training | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR intends to extend the Transformers Trainer class with two common PyTorch optimizers, [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html?highlight=sgd#torch.optim.SGD) and [Adagrad](https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html), for model training.
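For illustration, selecting one of these optimizers could then be as simple as setting the corresponding training argument. This is a minimal, hedged sketch: it assumes the PR exposes the new optimizers through the existing `optim` training argument with values such as `"sgd"` or `"adagrad"`, which may differ from the final merged API.

```python
from transformers import TrainingArguments

# Hedged sketch: the "sgd"/"adagrad" values for `optim` are assumptions based on
# the PR description, not a confirmed interface.
args = TrainingArguments(output_dir="out", optim="sgd", learning_rate=1e-3)
print(args.optim)
```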
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-10-2022 07:18:17 | 05-10-2022 07:18:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,153 | closed | Extend Transformers Trainer Class to Enable PyTorch Torchscript for Inference | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR intends to extend the Transformers Trainer class with PyTorch [TorchScript](https://pytorch.org/docs/stable/generated/torch.jit.trace.html) (`torch.jit.trace`) to speed up model inference with just-in-time compilation.
Users can simply enable the `--jit_mode` Trainer input argument to get the benefits.
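For context, below is a minimal standalone sketch of the underlying mechanism (`torch.jit.trace`) that this PR wraps inside the Trainer. The model name is illustrative and not taken from this PR.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Standalone illustration of torch.jit.trace, independent of the Trainer flag above.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", torchscript=True
)
model.eval()

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
    logits = traced(inputs["input_ids"], inputs["attention_mask"])[0]
print(logits.shape)
```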
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-10-2022 07:11:19 | 05-10-2022 07:11:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This may be more [optimum](https://github.com/huggingface/optimum/)'s domain than Transformers, what do you think @mfuntowicz @LysandreJik @lewtun ?<|||||>> This may be more [optimum](https://github.com/huggingface/optimum/)'s domain than Transformers, what do you think @mfuntowicz @LysandreJik @lewtun ?
Thanks for the ping and thank you @jianan-gu for opening the PR!
I agree that inference-based improvements are better suited in `optimum`, so in this case we'd need to implement a dedicated `torchscript` subpackage that effectively extends the `Trainer` in this manner.
Having said that, I wonder if it's overkill to create a new package just to support this feature, which is natively included in PyTorch.
@mfuntowicz @michaelbenayoun do you see any other benefits that could be had by having a dedicated subpackage for `torchscript` in `optimum`? <|||||>I'd be happy to work on this PR if you give me a green light that you'll accept this feature.
We can mark is as experimental - should you decide that it'd fit better elsewhere down the road. So it would be easier to experiment and try it on. <|||||>Adding UT Tests in test_trainer.py for jit which covers `evaluate `and `predict`.
Thanks.<|||||>> What is the sounds of a tree falling in the forest when there is nobody to hear it?
>
> All these new features need to be documented on the user-side so that they actually get used. So the same as with IPEX let's add user docs and as before ideally a small benchmark to give users an incentive to try the feature.
>
> updade: I added a doc for inference as we haven't created it and then you can push the usage examples there - like we did with IPEX PR. So please fill out the blank and then we are good to go I think.
>
> and added a few nits in the code.
Though this jit mode works for both CPU and GPU, the current IPEX release covers CPU-side optimizations with jit mode for model inference. Therefore we have only added and updated the inference doc for CPU (perf_infer_cpu.mdx); please review the contents @stas00 , thanks. (For the small benchmark, for now we only have relative performance numbers like those shown in https://github.com/huggingface/transformers/issues/17137, so we will prepare the numbers with --skip_memory_metrics 0 as a follow-up.)
<|||||>@sgugger, would you like to have a quick look at the last changes since your review - mainly the newly added doc. Thank you!
otherwise it's good to be merged. |
transformers | 17,152 | closed | pre-training deberta model on TPUs | ### System Info
I want to pre-train the DeBERTa model (TFDebertaV2ForMaskedLM) using TPUs.
However, I got errors when I set "relative_attention" to True.
Even when I tried to pre-train a small model (number of layers: 1, hidden dimension: 256),
I got the same error messages.
Is there any problem in Hugging Face's DeBERTa model related to TensorFlow or TPUs?
Is there any way to solve this problem?
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The model config I set is as follows:
```
hidden_size: 256
intermediate_size: 1024
num_attention_heads: 8
num_hidden_layers: 2
vocab_size: 64000
attention_probs_dropout_prob: 0.1
hidden_dropout_prob: 0.1
initializer_range: 0.02
max_position_embeddings: 512
type_vocab_size: 0
layer_norm_eps: 1e-7
pos_att_type: ["p2c", "c2p"]
hidden_act: "gelu"
relative_attention: true
max_relative_positions: -1
position_buckets: 256
norm_rel_ebd: "layer_norm"
share_att_key: true
```
Then, I used TFDebertaV2ForMaskedLM class to pre-train the DeBERTa model.
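A minimal sketch of how such a model can be instantiated from this config for pre-training (only a subset of the values above is shown; the actual data pipeline, TPU strategy, and training loop are omitted and not taken from this issue):

```python
from transformers import DebertaV2Config, TFDebertaV2ForMaskedLM

# Sketch only: the config values mirror the ones listed above.
config = DebertaV2Config(
    hidden_size=256,
    intermediate_size=1024,
    num_attention_heads=8,
    num_hidden_layers=2,
    vocab_size=64000,
    relative_attention=True,
    pos_att_type=["p2c", "c2p"],
    position_buckets=256,
    norm_rel_ebd="layer_norm",
    share_att_key=True,
)
model = TFDebertaV2ForMaskedLM(config)
```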
### Expected behavior
The error messages I've encountered are as below:
```
2022-05-10 07:26:13.050675: I tensorflow/core/tpu/kernels/tpu_compile_op_common.cc:176] Compilation of 11977631266728880069 with session name took 556.019199ms and failed
2022-05-10 07:26:13.050718: E tensorflow/core/tpu/kernels/tpu_compilation_cache_external.cc:113] Input 1 to node `while/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/cond_1/transpose` with op Transpose must be a compile-time constant.
XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator.
[[{{node while/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/cond_1/transpose}}]]
[[while/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/cond_1]]
[[while]]
2022-05-10 07:26:13.050752: F tensorflow/core/tpu/kernels/tpu_program_group.cc:86] Check failed: xla_tpu_programs.size() > 0 (0 vs. 0)
https://symbolize.stripped_domain/r/?trace=7fc408fc003b,7fc408fc00bf,7fc35113be1b,7fc356b4faf9,7fc356bbc8c1,7fc356bbd4d9,7fc356bb3b3c,7fc356bb5b4c,7fc34d0614af,7fc34cff51d3,7fc34cfe7631,7fc356b3a474,7fc356b376d6,7fc34d7217ff,7fc408f62608&map=8803c190592e98180ccd06f60438606e92563a77:7fc34e143000-7fc362c20e00,d8d5a9b6e95c8bb11b4f7a7a6ac042e0e1f0a2ce:7fc34c504000-7fc34e089760
*** SIGABRT received by PID 337336 (TID 338092) on cpu 1 from PID 337336; stack trace: ***
PC: @ 0x7fc408fc003b (unknown) raise
@ 0x7fc34b99862d 976 (unknown)
@ 0x7fc408fc00c0 3904 (unknown)
@ 0x7fc35113be1c 720 tensorflow::tpu::TpuProgramGroup::Initialize()
@ 0x7fc356b4fafa 1264 tensorflow::tpu::TpuCompilationCacheExternal::InitializeEntry()
@ 0x7fc356bbc8c2 1008 tensorflow::tpu::TpuCompilationCacheInterface::CompileIfKeyAbsentHelper()
@ 0x7fc356bbd4da 128 tensorflow::tpu::TpuCompilationCacheInterface::CompileIfKeyAbsent()
@ 0x7fc356bb3b3d 1056 tensorflow::tpu::TpuCompileOpKernelCommon::ComputeInternal()
@ 0x7fc356bb5b4d 544 tensorflow::tpu::TpuCompileOpKernelCommon::Compute()
@ 0x7fc34d0614b0 496 tensorflow::ThreadPoolDevice::Compute()
@ 0x7fc34cff51d4 2368 tensorflow::(anonymous namespace)::ExecutorState<>::Process()
@ 0x7fc34cfe7632 48 std::_Function_handler<>::_M_invoke()
@ 0x7fc356b3a475 160 Eigen::ThreadPoolTempl<>::WorkerLoop()
@ 0x7fc356b376d7 64 std::_Function_handler<>::_M_invoke()
@ 0x7fc34d721800 80 tensorflow::(anonymous namespace)::PThread::ThreadFn()
@ 0x7fc408f62609 (unknown) start_thread
https://symbolize.stripped_domain/r/?trace=7fc408fc003b,7fc34b99862c,7fc408fc00bf,7fc35113be1b,7fc356b4faf9,7fc356bbc8c1,7fc356bbd4d9,7fc356bb3b3c,7fc356bb5b4c,7fc34d0614af,7fc34cff51d3,7fc34cfe7631,7fc356b3a474,7fc356b376d6,7fc34d7217ff,7fc408f62608&map=8803c190592e98180ccd06f60438606e92563a77:7fc34e143000-7fc362c20e00,d8d5a9b6e95c8bb11b4f7a7a6ac042e0e1f0a2ce:7fc34c504000-7fc34e089760,b6a2e09e144eca321e149e2a834bcad7:7fc33ded8000-7fc34bcea2b0
E0510 07:26:13.215024 338092 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0510 07:26:13.215058 338092 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0510 07:26:13.215066 338092 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0510 07:26:13.215075 338092 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0510 07:26:13.215084 338092 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0510 07:26:13.215094 338092 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0510 07:26:13.215100 338092 coredump_hook.cc:525] RAW: Discarding core.
E0510 07:26:13.863904 338092 process_state.cc:779] RAW: Raising signal 6 with default behavior
```
| 05-10-2022 06:49:03 | 05-10-2022 06:49:03 | I found the error raised at the disentangled_att_bias function,
especially when trying to calculate the disentangled attention bias score.
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/deberta/modeling_tf_deberta.py#L683<|||||>The suspect is the take_along_axis function
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L525<|||||>The variable "permutation" used in take_along_axis, e.g. tf.transpose(x, perm=permutation), is not compile-time constant. Therefore, I think the XLA compiler raises an error.
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L532<|||||>Hey @Kyeongpil do you have any updates on this issue?<|||||>@WissamAntoun To train DeBERTa using TPUs, tf.rank(x) in take_along_axis should be replaced with constants, that is 3.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
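For reference, here is a minimal sketch of the kind of workaround discussed in this thread: implementing `take_along_axis` for a fixed rank-3 input so that no data-dependent `tf.rank()`/transpose permutation is needed. This is an illustration only, not the exact patch that later landed in the library.

```python
import tensorflow as tf

def take_along_axis_last(x, indices):
    # Sketch of an XLA/TPU-friendly take_along_axis over the last axis, written
    # for rank-3 inputs [batch, query_len, key_len], avoiding the data-dependent
    # permutation built from tf.rank(x) that triggered the compile error above.
    flat_x = tf.reshape(x, [-1, tf.shape(x)[-1]])
    flat_indices = tf.reshape(indices, [-1, tf.shape(indices)[-1]])
    gathered = tf.gather(flat_x, flat_indices, batch_dims=1)
    return tf.reshape(gathered, tf.shape(indices))

# Tiny smoke test
x = tf.random.normal((2, 4, 6))
idx = tf.random.uniform((2, 4, 5), maxval=6, dtype=tf.int32)
print(take_along_axis_last(x, idx).shape)  # (2, 4, 5)
```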
transformers | 17,151 | closed | [trainer/deepspeed] load_best_model (reimplement re-init) | This PR fixes https://github.com/huggingface/transformers/issues/17114
The `deepspeed_reinit` hack proved to not always work, as some stored args appear to be either stale or wrong (e.g. the optimizer can be a deepspeed outer optimizer which shouldn't be the case), so trying to just do a full init from scratch instead.
And then there was an issue on the deepspeed side with model getting deepspeed hooks added multiple time which was breaking everything. Fixed in https://github.com/microsoft/DeepSpeed/pull/1947.
I spent many hours trying to reproduce the problem in the usual way via example scripts to make a test, but alas, it just won't fail in the right places. So I ended up re-implementing `test_load_best_model` using code derived from @base-y's repro example, so I really appreciate having their script.
### Blocking events
- [x] merge https://github.com/microsoft/DeepSpeed/pull/1947
- [x] update the dependency table to the new ds release after the merge 0.6.5
Fixes: https://github.com/huggingface/transformers/issues/17114 | 05-10-2022 05:13:34 | 05-10-2022 05:13:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>when will this pr be merged ?<|||||>We needed to wait for a new deepspeed release, which I see has happened already, so yes, we can merge this shortly. |
transformers | 17,150 | closed | [trainer] sharded _load_best_model | Looks like a copy-and-paste issue. This code path is probably untested.
@sgugger | 05-10-2022 03:11:22 | 05-10-2022 03:11:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for explaining why the testing of this path is complicated, Sylvain.
I think I can make it partially tested by using zero3 w/o `"stage3_gather_16bit_weights_on_model_save"` which would make it fall through and at least test that condition. I will be adding these tests here https://github.com/huggingface/transformers/pull/17151
|
transformers | 17,149 | closed | Add datasets.Dataset to trainer type hints | # What does this PR do?
This fixes a minor type hinting issue. `Trainer.__init__` currently assumes that `train_dataset` (if not `None`) is of type `torch.utils.data.Dataset` (or `Dataset` for short), but that is a generic type (just as e.g. `List` is generic and needs to be "filled" as in `List[str]`). This method also works perfectly well with `datasets.Dataset`, although this class _cannot_ be expressed with a generic type. There are two options:
1. Fix this upstream in Datasets by turning `datasets.Dataset` into a generic type or
2. Replace `Optional[Dataset]` with `Union[Dataset, datasets.Dataset, None]` in `Trainer` and `Seq2SeqTrainer`
This PR implements option 2.
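Concretely, the annotation change looks roughly like the following sketch (not the full `Trainer.__init__` signature, which takes many more arguments and guards the `datasets` import):

```python
from typing import Union

import datasets
from torch.utils.data import Dataset


class TrainerSketch:
    # Sketch of the annotation change only.
    def __init__(self, train_dataset: Union[Dataset, "datasets.Dataset", None] = None):
        self.train_dataset = train_dataset
```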
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 05-09-2022 23:02:04 | 05-09-2022 23:02:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17149). All of your documentation changes will be reflected on that endpoint.<|||||>I don't think this fix is right as a `datasets.Dataset` should be recognized by the `Dataset` type (it implements `__getitem__` and `__len__`). <|||||>The problem there is that https://github.com/pytorch/pytorch/blob/da3ebaebee61c9594a0fb5d74ece6a1da2eff33a/torch/utils/data/dataset.py#L34 declares `Dataset` as a generic type - and some type checkers complain that `datasets.Dataset` violates the protocol by virtue of not being a generic type. I can see three options for resolving this:
1) Turn `datasets.Dataset` into a generic type
2) Implement another protocol that works just like the linked `Dataset` generic type except it _isn’t_ generic, and make that the type for `train_dataset`; `Dataset` would satisfy this protocol too, I believe
3) Just allow `datasets.Dataset` alongside `Dataset` (i.e. this PR)
3 is the easiest, 2 feels hacky to me, 1 might be workable but might be a breaking change for `datasets.Dataset` typing? Not sure tbh.<|||||>We also don't care at all, in honesty. We don't use type hints to satisfy the whims of type-checkers, because this is Python, not a typed language, and there is only so much a type hint can cover. We use type hints so that the documentation is clear on what is expected when the user checks it. In this instance `Dataset` is perfect for this purpose, so we won't add anything else just for the sake of type-checkers.<|||||>I mean, up to you - suppressing type hinter complaints for those two lines is a trivial matter even now. Solving the issue makes VSCode very slightly more usable for dev, and marginally less confusing for its users, but if you think this change would leave the docs (which are undoubtedly used by many more users) worse off, feel free to close this PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,148 | closed | Add MLFLOW_FLATTEN_PARAMS support in MLflowCallback | # What does this PR do?
This PR adds support for the environment variable `MLFLOW_FLATTEN_PARAMS`.
When a first-level parameter holds a dictionary value, it is logged to MLflow as a string. Currently, that parameter is skipped when the string exceeds 250 characters. This is especially true for task_specific_params, which can end up being a long string.
The current warning message looks like this:
> Trainer is attempting to log a value of "{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}" for key "task_specific_params" as a parameter. MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute.
With this PR, the warning message is updated to:
> Trainer is attempting to log a value of "{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}" for key "task_specific_params" as a parameter. MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute. You can use `MLFLOW_FLATTEN_PARAMS` environment variable to flatten the parameters and avoid this message.
When a user sets the env variable with `os.environ['MLFLOW_FLATTEN_PARAMS'] = "True"`, the parameters will be properly sent to MLflow and logged as follows:
```
task_specific_params.summarization.length_penalty 1.0
task_specific_params.summarization.max_length 128
task_specific_params.summarization.min_length 12
task_specific_params.summarization.num_beams 4
task_specific_params.summarization_cnn.length_penalty 2.0
task_specific_params.summarization_cnn.max_length 142
task_specific_params.summarization_cnn.min_length 56
task_specific_params.summarization_cnn.num_beams 4
task_specific_params.summarization_xsum.length_penalty 1.0
task_specific_params.summarization_xsum.max_length 62
task_specific_params.summarization_xsum.min_length 11
task_specific_params.summarization_xsum.num_beams 6
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger I added a flatten_dict function in .utils/py_utils.py as it didn't seem right to add this in integrations.py.
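For reference, a minimal sketch of what such a helper can look like (an illustration only, not necessarily the exact implementation added in `py_utils.py`):

```python
def flatten_dict(d, parent_key="", delimiter="."):
    # Sketch: recursively flattens nested dicts into dotted keys,
    # e.g. {"a": {"b": 1}} -> {"a.b": 1}. The helper in the PR may differ.
    items = {}
    for key, value in d.items():
        new_key = f"{parent_key}{delimiter}{key}" if parent_key else str(key)
        if isinstance(value, dict):
            items.update(flatten_dict(value, new_key, delimiter))
        else:
            items[new_key] = value
    return items


print(flatten_dict({"summarization": {"max_length": 128, "num_beams": 4}}))
# {'summarization.max_length': 128, 'summarization.num_beams': 4}
```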
| 05-09-2022 22:47:50 | 05-09-2022 22:47:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger the script `utils/check_inits.py` used for CI has a bug. Line 252:
```
submodule = short_path.replace(os.path.sep, ".").replace(".py", "")
```
This would lead to files starting by "py" to be improperly named. For example, `utils/py_utils.py` is interpreted as `utils_utils`.
A better replace() usage may be:
```
submodule = short_path.replace(".py", "").replace(os.path.sep, ".")
```<|||||>Looks like you messed the rebase a little bit and there are now commits in this PR that shouldn't be here.<|||||>Yep. I messed up that rebase...<|||||>Thanks again! |
transformers | 17,147 | closed | Missing sentencepiece model for remBert | The code from https://huggingface.co/docs/transformers/main/en/model_doc/rembert#overview yields this error:
```
transformers.utils.hub.RepositoryNotFoundError: 404 Client Error: Repository Not Found for url: https://huggingface.co/rembert/resolve/main/sentencepiece.model
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import RemBertTokenizer, RemBertModel
import torch
tokenizer = RemBertTokenizer.from_pretrained("rembert")
model = RemBertModel.from_pretrained("rembert")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
### Expected behavior
```shell
load the model from the hub
```
| 05-09-2022 21:34:35 | 05-09-2022 21:34:35 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Some links in the rembert documentation point to high-level `rembert` on the model hub, whereas the correct identifier is `google/rembert`.
I can update the documentation :)<|||||>Issue was fixed with #17641 , so I'm closing it :) |
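For anyone landing here, a quick sketch mirroring the reproduction above but with the full hub identifier mentioned in the comments:

```python
import torch
from transformers import RemBertModel, RemBertTokenizer

# Same as the reproduction above, but using the full identifier "google/rembert".
tokenizer = RemBertTokenizer.from_pretrained("google/rembert")
model = RemBertModel.from_pretrained("google/rembert")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```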
transformers | 17,146 | closed | Add GreaseLM model | # What does this PR do?
Adds GreaseLM model. For more details, see https://github.com/snap-stanford/GreaseLM and https://arxiv.org/abs/2201.08860
For a quick check, see this [colab](https://colab.research.google.com/drive/19ePb4fZysn45wkhzAzOlcDSEWAGnSgto)
CommonSenseQA [validation](https://colab.research.google.com/drive/17Qy6X_rY3qRzetFS1znGmdS1JyP25jOO) set results.
OpenBookQA [validation](https://colab.research.google.com/drive/1RAkGV-KBVq1FOOpLt_MnZ_z9aZlNPVW7) set results.
OpenBookQA [test](https://colab.research.google.com/drive/18SXxMPMvr98JwsqTVKjNe-Dif3XEXuqs) set results.
Results for both OpenBookQA and CommonSenseQA match the reported results from the original repo.
## To do:
- [x] Make sure all documentation and tests are up to HF standards
- [x] Needs careful review due to ~~two~~ three soft dependencies: spacy, torch_scatter and torch_sparse
@LysandreJik @NielsRogge | 05-09-2022 20:13:42 | 05-09-2022 20:13:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17146). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,145 | closed | Fix install directions for all docs using no_trainer scripts | # What does this PR do?
This PR fixes the directions in the documentation for installing Accelerate, as pointed out here: https://github.com/huggingface/accelerate/issues/349
Currently they say to install just with pip, but since these scripts track the git version of Accelerate, users should install from git instead.
The general Accelerate docs can remain the same however.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 05-09-2022 19:15:00 | 05-09-2022 19:15:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,144 | closed | 'BaseModelOutput' object has no attribute '_OrderedDict__map' when using Wav2Vec 2.0 | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27
- Python version: 3.8.8
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten, @anton-l
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
librispeech_samples_ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# load audio
audio_input, sample_rate = sf.read(librispeech_samples_ds[0]["file"])
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0])
```
which is taken from [the official documentation](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#use-wav2vec-20-with-transformers)
### Expected behavior
```shell
Not throw an error. However, I get the error
AttributeError: 'BaseModelOutput' object has no attribute '_OrderedDict__map'
```
| 05-09-2022 15:26:00 | 05-09-2022 15:26:00 | I was able to reproduce this pythonically on python 3.8.8
```python
from collections import OrderedDict
from dataclasses import dataclass
class A(OrderedDict):
def __post_init__(self):
self["a"] = 1
@dataclass
class B(A):
some_val = None
b = B()
```
(A would be `ModelOutput` in this case, and B would be `BaseModelOutput`)
which throws the same error
```
AttributeError: 'B' object has no attribute '_OrderedDict__map'
```<|||||>I updated the python version to `3.8.13` and it worked
```
- `transformers` version: 4.8.2
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27
- Python version: 3.8.13
- PyTorch version (GPU?): 1.10.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
``` |
transformers | 17,143 | closed | LogSumExp trick `question_answering` pipeline. | # What does this PR do?
Fix #17140
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 05-09-2022 14:53:27 | 05-09-2022 14:53:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,142 | closed | bert: add conversion script for BERT Token Dropping TF2 checkpoints | Hi,
this PR adds a conversion script for BERT models, that were trained with the recently introduced "Token Dropping for Efficient BERT Pretraining" approach, introduced in [this paper](https://arxiv.org/abs/2203.13240):

Models are trained with the TensorFlow 2 implementation from the TensorFlow models repository, which can be found [here](https://github.com/tensorflow/models/tree/master/official/projects/token_dropping). Note: The model architecture only needs changes during pre-training, but the final pre-trained model is compatible with the original BERT architecture!
Unfortunately, the authors do not plan to release pre-trained checkpoints.
But I have pre-trained several models with their official implementation and I've also released the checkpoints and the PyTorch converted model weights on the Hugging Face Model Hub:
https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased
This is a multi-lingual model, that was trained on ~130GB of historic and noisy OCR'ed texts with a 64k vocab.
## Conversion Script Usage
In order to test the conversion script, the following commands can be used to test the conversion:
```bash
wget "https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased/resolve/main/ckpt-1000000.data-00000-of-00001"
wget "https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased/resolve/main/ckpt-1000000.index"
wget "https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased/resolve/main/config.json"
python3 convert_bert_token_dropping_original_tf2_checkpoint_to_pytorch.py --tf_checkpoint_path ckpt-1000000 --bert_config_file config.json --pytorch_dump_path ./exported
```
This outputs:
```bash
All model checkpoint weights were used when initializing BertForMaskedLM.
All the weights of BertForMaskedLM were initialized from the model checkpoint at ./exported.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.
```
## Masked LM Predictions
The masked LM predictions are pretty good and are comparable with the multilingual model that was trained with the official BERT implementation. Just use the inference widget on the Hugging Face Model Hub.
In this example, the sentence `and I cannot conceive the reafon why [MASK] hath` is used to test the model. For a good comparison, the [32k hmBERT](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) is used that was trained with the official BERT implementation on the same corpus:
```json
[
{
"score": 0.3564337193965912,
"token": 1349,
"token_str": "she",
"sequence": "and I cannot conceive the reafon why she hath"
},
{
"score": 0.21097686886787415,
"token": 903,
"token_str": "it",
"sequence": "and I cannot conceive the reafon why it hath"
},
{
"score": 0.10645408183336258,
"token": 796,
"token_str": "he",
"sequence": "and I cannot conceive the reafon why he hath"
},
{
"score": 0.0170532688498497,
"token": 1049,
"token_str": "we",
"sequence": "and I cannot conceive the reafon why we hath"
},
{
"score": 0.01265314407646656,
"token": 45,
"token_str": "I",
"sequence": "and I cannot conceive the reafon why I hath"
}
]
```
With the 64k hmBERT model that was trained with the Token Dropping approach, the output is:
```json
[
{
"score": 0.5147836804389954,
"token": 796,
"token_str": "he",
"sequence": "and I cannot conceive the reafon why he hath"
},
{
"score": 0.1566970944404602,
"token": 1349,
"token_str": "she",
"sequence": "and I cannot conceive the reafon why she hath"
},
{
"score": 0.08448878675699234,
"token": 903,
"token_str": "it",
"sequence": "and I cannot conceive the reafon why it hath"
},
{
"score": 0.020168323069810867,
"token": 45,
"token_str": "I",
"sequence": "and I cannot conceive the reafon why I hath"
},
{
"score": 0.01774059422314167,
"token": 3560,
"token_str": "God",
"sequence": "and I cannot conceive the reafon why God hath"
}
]
```
## Downstream Task Performance
We have also used this model when participating in the HIPE-2022 Shared Task and the BERT model pre-trained with Token Dropping approach achieved really good results on the NER downstream task, see [results here](https://github.com/dbmdz/clef-hipe/tree/main/experiments/clef-hipe-2022#final-models):
| Backbone LM | Configuration | F1-Score (All, Development) | F1-Score (German, Development) | F1-Score (English, Development) | F1-Score (French, Development) | Model Hub Link
| ---------------------------- | ------------------- | --------------------------- | ------------------------------ | ------------------------------- | ------------------------------ | -----------------------------------------------------------------
| hmBERT (32k) | `bs4-e10-lr5e-05#4` | 87.64 | 89.26 | 88.78 | 84.80 | [here](https://huggingface.co/dbmdz/flair-hipe-2022-ajmc-all)
| hmBERT (64k, token dropping) | `bs8-e10-lr3e-05#3` | 87.02 | 88.89 | 86.63 | 85.50 | [here](https://huggingface.co/dbmdz/flair-hipe-2022-ajmc-all-64k) | 05-09-2022 14:40:54 | 05-09-2022 14:40:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @LysandreJik
thanks for the approval! I have also added a model card and the model is also mentioned in our new ["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575) - which is used as the backbone language model for our winning NER models (English and French) :)
<|||||>/cc @sgugger :hugs: <|||||>Sorry this slipped through the cracks! Thanks a lot for your contributrion! |
transformers | 17,141 | closed | Add trajectory transformer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Add Trajectory Transformer to transformers.
This is a transformer model used for offline deep reinforcement learning, taking observations, actions and rewards as one big sequence.
The model is adapted from the code [here](https://github.com/jannerm/trajectory-transformer).
The caching mechanism as well as attentions outputs and hidden states outputs are additions not present in the original code.
The adapted forward pass produces the same output tensors as the forward pass in the original model.
Example usage for random environment data:
```python
from transformers import TrajectoryTransformerConfig, TrajectoryTransformerModel
import torch
import numpy as np
config = TrajectoryTransformerConfig.from_pretrained("CarlCochet/trajectory-transformer-halfcheetah-medium-v2")
model = TrajectoryTransformerModel.from_pretrained("CarlCochet/trajectory-transformer-halfcheetah-medium-v2", config=config)
batch_size = 1
# sequence made of [ actions, observations, reward ]
seq_length = config.action_dim + config.observation_dim + 1
trajectories = torch.LongTensor([np.random.permutation(seq_length) for _ in range(batch_size)])
outputs = model(trajectories)
prediction_logits = outputs.logits
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## TODO
- Pass all relevant tests ; need to disable some irrelevant tests as well
- Make a demo notebook
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@LysandreJik @thomwolf @edbeeching
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-09-2022 12:47:26 | 05-09-2022 12:47:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @sgugger for a second review! |
transformers | 17,140 | closed | Overflow when normalizing start and end logits in QA pipeline | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
After successfully training a question answering model using the [run_qa.py](https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/examples/pytorch/question-answering/run_qa.py) script, I tried to reuse it for inference in a question-answering `pipeline`, but the model predicts `no-answer` for all the (question, context) pairs, even for those included in my evaluation file which was used when running the `run_qa.py` script. The evaluation part of `run_qa.py` was able to predict an answer for those examples.
The issue is related to the normalization of start and end logits done in the QA pipeline https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/src/transformers/pipelines/question_answering.py#L401
`start_` and `end_` are provided `np.float32` arrays and all the logits calculated by the model were <-100 which leads to overflow when applying `exp` then `log`. The normalization differs from the one applied in `run_qa.py` file which explains this overflow error happening only while doing inference.
https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/examples/flax/question-answering/utils_qa.py#L191
The trace showing this behavior is provided below:
```
> /opt/conda/lib/python3.7/site-packages/transformers/pipelines/question_answering.py(401)postprocess()
400 # Normalize logits and spans to retrieve the answer
--> 401 start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
402 end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
ipdb> start_.dtype
dtype('float32')
ipdb> start_.shape
(1, 512)
ipdb> start_[0, :20]
array([-10000. , -10000. , -10000. , -10000. ,
-10000. , -10000. , -10000. , -10000. ,
-10000. , -10000. , -10000. , -10000. ,
-10000. , -10000. , -10000. , -10000. ,
-10000. , -155.77672, -155.8097 , -155.81848],
dtype=float32)
ipdb> n #running operation line 401
start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
> /opt/conda/lib/python3.7/site-packages/transformers/pipelines/question_answering.py(402)postprocess()
401 start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
--> 402 end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
403
ipdb> start_[0, :20]
array([inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf, inf
```
### Expected behavior
```shell
The LogSumExp trick should be applied similarly to https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/examples/flax/question-answering/utils_qa.py#L191 and https://datumorphism.leima.is/cards/machine-learning/neural-networks/log-sum-exp-trick/ to prevent this overflow.
```
| 05-09-2022 11:32:58 | 05-09-2022 11:32:58 | Yes, the logsumexp trick should be used every time we compute a `softmax` in any pipeline.<|||||>Thanks for finding this !
This is old code that was carried over many times, proposed fix here: https://github.com/huggingface/transformers/pull/17143
@fgbelidji do you have an example where it triggers ? (Would be nice to have a test that triggers the error)<|||||>@Narsil thanks for the fix! The example code below will trigger the error:
```
from transformers import SquadExample, pipeline
context = """
Beyoncé Giselle Knowles-Carter (/biːˈjɒnseɪ/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy".
"""
example = SquadExample(context_text=context,
question_text= "When did Beyonce start becoming popular?",
qas_id='56be85543aeaaa14008c9063',
title='Beyoncé',
answer_text="in the late 1990s",
start_position_character=269)
model_name = "deepset/roberta-base-squad2"
model_pipeline = pipeline(model=model_name,
tokenizer=model_name,
revision="v1.0",
task="question-answering",
handle_impossible_answer=True,
top_k=1)
example_preprocessed = next(model_pipeline.preprocess(example))
model_outputs = model_pipeline._forward(example_preprocessed)
model_outputs["start"] = model_outputs["start"].detach()
model_outputs["end"] = model_outputs["end"].detach()
output_without_error = model_pipeline.postprocess([model_outputs])
#Ensuring overflow
model_outputs["start"] = model_outputs["start"] * 100
model_outputs["end"] = model_outputs["end"]* 100
output_with_error = model_pipeline.postprocess([model_outputs])
print(f"No overflow: {output_without_error}")
print(f"Overflow: {output_with_error}")
```
Output:
```
No overflow: {'score': 0.6222478747367859, 'start': 277, 'end': 287, 'answer': 'late 1990s'}
Overflow: {'score': 0.0, 'start': 0, 'end': 0, 'answer': ''}
``` |
transformers | 17,139 | closed | Match the indexing output of `TokenClassificationPipeline` with labels (e.g. `ner_tags`) | ### Feature request
As described below, there may well be cases where the token indexes given as output of `TokenClassificationPipeline` cannot be mapped back to the original word indexes of an input dataset whose words are already split.
It would come in handy to be able to use the output indexes from the pipeline to retrieve e.g. the correct labels from the original dataset.
### Motivation
Currently, the preprocess function for `TokenClassificationPipeline` looks as follow:
https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/src/transformers/pipelines/token_classification.py#L191-L205
An issue with datasets such as conll2003 where sentences are split by words is that to use the pipeline, we need to pass a single string:
```python
from transformers import TokenClassificationPipeline
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_name = "elastic/distilbert-base-uncased-finetuned-conll03-english"
model = AutoModelForTokenClassification.from_pretrained(model_name)
dataset = load_dataset("conll2003")
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = TokenClassificationPipeline(model=model, tokenizer=tokenizer, aggregation_strategy="none", batch_size=2)
my_data = dataset["train"][4]
inputs = my_data["tokens"] # this is a list of words
inputs = " ".join(inputs) # we need to pass a single string to the pipeline as input
res = pipe(inputs)
print(res)
```
This is actually fine and everything works well (at the end of the day probabilities match), however, when using the `preprocess()` method as a standalone, there may be a slight issue:
```python
tokenized_inputs = pipe.preprocess(inputs)
print(inputs)
print(tokenized_inputs.word_ids(batch_index=0))
res = tokenizer.decode(tokenized_inputs["input_ids"][0])
print(res)
res = tokenizer.convert_ids_to_tokens(tokenized_inputs["input_ids"][0])
print(res)
"""
prints:
Germany 's representative to the European Union 's veterinary committee Werner Zwingmann said on Wednesday consumers should buy sheepmeat from countries other than Britain until the scientific advice was clearer .
[None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 13, 14, 15, 16, 17, 18, 19, 20, 20, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, None]
[CLS] germany's representative to the european union's veterinary committee werner zwingmann said on wednesday consumers should buy sheepmeat from countries other than britain until the scientific advice was clearer. [SEP]
['[CLS]', 'germany', "'", 's', 'representative', 'to', 'the', 'european', 'union', "'", 's', 'veterinary', 'committee', 'werner', 'z', '##wing', '##mann', 'said', 'on', 'wednesday', 'consumers', 'should', 'buy', 'sheep', '##me', '##at', 'from', 'countries', 'other', 'than', 'britain', 'until', 'the', 'scientific', 'advice', 'was', 'clearer', '.', '[SEP]']
"""
```
We see that `'` and `s` at the beginning of the sentence are mapped to different words (respectively 1 and 2). However, using
```python
# here we pass as input a list of words
tokenized_inputs = tokenizer(dataset["train"][4]["tokens"],
truncation=True, is_split_into_words=True)
```
we have that `'` and `s` are mapped to the same word (number 1).
This latter behavior is critical if we want to make use of the `ner_tags` with the pipeline (e.g. for evaluation). Indeed, by assigning to different words tokens that were originally part of the same word (before joining with `" ".join()`), we break the indexing given in the pipeline output `index`, in the sense that those token indexes can no longer be mapped to the original `ner_tags` (which uses word indexes).
This mapping normally could be done using the `tokenized_inputs.word_ids`.
**Motivation:** Use pipelines for the evaluation in Optimum.
**Alternative approach:**
We could use `tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Sequence([WhitespaceSplit()])` to explicity avoid splitting on punctuation, however I guess it is bad practice to overwrite a pre-tokenizer (in my case [BertPreTokenizer](https://github.com/huggingface/tokenizers/blob/28cd3dce2a75d106572392194ff2564574c33235/tokenizers/src/pre_tokenizers/bert.rs#L13))?
### Your contribution
Not sure how to go about this. We could support `is_split_into_words` argument for `preprocess()`, but then this method would accept list of words, which is not the case right now (only full sentence as a single string). | 05-09-2022 10:01:32 | 05-09-2022 10:01:32 | Maybe of interest to @Narsil <|||||>Hi @fxmarty
> however I guess it is bad practice to overwrite a pre-tokenizer (in my case [BertPreTokenizer](https://github.com/huggingface/tokenizers/blob/28cd3dce2a75d106572392194ff2564574c33235/tokenizers/src/pre_tokenizers/bert.rs#L13))?
Yes, this would set up for shooting yourself in the foot later :)
> Not sure how to go about this. We could support is_split_into_words argument for preprocess(), but then this method would accept list of words, which is not the case right now (only full sentence as a single string).
I am not sure this would even work, since the tokenizer itself would end up splitting things anyway. (It might but I am not sure, and using those usually come with their own caveat, so I usually try to avoid them)
> inputs = " ".join(inputs) # we need to pass a single string to the pipeline as input
Since you already have your tokens, can't you use them directly for the token numbering (I refrain from using `word_ids` on purpose ;) ).
And then maybe do something like using `offset_mapping` to map the output of the tokenizer and the pretokenized thing (handling conflicts which might occur), usually they are the best way to map what's incoming to the `input_ids`.
```python
def get_conll_word_ids(input_ids, offset_mappings, conll_tokens):
    # Map each tokenizer token back to the index of the conll word it came from,
    # assuming the text was built with " ".join(conll_tokens).
    token_index = 0
    word_start = 0  # character offset where the current conll word starts
    word_ids = []
    for token_id, (start, stop) in zip(input_ids, offset_mappings):
        while start >= word_start + len(conll_tokens[token_index]):
            # This token starts past the current conll word: move to the next one
            word_start += len(conll_tokens[token_index]) + 1  # +1 is the inlined space
            token_index += 1
        word_ids.append(token_index)
    return word_ids
```
This doesn't handle conflict which might exist if `start` and `stop` actually overlap 2 or more different conll tokens.
This code is a bit specific for conll kind of task, however I don't think there's a nice workaround anyway since what the `tokenizer` sees may be entirely different from what the conll tokens look like and using `pretokenized` sentences might actually induce biases into what the model sees leading to poorer than expected performance (depends on how exactly the training loop was done and if that bias was also put into that model, but I am guessing usually it's not done that way)<|||||>Thanks a lot for your help! In the meantime, I implemented my own trick to avoid modifying the pre-tokenizer, basically tokenizing word by word first to build a good `token_to_word_id`:
```python
inputs = " ".join(data["tokens"]) # where `data` is a sample from the dataset
res = pipeline(inputs)
# BatchEncoding.word_ids may be wrong as we joined words with " ", so let's populate it ourselves
token_to_word_id = []
for j, word in enumerate(data["tokens"]):
preprocessed_inputs = pipeline.preprocess(word)
    n_tokens = len([k for k in preprocessed_inputs.word_ids(0) if k is not None]) # exclude None
token_to_word_id.extend([j] * n_tokens)
# the pipeline may give as output labeled tokens that are part of the same word, keep track
# of the indexing to match the true labels on words
index_tokens_word_start = []
for j, word_index in enumerate(token_to_word_id):
if j == 0:
index_tokens_word_start.append(j)
elif word_index != token_to_word_id[j - 1]:
index_tokens_word_start.append(j)
# keep only predictions that correspond to the beginning of a word
preds = [res[index]["entity"] for index in index_tokens_word_start]
```
<|||||>Hi @fxmarty,
Great that you found a workaround, which probably works OK for your combination of model+dataset+metric. I would like to point out, however, that it's probably not extremely generic.
Take the following with a big grain of salt, since it is a personal opinion:
> keep only predictions that correspond to the beginning of a word
That works for things like conll maybe, but what if the model actually splits a conll word into 2 tokens and gives them 2 different token classes, is it really OK? I would argue that it's not, even if I know some researchers do that to boost results. It's an issue since why would the first token yield more information than the second? What if the model is a byte-level one like ByT5 and every letter has an entity, is the first letter really the best choice? Other aggregation strategies exist in the pipeline, which might yield better results for some models, but I think all are slightly lying. It makes sense for the pipeline since, as a user in an app or webapp, displaying parts of words as entities is super odd in space-split languages; however, if I am evaluating two different models, I think that evaluating `[Correct, Incorrect]` the same as `[Incorrect, Correct]` (if the word is split in two tokens) seems important IMO. If some byte-level model comes along that predicts `[Incorrect, Correct, Correct, Correct]`, my take is that it should get a better score than the previous two.
The pipeline does provide `aggregation_strategy="first"/"max"/"avg"`, which comes with caveats (and, for instance, will never work on non-space-separated languages like Chinese, where there is no real word boundary).
This is even more important for multi-word entities in NER, where having the entities for "New York" be "B-LOC", "I-LOC", "I-LOC" is the only way to get a single "New York" entity across words.
Using `offset_mappings` is IMO the only viable way to get a robust alignment of tokens for two different tokenizations.
Going that route, you can directly use the various `aggregation_strategy` (even if they work only on space separated languages).
Ofc many conflicts can (and will) occur, but at least you can define 1 single consistent strategy to deal with those.
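For reference, this is roughly what using an aggregation strategy looks like on the pipeline side (the checkpoint below is only an example):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="first")
print(ner("My name is Wolfgang and I live in Berlin"))
```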
Another suggestion would be to start right off the bat on Chinese or Thai for instance, which will trigger a lot more questions than space-separated languages; the answers found for those languages usually also tend to be correct for simpler languages like English (and more correct than what the naive approach for English would be). And the solutions tend to be simpler too.
Looking at the issue you linked, maybe enabling users to write a custom script (akin to GH actions maybe?) might be the best solution. (We could write a sane default Evaluate that aims to be generic, but users could always entirely define their own.)<|||||>Hello Narsil, thanks a lot for your detailed feedback, this is really appreciated.
So I think what you suggest is that the flag `label_all_tokens` should be `True` to get a fair evaluation in the token classification example. https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/examples/pytorch/token-classification/run_ner.py#L426-L445
For now, I was merely trying to reproduce the PyTorch example scripts using pipelines for evaluation, so I just went for the default `label_all_tokens=False`, where indeed only the first token of a word is used for evaluation.
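For context, the logic behind that flag is roughly the following (a paraphrase of the `run_ner.py` alignment step, not the exact source):
```python
def align_labels_with_tokens(word_ids, word_labels, label_all_tokens=False):
    previous_word_idx = None
    label_ids = []
    for word_idx in word_ids:
        if word_idx is None:
            label_ids.append(-100)  # special tokens are ignored by the loss
        elif word_idx != previous_word_idx:
            label_ids.append(word_labels[word_idx])  # first token of each word
        else:
            # other tokens of the same word get a real label only if label_all_tokens is set
            label_ids.append(word_labels[word_idx] if label_all_tokens else -100)
        previous_word_idx = word_idx
    return label_ids
```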
> It makes sense for the pipeline since, as a user in an app, or webapp, displaying part of words as entities is super odd in space spitted languages, however if I am evaluating two different models
Not sure about what you meant there, but actually the pipeline may return different labels for the same word! See e.g. https://huggingface.co/docs/transformers/task_summary#named-entity-recognition , and I recall reading at some point (in the course or doc) that it was probably better not to abstract away too much, at the expense of giving several labels for the same word.
Your thoughts about different predictions for tokens of the same word got me thinking whether this also depends on how the model was trained. I am not familiar at all with token classification (so my understanding may be broken), but naively: *"Why evaluate on the first token if the model was trained on all"*, and conversely *"Why evaluate on all tokens if the model was trained on only the first"*.
> Using offset_mappings is IMO the only viable way to get a robust alignment of tokens for two different tokenizations.
Going that route, you can directly use the various aggregation_strategy (even if they work only on space separated languages).
> Ofc many conflicts can (and will) occur, but at least you can define 1 single consistent strategy to deal with those.
>
> Another suggestion would be to start right of the bat on Chinese or Thai for instance, which will trigger a lot more questions than space separated languages, and usually answers for these languages tend to be correct for simpler languages like English (and more correct that what the naive thing for English would be). And the solutions tend to be simpler too.
Anyways, I will have a deeper look at your two suggestions. One thing I was wondering today is how to deal with datasets such as https://huggingface.co/datasets/msra_ner , whether it makes sense to join by space at all, and whether it could hurt performance (see e.g. https://huggingface.co/abhishek/autonlp-japanese-sentiment-59363/blob/main/vocab.txt#L23394 where several characters can be a token, if my understanding is correct).
Out of curiosity, are there other cases where you think my approach (tokenize each word once first) would break?
Again, thank you for your feedback, it is really helpful.<|||||>> Out of curiosity, are there other cases where you think my approach (tokenize each word once first) would break?
I think it's pretty standard and definitely has merits which I didn't necessarily restate clearly enough. And it IS used as you suggested, so reusing it if that's how the model was trained is the best option. But you don't necessarily know how a model was trained, I think. Also I think it's not necessarily super general, and the odd case would pop up fast if you wanted to try to evaluate some random model which has a very different tokenization, for instance.
Again, my personal take would be to tackle the most general case directly first, possibly on a non-space-separated language.
In `tokenizers` at least going the general route was often the easiest in the long run.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,138 | closed | Extend Transformers Trainer Class to Enable CPU AMP and Integrate Intel Extension for PyTorch | # What does this PR do?
This PR supports the feature request https://github.com/huggingface/transformers/issues/17137
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-09-2022 08:59:07 | 05-09-2022 08:59:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @stas00 <|||||>Like #17153 I'm not entirely sure if this is best put here or in Optimum, so would like to hear back from @mfuntowicz and @LysandreJik before taking a deeper dive in the PR and review :-)<|||||>Hi, @mfuntowicz @LysandreJik @sgugger , any suggestion about this PR? We are happy to have a discussion here on any concerns.
Currently, BF16 AMP is already supported for GPU in Transformers, and with this PR we could extend that to the CPU side both for inference and training. Further, users could get performance benefits by using IPEX with just one argument, "--use_ipex".
Thanks.
<|||||>> Thanks for addressing the comments! We're just missing a test now.
@sgugger Hi, here we have added IPEX unit tests for the trainer (covering train, evaluate, predict). Could you please have a review of them? Thanks!<|||||>I see `Intel Haswell` for the 4 runners (push/scheduled, single/multi GPUs)<|||||>> I see `Intel Haswell` for the 4 runners (push/scheduled, single/multi GPUs)
Great! Thank you, @ydshieh
@sgugger, do we want the IPEX tests in the live push CI - they are very fast, but installing `intel_extension_for_pytorch` was very slow for pt-1.10 (was fast for pt-1.11) - or just scheduled?<|||||>Scheduled tests is enough.<|||||>> The doc looks great - thank you, @jianan-gu!
>
> Added one small grammar fix
>
> Would you like to add a benchmark to show the speed with and without ipex? otherwise it's very hard to tell whether it's worth the effort trying it.
>
> as I mentioned earlier you'd just run the trainer twice w/ and w/o ipex and make sure to add `--skip_memory_metrics 0` and it'll print you the speed at which it finished each run.
>
> If it's not too much trouble that is. I'm asking since you probably have done a lot of experiments already and saw the power of this extension, while none of us did.
Hi, thanks for your suggestions and reply.
For the benchmark that shows the speed with and without ipex, yes, we could do that and present the speedup results there.
Also, we have done a lot of experiments like Bert training, and we will provide the performance gains but it would take some time (may cost 1-2 weeks) to go through the internal process to make them publicly available.
Besides, is it good enough if we demonstrate the speedup with relative performance gain (like the table [here](https://intel.github.io/intel-extension-for-pytorch/1.11.200/tutorials/performance.html))? Thanks<|||||>> Besides, is it good enough if we demonstrate the speedup with relative performance gain (like the table [here](https://intel.github.io/intel-extension-for-pytorch/1.11.200/tutorials/performance.html))?
I don't think so. Those performance numbers can't be reproduced by a user, so they are not very practical.
As I suggested there is no need for any major benchmarks, here is a simple eval one:
```
# no gpus / no ipex
PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES= python \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps \
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \
--logging_steps 500 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 16 \
--predict_with_generate --eval_steps 150 --sortish_sampler --source_lang en \
--target_lang ro --dataset_name wmt16 --dataset_config ro-en --source_prefix \
'translate English to Romanian: ' --val_max_target_length 128 --warmup_steps \
50 --max_eval_samples 500 --skip_memory_metrics 0 --bf16
[...]
***** eval metrics *****
before_init_mem_cpu = 1000MB
eval_bleu = 24.1261
eval_gen_len = 39.554
eval_loss = 3.786
eval_mem_cpu_alloc_delta = 279MB
eval_mem_cpu_peaked_delta = 437MB
eval_runtime = 0:01:53.04
eval_samples = 500
eval_samples_per_second = 4.423
eval_steps_per_second = 0.283
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
# no gpus / with ipex
PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES= python \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps \
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \
--logging_steps 500 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 16 \
--predict_with_generate --eval_steps 150 --sortish_sampler --source_lang en \
--target_lang ro --dataset_name wmt16 --dataset_config ro-en --source_prefix \
'translate English to Romanian: ' --val_max_target_length 128 --warmup_steps \
50 --max_eval_samples 500 --skip_memory_metrics 0 --bf16 --no_cuda --use_ipex
***** eval metrics *****
before_init_mem_cpu = 1010MB
eval_bleu = 24.1261
eval_gen_len = 39.554
eval_loss = 3.7863
eval_mem_cpu_alloc_delta = 518MB
eval_mem_cpu_peaked_delta = 439MB
eval_runtime = 0:01:34.41
eval_samples = 500
eval_samples_per_second = 5.296
eval_steps_per_second = 0.339
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
```
So 5.296 vs 4.423 eval_samples_per_second on my machine. ~16% speedup - that's a good start.
```
model name : 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz
```
Probably would be better to test with a slightly larger model, as it is likely to get better speedup but it helps to see that it actually works!
This is good enough for a doc.
And down the road we can surely boost it with better, more in-depth benchmarks.
What do you think?
<|||||>> Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance.
Given your description - is there a way to check that ipex is actually doing anything, other than being installed and import'able?
i.e. currently it's just:
```
def is_ipex_available():
return importlib.util.find_spec("intel_extension_for_pytorch") is not None
```
But if user's CPU is not Intel or not the right kind of Intel should it tell the user it's not supported and assert? or is it already the case? I guess mine is the right CPU so I it doesn't fail.<|||||>> But if user's CPU is not Intel or not the right kind of Intel should it tell the user it's not supported and assert? or is it already the case? I guess mine is the right CPU so I it doesn't fail.
Hi @stas00 The optimization in IPEX is a superset of BF16. It optimizes for CPU with AVX-512 or above. It can also functionally work for CPUs with AVX2. So, I expect it also functionally works for AMD CPUs (even though we do not benchmark perf on AMD CPUs). IPEX pre-compiles multiple kernels for various CPU ISAs for an op ahead-of-time and does runtime kernel dispatch according to the underlying CPU ISA capability. IPEX doesn't explicitly check CPU types internally. With this, we are open to whether and what additional checks are needed to add in this PR.<|||||>> > Besides, is it good enough if we demonstrate the speedup with relative performance gain (like the table [here](https://intel.github.io/intel-extension-for-pytorch/1.11.200/tutorials/performance.html))?
>
> I don't think so. Those performance numbers can't be reproduced by a user, so they are very practical.
>
> As I suggested there is no need for any major benchmarks, here is a simple eval one:
>
> ```
> # no gpus / no ipex
> PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES= python \
> examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
> --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps \
> --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \
> --logging_steps 500 --max_source_length 128 --max_target_length 128 \
> --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 16 \
> --predict_with_generate --eval_steps 150 --sortish_sampler --source_lang en \
> --target_lang ro --dataset_name wmt16 --dataset_config ro-en --source_prefix \
> 'translate English to Romanian: ' --val_max_target_length 128 --warmup_steps \
> 50 --max_eval_samples 500 --skip_memory_metrics 0 --bf16
> [...]
> ***** eval metrics *****
> before_init_mem_cpu = 1000MB
> eval_bleu = 24.1261
> eval_gen_len = 39.554
> eval_loss = 3.786
> eval_mem_cpu_alloc_delta = 279MB
> eval_mem_cpu_peaked_delta = 437MB
> eval_runtime = 0:01:53.04
> eval_samples = 500
> eval_samples_per_second = 4.423
> eval_steps_per_second = 0.283
> init_mem_cpu_alloc_delta = 0MB
> init_mem_cpu_peaked_delta = 0MB
>
> # no gpus / no ipex
> PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES= python \
> examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
> --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps \
> --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \
> --logging_steps 500 --max_source_length 128 --max_target_length 128 \
> --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 16 \
> --predict_with_generate --eval_steps 150 --sortish_sampler --source_lang en \
> --target_lang ro --dataset_name wmt16 --dataset_config ro-en --source_prefix \
> 'translate English to Romanian: ' --val_max_target_length 128 --warmup_steps \
> 50 --max_eval_samples 500 --skip_memory_metrics 0 --bf16 --no_cuda --use_ipex
> ***** eval metrics *****
> before_init_mem_cpu = 1010MB
> eval_bleu = 24.1261
> eval_gen_len = 39.554
> eval_loss = 3.7863
> eval_mem_cpu_alloc_delta = 518MB
> eval_mem_cpu_peaked_delta = 439MB
> eval_runtime = 0:01:34.41
> eval_samples = 500
> eval_samples_per_second = 5.296
> eval_steps_per_second = 0.339
> init_mem_cpu_alloc_delta = 0MB
> init_mem_cpu_peaked_delta = 0MB
> ```
>
> So 5.296 vs 4.423 eval_samples_per_second on my machine. ~16% speedup - that's a good start.
>
> ```
> model name : 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz
> ```
>
> Probably would be better to test with a slightly larger model, as it is likely to get better speedup but it helps to see that it actually works!
>
> This is good enough for a doc.
>
> And down the road we can surely boost it with better more indepth benchmarks.
>
> What do you think?
Sure, got it, so we will prepare for benchmarking example models to collect performance numbers (like you showed above) and the internal review process. And after that, we will update the doc with those performance numbers.
Thanks.<|||||>> Hi @stas00 The optimization in IPEX is a superset of BF16. It optimizes for CPU with AVX-512 or above. It can also functionally work for CPUs with AVX2. So, I expect it also functionally works for AMD CPUs (even though we do not benchmark perf on AMD CPUs). IPEX pre-compiles multiple kernels for various CPU ISAs for an op ahead-of-time and does runtime kernel dispatch according to the underlying CPU ISA capability. IPEX doesn't explicitly check CPU types internally. With this, we are open to whether and what additional checks are needed to add in this PR.
Perhaps mention it in the doc? just briefly - as in AMD CPUs and older Intel CPUs are likely to result in a better performance as well under IPEX.
> Sure, got it, so we will prepare for benchmarking example models to collect performance numbers (like you showed above) and the internal review process. And after that, we will update the doc with those performance numbers.
I'd be perfectly happy to merge this sooner not to press you if you kindly commit to adding at least some benchmarks later in a new PR - will that be a good arrangement for you guys?<|||||>Hi @stas00
> Perhaps mention it in the doc? just briefly - as in AMD CPUs and older Intel CPUs are likely to result in a better performance as well under IPEX.
Thanks for the suggestions. We revised the doc accordingly. Could you please review?
> I'd be perfectly happy to merge this sooner not to press you if you kindly commit to adding at least some benchmarks later in a new PR - will that be a good arrangement for you guys?
Sure, sounds good to us. We will update the perf numbers in a separate PR as soon as they are ready.
<|||||>@jianan-gu, fyi, ipex fails with pt-1.12
```
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate_with_ipex
(line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training_with_ipex
(line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_predict_with_ipex
(line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
```
we are disabling the tests for now, and please let us know when this is fixed, so that we can reenable those. Thank you!<|||||>> @jianan-gu, fyi, ipex fails with pt-1.12
>
> ```
> tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate_with_ipex
> (line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
> tests/trainer/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training_with_ipex
> (line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
> tests/trainer/test_trainer.py::TrainerIntegrationTest::test_predict_with_ipex
> (line 6) ImportError: /usr/local/lib/python3.8/dist-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-cpu.so: undefined symbol: _ZNK3c1010TensorImpl5sizesEv
> ```
>
> we are disabling the tests for now, and please let us know when this is fixed, so that we can reenable those. Thank you!
Hi @stas00
Thanks for the information.
As mentioned in this [issue](https://github.com/huggingface/transformers/issues/17962), IPEX 1.12 release is [available ](https://intel.github.io/intel-extension-for-pytorch/1.12.0/tutorials/installation.html), and we also open a PR https://github.com/huggingface/transformers/pull/18072 to enhance the integration for this version mismatch issue to avoid breaking Trainer;
Besides, for the performance data we discussed in this PR, we were mostly working on the IPEX 1.12 release during the past weeks, and we would like to prepare the data based on the new release. It would take some time, thanks.<|||||>I do not understand where is this implementation added?
import transformers
from transformers import T5ForConditionalGeneration,T5Tokenizer,T5TokenizerFast
model1a = T5ForConditionalGeneration.from_pretrained('t5-base',low_cpu_mem_usage=True)
tokenizer1 = T5TokenizerFast.from_pretrained('t5-base')
<|||||>@Oxi84, please see: https://huggingface.co/docs/transformers/perf_train_cpu#mixed-precision-with-ipex
It's integrated into HF Trainer. |
transformers | 17,137 | closed | Speed up Hugging Face Models with Intel Extension for PyTorch* | ### Feature request
## Extend Trainer to Enable CPU AMP and Integrate Intel Extension for PyTorch
### Design
_Overview usage of Integrating Intel Extension for PyTorch:_

Intel® Extension for PyTorch* would provide optimizations both on training and inference for users, including Graph, AMP and Optimizer optimizations.
_AMP usage from Intel Extension for PyTorch:_

Since Transformers already has bf16 support based on GPU, we would extend this AMP support to CPU and hence further adopt AMP optimizations from Intel Extension for PyTorch.
### Implementation

As shown in the figure above, we naturally follow the philosophy of the Trainer class from Transformers to implement the integration of Intel Extension for PyTorch.
The enabling of Intel Extension for PyTorch is triggered by user inputs, and then applied by model init (e.g., preparing AMP backend) and wrap model (e.g., IPEX optimization API) stages.
Trainer currently only supports AMP with BF16/FP16 on GPU (torch.cuda.amp, apex) while BF16 AMP for CPU has been enabled since PyTorch-1.10. To enable CPU AMP, we have to extend the AMP context of Trainer Class from GPU to both GPU and CPU.
We need to extend the Trainer class to enable CPU AMP and also integrate Intel Extension for PyTorch.
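As a rough sketch (illustrative names, not the final Trainer code), the model/optimizer wrapping step could look like:
```python
import torch
import intel_extension_for_pytorch as ipex

def wrap_with_ipex(model, optimizer=None, use_bf16=False, training=True):
    # Illustrative helper only; the actual integration lives inside Trainer
    dtype = torch.bfloat16 if use_bf16 else torch.float32
    if training:
        model.train()
        return ipex.optimize(model, dtype=dtype, optimizer=optimizer)  # returns (model, optimizer)
    model.eval()
    return ipex.optimize(model, dtype=dtype)
```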
_The current workflow for AMP GPU is as follows:_

CPU or GPU AMP is selected by adding 'cpu_amp' and 'cuda_amp' to 'half_precision_backend'. To use Intel Extension for PyTorch, we also add 'use_ipex' to TrainingArguments. The workflow should be as in the following figure:

## Use case
Take an example of the use cases on [Transformers Question-Answering task](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)
**Training**
- Default Training :
<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/</pre>
- Training with IPEX:
<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
<b>--use_ipex</b></pre>
- Training with IPEX using BF16 AMP on CPU :
<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>
**Inference**
- Default Inference:
<pre> python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \</pre>
- Inference with IPEX using Torchscript mode:
<pre> python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
<b> --use_ipex \ </b>
<b> --jit_mode </b></pre>
- Inference with IPEX using Torchscript mode with BF16 AMP on CPU:
<pre> python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
<b>--use_ipex \ </b>
<b>--jit_mode \ </b>
<b>--bf16 --no_cuda</b></pre>
### Motivation
Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision (AMP) for CPU backend has been enabled since PyTorch-1.10 while it is not intergraded into the Huggingface/Transformers. At the same time, Intel Extension for PyTorch provides some general optimizations for transformer series models. We plan to integrate CPU AMP into Huggingface/Transformers and use Intel Extension for PyTorch to speed up Transformers series models both for their training and inference.
## Introduction to Intel Extension for PyTorch*
Intel® Extension for PyTorch* extends PyTorch with optimizations for an extra performance boost on Intel hardware. The intention of the extension is to deliver up-to-date features and optimizations for PyTorch on Intel hardware, examples include AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX). It encompasses the following features to speed up the inference and training of Transformers series models:
### Channels Last
Compared to the default NCHW memory format, the channels_last (NHWC) memory format could further accelerate Transformers models (eg. wav2vec2 models) with convolutional neural network layers. In Intel® Extension for PyTorch*, NHWC memory format has been enabled for most key CPU operators while some of them have been merged to the PyTorch master branch.
### Auto Mixed Precision (AMP)
Users can get better performance and user experience with CPU AMP. The support of Auto Mixed Precision (AMP) with BFloat16 for CPU and BFloat16 optimization of operators have been massively enabled in Intel® Extension for PyTorch*, and partially upstreamed to the PyTorch master branch.
### Graph Optimization
To further optimize the performance of the Transformers series model with Torchscript, Intel® Extension for PyTorch* supports the fusions of frequently used operator patterns. Patterns like Multi-head-attention fusion, concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm fusion and etc, are enabled and perform well. According to our analysis, ~70% of most popular NLP tasks in question-answering, text-classification, and token-classification can get performance benefits with these fusion patterns for both Float32 and BFloat16 (AMP) precision.
### Optimizer Optimization
Optimizers are one of the key parts of the training workloads. Intel Extension for PyTorch brings two types of optimizations to optimizers: 1. Operator fusion for the computation in the optimizers. 2. This [joint blog from Intel and Facebook](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659) shows that DLRM training can get 1.4x performance speedup with BFloat16 using same parameters as Float32. BFloat16 is a low precision float datatype, to get convergence with BFloat16, the Intel extension for PyTorch provided a SplitSGD which can reduce the memory footprint of the master weights by half compared with SGD using master weight. Currently, Intel Extension for PyTorch has already applied optimizations on common PyTorch optimizers like SGD and Adagrad. Further, the Adam optimizers, which are widely used in Transformers, are also on the plan of being optimized, which could transparently bring benefits to users.
### Bert Model Performance Speed up with Intel Extension for PyTorch vs Stock PyTorch
#### Float32 (IPEX vs PT) and BFloat16 (IPEX vs PT) comparison
<table border="1" cellpadding="10" align="center" class="perf_table">
<tbody>
</tbody><colgroup><col>
<col>
<col>
</colgroup><colgroup span="2"></colgroup>
<colgroup span="2"></colgroup>
<colgroup><col>
<col>
<col>
</colgroup><tbody><tr>
<th rowspan="2" scope="col">Hardware</th>
<th rowspan="2" scope="col">Workload<sup>1</sup></th>
<th rowspan="2" scope="col">Precision</th>
<th colspan="2" scope="colgroup">Throughput Inference<sup>2</sup></th>
<th colspan="2" scope="colgroup">Realtime Inference<sup>3</sup></th>
<th rowspan="2" scope="col">Model Type</th>
<th rowspan="2" scope="col">Dataset</th>
<th rowspan="2" scope="col">Misc.</th>
</tr>
<tr>
<th scope="col">Batch Size</th>
<th scope="col">Boost Ratio</th>
<th scope="col">Batch Size</th>
<th scope="col">Boost Ratio</th>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" rowspan="10" scope="col">Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz</td>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" scope="col">BERT-Large</td>
<td style="text-align: center; vertical-align: middle" scope="col">Float32</td>
<td style="text-align: center; vertical-align: middle" scope="col">80</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.14x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.02x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">Squad</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=384<br>Task: Question Answering</td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" scope="col">Bert-Base</td>
<td style="text-align: center; vertical-align: middle" scope="col">Float32</td>
<td style="text-align: center; vertical-align: middle" scope="col">160</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.10x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.33x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">MRPC</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=128<br>Task: Text Classification</td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" rowspan="2" scope="col">Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz</td>
<td style="text-align: center; vertical-align: middle" scope="col">BERT-Large</td>
<td style="text-align: center; vertical-align: middle" scope="col">BFloat16</td>
<td style="text-align: center; vertical-align: middle" scope="col">56</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.67x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.45x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">Squad</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=384<br>Task: Question Answering</td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" scope="col">Bert-Base</td>
<td style="text-align: center; vertical-align: middle" scope="col">BFloat16</td>
<td style="text-align: center; vertical-align: middle" scope="col">112</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.77x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.18x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">MRPC</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=128<br>Task: Text Classification</td>
</tr>
</tbody>
</table><br>
#### IPEX BFloat16 vs PT Float32 comparison
<table border="1" cellpadding="10" align="center" class="perf_table">
<tbody>
</tbody><colgroup><col>
<col>
<col>
</colgroup><colgroup span="2"></colgroup>
<colgroup span="2"></colgroup>
<colgroup><col>
<col>
<col>
</colgroup><tbody><tr>
<th rowspan="2" scope="col">Hardware</th>
<th rowspan="2" scope="col">Workload<sup>1</sup></th>
<th rowspan="2" scope="col">Precision</th>
<th colspan="2" scope="colgroup">Throughput Inference<sup>2</sup></th>
<th colspan="2" scope="colgroup">Realtime Inference<sup>3</sup></th>
<th rowspan="2" scope="col">Model Type</th>
<th rowspan="2" scope="col">Dataset</th>
<th rowspan="2" scope="col">Misc.</th>
</tr>
<tr>
<th scope="col">Batch Size</th>
<th scope="col">Boost Ratio</th>
<th scope="col">Batch Size</th>
<th scope="col">Boost Ratio</th>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" rowspan="10" scope="col">Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz
</td>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" scope="col">BERT-Large</td>
<td style="text-align: center; vertical-align: middle" scope="col">IPEX-BF16 over PT-Float32</td>
<td style="text-align: center; vertical-align: middle" scope="col">56</td>
<td style="text-align: center; vertical-align: middle" scope="col">2.25x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">2.32x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">Squad</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=384<br>Task: Question Answering</td>
</tr>
<tr>
<td style="text-align: center; vertical-align: middle" scope="col">BERT-Base</td>
<td style="text-align: center; vertical-align: middle" scope="col">IPEX-BF16 over PT-Float32</td>
<td style="text-align: center; vertical-align: middle" scope="col">56</td>
<td style="text-align: center; vertical-align: middle" scope="col">2.08x</td>
<td style="text-align: center; vertical-align: middle" scope="col">1</td>
<td style="text-align: center; vertical-align: middle" scope="col">1.76x</td>
<td style="text-align: center; vertical-align: middle" scope="col">NLP</td>
<td style="text-align: center; vertical-align: middle" scope="col">MRPC</td>
<td style="text-align: center; vertical-align: middle" scope="col">max_seq_len=128<br>Task: Text Classification</td>
</tr>
</tbody>
</table><br>
<sup>1. <a href="https://github.com/IntelAI/models/tree/pytorch-r1.10-models">Model Zoo for Intel® Architecture</a></sup>
<br>
<sup>2. Throughput inference runs with single instance per socket.</sup>
<br>
<sup>3. Realtime inference runs with multiple instances, 4 cores per instance.</sup>
<br><p><em>Note:</em> Performance numbers with stock PyTorch are measured with its most performant configuration.</p>
### Your contribution
Submitting PRs to support this Feature request ticket:
[Extend Transformers Trainer Class to Enable PyTorch Torchscript for Inference](https://github.com/huggingface/transformers/pull/17153)
[Extend Transformers Trainer Class to Enable PyTorch SGD/Adagrad Optimizers for Training](https://github.com/huggingface/transformers/pull/17154)
[Extend Transformers Trainer Class to Enable CPU AMP and Integrate Intel Extension for PyTorch](https://github.com/huggingface/transformers/pull/17138) | 05-09-2022 08:54:35 | 05-09-2022 08:54:35 | cc @stas00 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Implemented in https://github.com/huggingface/transformers/pull/17138 |
transformers | 17,136 | closed | PyTorch FSDP integration in Trainer | # What does this PR do?
PyTorch recently upstreamed the Fairscale FSDP into PyTorch Distributed with additional optimizations. This PR is aimed at integrating it into Trainer API.
- It enables Distributed Training at Scale. It's a wrapper for sharding Module parameters across data parallel workers. This is inspired by Xu et al. as well as the ZeRO Stage 3 from DeepSpeed.
- PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.
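For context, the core of the upstream API is just wrapping the module (a bare-bones sketch assuming a standard `torchrun` + process-group setup, not the Trainer integration itself):
```python
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# assumes launch via `torchrun --nproc_per_node=N this_script.py`
dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
model = FSDP(model)  # parameters are sharded across data-parallel workers
```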
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 05-09-2022 05:58:37 | 05-09-2022 05:58:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,135 | closed | Add DebertaV2ForMultipleChoice | # What does this PR do?
Adds `DebertaV2ForMultipleChoice`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-08-2022 22:54:17 | 05-08-2022 22:54:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also for additional context, the implementation of the multiple choice model is taken from `RobertaForMultipleChoice`.<|||||>Thanks for your contribution @zphang! |
transformers | 17,134 | closed | Updated link for `generate` method | The previous link did not lead to the `generate` method and therefore didn't show the `kwargs` for text generation. This new link rectifies this.
# What does this PR do?
This PR improves the documentation. When trying to view the keyword args for text generation, the documentation currently links to the wrong place. This became evident when trying to answer (this question)[https://discuss.huggingface.co/t/where-can-i-get-all-the-parameters-for-each-pipeline-task/17564] in the HF forum. This PR rectifies this and links to the right place in the documentation that shows the list of keyword args for text generation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-08-2022 18:42:47 | 05-08-2022 18:42:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17134). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,133 | closed | changing Encoder's embedding table in MarianEncoder | Hi,
I am trying to change the Encoder's (MarianEncoder) embedding table (`self.embed_tokens`) during inference.
I tried adding the following lines to the MarianEncoder `__init__` function:
```python
# note: assumes `torch`, `torch.nn as nn` and `torch.nn.Parameter` are already imported in the module
num_tokens, embedding_dim = self.embed_tokens.weight.size()
new_embeddings = nn.Embedding(num_tokens, embedding_dim)
new_embeddings.to(self.embed_tokens.weight.device, dtype=self.embed_tokens.weight.dtype)
self._init_weights(new_embeddings)
new_embeddings.weight.data = Parameter(torch.rand(num_tokens, embedding_dim))
self.set_input_embeddings(new_embeddings)
```
I expect the BLEU score of the translations to decrease since I replaced the embedding table with a random matrix, but it seems that it doesn't affect the results.
Can someone explain to me how to change the embedding table?
thanks,
Bar | 05-08-2022 13:36:17 | 05-08-2022 13:36:17 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,132 | closed | Dataset streaming example not working | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.173.el7-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0a0+17540c5 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (gpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the guide to train a model in streaming mode using the [dataset-streaming](https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects/dataset-streaming) directory results in the following error.
```
[11:11:16] - INFO - datasets_modules.datasets.oscar.84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2.oscar - generating examples from = https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_480.txt.gz
Token indices sequence length is longer than the specified maximum sequence length for this model (1195 > 512). Running this sequence through the model will result in indexing errors
Traceback (most recent call last):
File "./run_mlm_flax_stream.py", line 549, in <module>
eval_samples = advance_iter_and_group_samples(training_iter, data_args.num_eval_samples, max_seq_length)
File "./run_mlm_flax_stream.py", line 284, in advance_iter_and_group_samples
samples = {k: samples[k] + tokenized_samples[k] for k in tokenized_samples.keys()}
File "./run_mlm_flax_stream.py", line 284, in <dictcomp>
samples = {k: samples[k] + tokenized_samples[k] for k in tokenized_samples.keys()}
TypeError: can only concatenate list (not "int") to list
```
### Expected behavior
```shell
Model training to start.
```
| 05-08-2022 09:20:43 | 05-08-2022 09:20:43 | Hey @HLasse,
Note that datasets streaming is not yet official supported, but just in the `research_folder` directory. We sadly don't have the capacity to maintain these scripts. Once dataset streaming fully works, we'll upgrade those scripts to the "main" examples folder.<|||||>This issue has a suggested solution in PR #17203 |
transformers | 17,131 | closed | Resumption of training using the same parameters as before fails with CUDA out of memory error. | ### System Info
```shell
- `transformers` version: 4.15.0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.6.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ydshieh
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I used longformer_training.py to train Longformer. Without hitting a CUDA out of memory error, the training continues until the server time limit is reached.
[longformer_training.txt](https://github.com/huggingface/transformers/files/8646199/longformer_training.txt)
2. I rented a server again and resumed longformer training using longformer_resuming_training.py. For some reason I got a CUDA out of memory error.
[longformer_resume_training.txt](https://github.com/huggingface/transformers/files/8646200/longformer_resume_training.txt)
Errors are as follows.
[error.txt](https://github.com/huggingface/transformers/files/8646204/error.txt)
### Expected behavior
```shell
I am using Huggingface to research patent documents and I am really in need of help as my term is coming to an end. Your help would be really appreciated.
The expected behavior is that the same model is trained before and after training is resumed, so it will not run out of CUDA memory after resumption and will complete training successfully.
Thank you very much in advance.
```
| 05-08-2022 01:53:27 | 05-08-2022 01:53:27 | Hi, @YISTANFORD
My colleague sgugger might have better ideas. But here are some suggestions from my side:
- It's probably a good idea to work with newer versions.
- Python 3.6 is EOL (in Dec 2021)
- PyTorch is now 1.11, but 1.10 might be good too
- TensorFlow, `transformers`, etc.
- (Of course, I understand you might have some constraints in your work.)
- It would be very helpful if you could try to
- measure the GPU memory allocated before the training starts (i.e. before the line `for epoch in range(...)` in `train()`)
- check this GPU memory usage in the original train and the one resumed from a checkpoint to see if there is any difference.
- You might also want to try something like `torch.cuda.empty_cache()` (at some places, but I am not sure where at this moment).
I also found that the `REAL_BS` and `ACCUM_NUM` are different in the 2 scripts. But I guess you tried to reduce the GPU memory usage by using gradient accumulation.
<|||||>Hello @ydshieh ,
Thank you for your reply.
I really appreciate all the tips and tricks.
May I try another version and measure GPU memory usage and report back to you?
I would like to version up python to 3.7 first and pytorch to 1.10.
By the way, I apologize for my lack of knowledge, but I was wondering if you know of any way to measure GPU memory usage at each step of the code.
Thank you very much again.
<|||||>Hi @YISTANFORD,
You can check https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_stats, I would probably try both
- [memory_reserved](https://pytorch.org/docs/stable/generated/torch.cuda.memory_reserved.html#torch.cuda.memory_reserved)
- [memory_allocated](https://pytorch.org/docs/stable/generated/torch.cuda.memory_allocated.html#torch.cuda.memory_allocated)
You can also use `nvidia-smi` to get a bit more information, by calling bash command in Python using `os` module.
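A minimal sketch of what such a measurement could look like (where exactly to call it inside `train()` is an assumption):
```python
import os
import torch

def log_gpu_memory(tag: str, device: int = 0) -> None:
    # Memory occupied by live tensors vs. memory held by the CUDA caching allocator.
    allocated = torch.cuda.memory_allocated(device) / 1024**2
    reserved = torch.cuda.memory_reserved(device) / 1024**2
    print(f"[{tag}] allocated={allocated:.1f} MiB, reserved={reserved:.1f} MiB")
    # Optional: the driver-level view of the whole device.
    os.system("nvidia-smi --query-gpu=memory.used,memory.total --format=csv")

# e.g. call log_gpu_memory("before training loop") right before `for epoch in range(...)`,
# both in the original run and in the run resumed from a checkpoint, and compare.
```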
BTW, if possible, maybe with Python 3.8 or 3.9. But go ahead with what you could do :-)
Thank you!<|||||>Hello @ydshieh ,
Thank you very much for your kind guidance.
I will try python with 3.8 or 3.9.
Can I report back when I get the results?
Thank you very much again.<|||||>> Can I report back when I get the results?
Sure!<|||||>> Hello @ydshieh ,
>
> Thank you for your reply. I really appreciate all the tips and tricks. May I try another version and measure GPU memory usage and report back to you? I would like to version up python to 3.7 first and pytorch to 1.10. By the way, I apologize for my lack of knowledge, but I was wondering if you know of any way to measure GPU memory usage at each step of the code. Thank you very much again.
You can also use something like [WandB](https://wandb.ai/site/articles/monitor-improve-gpu-usage-for-model-training) to track your gpu utilization over time.
Also why do the scripts use different batch sizes and grad accumulations?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,130 | closed | Fix MLflowCallback end_run() and add support for tags and nested runs | # What does this PR do?
This PR includes:
- Add support for MLFLOW_TAGS, MLFLOW_RUN_ID, and MLFLOW_NESTED_RUN
- Ensure that MLflow runs that were started with `on_train_begin()` are also properly terminated when the `on_train_end()` hook is called.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 05-08-2022 00:26:45 | 05-08-2022 00:26:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,129 | closed | Remove placeholders patch in src/transformers/utils/fx.py | # What does this PR do?
This PR removes a patch that is not necessary anymore(since https://github.com/pytorch/pytorch/pull/59569 was merged long time ago)
cc: @michaelbenayoun
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-07-2022 21:58:28 | 05-07-2022 21:58:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17129). All of your documentation changes will be reflected on that endpoint.<|||||>When I try this out, I get this for BERT:
```
E File "<eval_with_key>.1", line 4
E SyntaxError: non-default argument follows default argument
../../miniconda3/envs/transformers/lib/python3.7/site-packages/torch/fx/graph_module.py:71: SyntaxError
```
Do you have any idea how this can be fixed? |
transformers | 17,128 | closed | fixing VisibleDeprecationWarning | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.17.1-arch1-1-x86_64-with-glibc2.35
- Python version: 3.9.7
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
Hi @LysandreJik ,
Could you please fix the `VisibleDeprecationWarning`?
Based on [this stackoverflow thread](https://stackoverflow.com/q/63097829), specifying `dtype=object` fixes the issue.
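For context, a minimal sketch of the NumPy behaviour the warning refers to (standalone illustration; depending on the NumPy version this either warns or raises an error):
```python
import numpy as np

ragged = [[1, 2, 3], [4, 5]]             # nested sequences of different lengths
arr = np.array(ragged)                    # triggers VisibleDeprecationWarning (or an error on newer NumPy)
arr_ok = np.array(ragged, dtype=object)   # explicitly requesting object dtype avoids it
```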
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Code
```
from transformers import pipeline
question_answerer = pipeline(task="question-answering", model="distilbert-base-cased-distilled-squad")
context = """
Captain America, and Wonder Woman.
Between 1939 and 1941 Detective Comics and its sister company, All-American Publications,
introduced popular superheroes such as Batman and Robin, Wonder Woman, the Flash,
Green Lantern, Doctor Fate, the Atom, Hawkman, Green Arrow and Aquaman.[7] Timely Comics,
the 1940s predecessor of Marvel Comics, had million-selling titles featuring the Human Torch,
the Sub-Mariner, and Captain America.[8]
As comic books grew in popularity, publishers began launching titles that expanded
into a variety of genres. Dell Comics' non-superhero characters (particularly the
licensed Walt Disney animated-character comics) outsold the superhero comics of the day.[12]
The publisher featured licensed movie and literary characters such as Mickey Mouse, Donald Duck,
Roy Rogers and Tarzan.[13] It was during this era that noted Donald Duck writer-artist
Carl Barks rose to prominence.[14] Additionally, MLJ's introduction of Archie Andrews
in Pep Comics #22 (December 1941) gave rise to teen humor comics,[15] with the Archie
Andrews character remaining in print well into the 21st century.[16]
At the same time in Canada, American comic books were prohibited importation under
the War Exchange Conservation Act[17] which restricted the importation of non-essential
goods. As a result, a domestic publishing industry flourished during the duration
of the war which were collectively informally called the Canadian Whites.
The educational comic book Dagwood Splits the Atom used characters from the comic
strip Blondie.[18] According to historian Michael A. Amundson, appealing comic-book
characters helped ease young readers' fear of nuclear war and neutralize anxiety
about the questions posed by atomic power.[19] It was during this period that long-running
humor comics debuted, including EC's Mad and Carl Barks' Uncle Scrooge in Dell's Four
Color Comics (both in 1952).[20][21]
"""
question = "What popular superheroes were introduced between 1939 and 1941?"
result = question_answerer(question=question, context=context)
print(result['answer'])
```
### Output
```
/home/myuser/anaconda3/lib/python3.9/site-packages/numpy/core/_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return array(a, dtype, copy=False, order=order)
EC's Mad and Carl Barks' Uncle Scrooge
```
### Expected behavior
```shell
Warning should not be generated. Accoring to [this post](https://forums.fast.ai/t/visibledeprecationwarning-creating-an-ndarray-from-ragged-nested-sequences-is-deprecated/81774/3), this warning is likely to turn into an error in future releases.
```
| 05-07-2022 19:19:10 | 05-07-2022 19:19:10 | Hey @mygithubid1! Would you like to open a PR with the fix you propose here?<|||||>Yes @LysandreJik.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,127 | closed | MarianTokenizerFast requested | ### Feature request
Hello, I want a MarianTokenizerFast. I am a soldier from Ukraine and we are using the pretrained MarianMT models to help translate Russian commands. Currently the tokenizer is too slow, so we need a faster one to translate faster! Sometimes it takes 1 second and the missile is already launched, so FAST IS NEEDED!
### Motivation
Yesterday Russia launched a missile to attack a hospital in my hometown. We intercepted the information and used MarianMT to translate it. But the tokenization was too slow, we failed to stop the missile, and a lot of people died.
### Your contribution
We can give you a Medal of Honor after we won the war. | 05-07-2022 15:10:26 | 05-07-2022 15:10:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,126 | closed | is it possible to split the embedding layers from bert during fine tuning and inference | in my bert model, parameters in embedding layers are huge since the vocabulary size is very big. So I plan to split the embedding layer from the entire model and use it as a independent module to calculate word embeddings. Is it possible to do like this? thanks
| 05-07-2022 14:38:57 | 05-07-2022 14:38:57 | Hello,
You can get the embeddings layer with model.embeddings as per
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/bert/modeling_bert.py#L848
```
print (model.embeddings)
#BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
```
Bonus: Could also just extract the embedding output as follows without splitting, I'm not sure why you are splitting?
https://github.com/huggingface/transformers/issues/1827#issuecomment-554028292
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,125 | closed | is it possible to split the embedding layers from bert during fine tuning and inference | ### System Info
```shell
In my BERT model, the parameters in the embedding layers are huge since the vocabulary size is very big. So I plan to split the embedding layer from the rest of the model and use it as an independent module to calculate word embeddings. Is it possible to do it like this? Thanks.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NONE
### Expected behavior
```shell
NONE
```
| 05-07-2022 14:35:09 | 05-07-2022 14:35:09 | |
transformers | 17,124 | closed | Encoder-Decoder model after fine tuning on Turkish dataset, generation gives the same results regardless of the input | Hello everyone, I need help with the training of the encoder-decoder model. I need to fine-tune a bert2bert for Turkish content summarization. I am using this sample notebook reference: https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb
After training on my custom dataset, when I generate using a test dataset, I get gibberish results regardless of the amount of training. I have attached the results below and one more observation I made is that the training loss instantly goes to near 0 values after a few steps of training I am not sure what I am doing wrong.
Here are the screenshots of output :


Here is the training loss :

Here is the full notebook that I used for finetuning :
https://colab.research.google.com/drive/188Lil4Uc3wY7nd1PEfCjMwSfPO-NXI94?usp=sharing
I am not sure what I am doing wrong? I would be grateful for any advice.
Thank you! | 05-07-2022 13:31:05 | 05-07-2022 13:31:05 | I highly recommend you get a workaround that works on the **latest** dataset and transformers library! I suffered a lot only for this reason.<|||||>> I highly recommend you get a workaround that works on the **latest** dataset and transformers library! I suffered a lot only for this reason.
What do you mean by a workaround? I am not sure what you are referring to. <|||||>You can look for the necessary changes that might be needed for the latest version and let me know if you find any.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi bro, I met similar problems like you. Have you found any solution? Thanks very much.<|||||>I’m having the same issue. Did someone figure it out? Thanks in advance.
---------------
Edit: Solved by using the following library versions:
` transformers==4.2.1`
` datasets==1.0.2`
` torch==1.6.0 `
credits to @salma-elshafey<|||||>> I’m having the same issue. Did someone figure it out? Thanks in advance.
>
> Edit: Solved by using the following library versions:
>
> ` transformers==4.2.1` ` datasets==1.0.2` `torch==1.6.0`
>
> credits to @salma-elshafey
It does not work for me :(. Could you please provide related links in your solution? Thanks very much. @mareloraby
<|||||>> > I’m having the same issue. Did someone figure it out? Thanks in advance.
> > Edit: Solved by using the following library versions:
> > ` transformers==4.2.1` ` datasets==1.0.2` `torch==1.6.0`
> > credits to @salma-elshafey
>
> It does not work for me :(. Could you please provide related links in your solution? Thanks very much. @mareloraby
Hey @tqnwhz, sorry I don’t have a reference I can link. My colleague who worked with an Encoder-Decoder model before helped me with that<|||||>Hello @tqnwhz!
I hope you are doing well.
Can you take a look at following issues :
- https://github.com/huggingface/blog/issues/292
- https://github.com/huggingface/transformers/issues/17122
If these doesn't solve check out this blog : https://huggingface.co/blog/warm-starting-encoder-decoder#data-preprocessing<|||||>@AniketRajpoot Do you have no more issue with new transformer's version(s)?<|||||>Actually I did not train the same model but rather a different generation model based on codeBERT. But it was having some different issues. But it did solve the problem for random output and loss going instantly zero. <|||||>Thanks very much! @AniketRajpoot
I'll conduct a few experiments to verify them.<|||||>@tqnwhz you could try this setting:
`transformers==4.18.0`
`datasets==2.1.0`<|||||>@AbuUbaida Thanks for your advice. I've tried this setting and it does not work :(. Given the fact that I've spent about two weeks trying to solve this but in vain. I plan to turn to other approaches rather than seq2seq.
Thanks again for your advices sincerely. Hope you everything goes well.<|||||>Hi @tqnwhz , could you provide the script and the datasets that could reproduce this issue. As this issue seems to happen a few times, I think it would be great if we can find the cause and fix it. But I need something that could reproduce it 🙏 Thank you. <|||||>Hi @ydshieh , I'm afraid that my code and dataset are not typical for this problem, since I try to use seq2seq to model multi-label text classification, rather than normal text generation task. <|||||>OK, no problem! |
transformers | 17,123 | closed | Add type hints for BigBirdPegasus and Data2VecText PyTorch models | Added missing type hints for the `BigBirdPegasus` and `Data2VecText` PyTorch models as requested in https://github.com/huggingface/transformers/issues/16059.
@Rocketknight1 | 05-07-2022 09:43:59 | 05-07-2022 09:43:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,122 | closed | Training issue of a Transformer based Encoder-Decoder model based on pre-trained BanglaBERT | I just tried to train an EncoderDeoder model for **summarization** task based on pre-trained *[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert)*, which is an **ELECTRA** discriminator model pre-trained with the Replaced Token Detection (RTD) objective. Surprisingly, after spending 4500 steps on 10k training data, the model wasn't trained a bit since the **ROUGE-2** scores were just **0.0000**. To make sure I used that 4500-checkpoint to generate summaries for testing purposes; it generated a fixed-length (50) output (even if I change the test input of different lengths) containing the **[CLS]** token **49** times and a **[SEP]** token **lastly**. Basically, I followed the [Warm-starting encoder-decoder models with 🤗Transformers](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook. Can anybody give any clue what could be the issue here? Thanks in advance.
In my case,
Tokenization **BanglaBERT** model:
```
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglabert")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
```
Input pre-processing function:
```
def process_data_to_model_inputs(batch):
inputs = tokenizer(batch['text'], padding="max_length", truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch['summary'], padding="max_length", truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]
return batch
```
Mapping the pre-processing function to the batches of examples:
```
train_data = train_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["text", "summary"]
)
train_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
valid_data = valid_data.map(
process_data_to_model_inputs,
batched=True,
batch_size=batch_size,
remove_columns=["text", "summary"]
)
valid_data.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
```
**BanglaBERT** model and its config settings:
```
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("csebuetnlp/banglabert", "csebuetnlp/banglabert")
bert2bert.config.decoder_start_token_id = tokenizer.bos_token_id
bert2bert.config.eos_token_id = tokenizer.eos_token_id
bert2bert.config.pad_token_id = tokenizer.pad_token_id
bert2bert.config.vocab_size = bert2bert.config.decoder.vocab_size
bert2bert.config.max_length = 128
bert2bert.config.min_length = 42
bert2bert.config.early_stopping = True
bert2bert.config.length_penalty = 2.0
bert2bert.config.num_beams = 8
bert2bert.config.remove_invalid_values = True
bert2bert.config.repetition_penalty = 2.0
bert2bert.config.length_penalty = 2.0
```
I used the **Seq2SeqTrainer** for training. The **Seq2SeqTrainingArguments** were as follows:
```
evaluation_strategy = "steps",
per_device_train_batch_size = batch_size,
per_device_eval_batch_size = batch_size,
predict_with_generate = True,
logging_steps = 1000,
save_steps = 500,
eval_steps = 5000,
warmup_steps = 500,
overwrite_output_dir = True,
save_total_limit = 2,
num_train_epochs = 20,
fp16 = True
``` | 05-07-2022 07:08:59 | 05-07-2022 07:08:59 | I am literally getting the same issue, invested hours trying to figure out what is wrong but still did not get it. I tried both BERTurk and mBERT and both of them gave the same issue and even training loss goes near 0 instantly I am not sure what to do.
<|||||>Sorry for being late. Yes, that's because the approach you are following works in a specific version, more precisely v4.2.1, but not in the current release (maybe 4.18).<|||||>> Sorry for being late. Yes, that's because the approach you are following works in a specific version, more precisely v4.2.1, but not in the current release (maybe 4.18).
Hi, sorry, but I am not sure what exactly to change; can you be more specific? Should I not use the EncoderDecoder class, or should I change the trainer? I am new to this!
Thank you in advance. <|||||>I believe he means to ensure your version of transformers and other relevant libraries match the example <|||||>>I believe he means to ensure your version of transformers and other relevant libraries match the example
@AniketRajpoot exactly this is and the adjustments needed for version 4.18 are in progress. I have opened another [issue](https://github.com/huggingface/blog/issues/292#issue-1228916923). you can keep your eyes if you want.<|||||>Thank you so much @AbuUbaida @RaedShabbir, I understand the issue!
|
transformers | 17,121 | closed | GPT2: Perfect training and evaluation loss, but terrible test-time performance | ### System Info
I get the following error when running `transformers-cli env`:
```shell
Traceback (most recent call last):
File "/Users/micah/opt/anaconda3/envs/fBERT/bin/transformers-cli", line 7, in <module>
from transformers.commands.transformers_cli import main
File "/Users/micah/opt/anaconda3/envs/fBERT/lib/python3.7/site-packages/transformers/commands/transformers_cli.py", line 26, in <module>
from .user import UserCommands
File "/Users/micah/opt/anaconda3/envs/fBERT/lib/python3.7/site-packages/transformers/commands/user.py", line 20, in <module>
from huggingface_hub.hf_api import HfFolder, create_repo, list_repos_objs, login, logout, whoami
ImportError: cannot import name 'list_repos_objs' from 'huggingface_hub.hf_api' (/Users/micah/opt/anaconda3/envs/fBERT/lib/python3.7/site-packages/huggingface_hub/hf_api.py)
```
`4.16.2` is the Transformers library version I'm using. I don't think any other info is relevant.
### Who can help?
Copying from the `GPT2` list above: @patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I encountered a pretty ridiculous failure mode (with my own training pipeline, and no modification to the transformers library) in which, using GPT2 (imported as `from transformers import GPT2Config, GPT2Model`):
- I was getting almost 0 training and validation loss
- I was getting very bad performance when feeding the model incomplete sequences (e.g. for test-time word-generation)
After much debugging, I found that the issue was that the value of `self.masked_bias` (currently `-1e4`, i.e. `-10000`) is not negative enough for its purpose, which is to implement the "mask" of the causal masked attention.
For some high enough learning rates (which are still stable, however), the network is able to find a hack to copy the input to the output (getting around the causal masking): just drive the attention scores below `-1e4`, and the causal mask will effectively not be doing anything! This means that the model is able to attend to the whole input when generating the output (even the future tokens), so it can simply copy it at training and validation time (getting 0 loss).
What I found while debugging [this line](https://github.com/huggingface/transformers/blob/215e0681e4c3f6ade6e219d022a5e640b42fcb76/src/transformers/models/gpt2/modeling_gpt2.py#L207) (before the softmax call):
```
>>> attn_weights[0,0,:4,:4]
[[-15258.9805, -10000.0000, -10000.0000, -10000.0000],
[-15044.7910, -16940.4766, -10000.0000, -10000.0000],
[-11722.1553, -13301.4287, -1438.0649, -10000.0000],
[ -9711.6445, -11315.6006, -1065.3066, -12052.6035]]
```
As one can see, the attention scores at the allowed (lower-triangle) positions have become even larger in magnitude than this fixed mask value, which leads the softmax to return non-zero weights for all positions, including the masked ones. The allowed (lower-triangle) entries should be much larger than the masked (upper-triangle) entries, so that the masked entries get assigned ~0 weight.
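A tiny numerical sketch of why this leaks probability mass (values taken from the first row of the tensor above):
```python
import torch

# Only the first position is actually allowed; its score (-15258.98) has drifted below
# the mask value (-10000) used for the three masked (future) positions.
row = torch.tensor([-15258.9805, -10000.0, -10000.0, -10000.0])
print(torch.softmax(row, dim=-1))
# -> approximately [0.0000, 0.3333, 0.3333, 0.3333]: the masked positions receive almost
#    all of the attention, because their "mask" value is larger than the legitimate score.
```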
### Expected behavior
In terms of fixes, there should probably be an assertion error that the weights never go below `self.masked_bias`, or setting it to an even lower value. Alternatively, it could return NaNs? It should definitely not fail silently.
| 05-07-2022 01:25:25 | 05-07-2022 01:25:25 | The original code by Andrej Kaparthy didn't have this issue, as it effectively set `self.masked_bias` to `float('-inf')`, as can be seen [here](https://github.com/karpathy/minGPT/blob/3ed14b2cec0dfdad3f4b2831f2b4a86d11aef150/mingpt/model.py#L71).<|||||>Hey @micahcarroll,
Thanks for reporting this! Your issue might have tipped over the domino that solves this issue :sweat_smile:
See: https://github.com/huggingface/transformers/issues/14859<|||||>@micahcarroll
Thanks for reporting, I am working on it. I am wondering if you trained from scratch or fine-tuned the original GPT2 model?
Also could you share what LR you used (as you mentioned `For some high enough learning rates`).
Thanks in advance!
<|||||>Thanks @ydshieh!
I was training from scratch on a sequential decision problem task (as in [Decision Transformers](https://github.com/kzl/decision-transformer)), with a learning rate of 1e-3.<|||||>Hi @micahcarroll
So far I wasn't able to reproduce the issue. I tried to train a causal LM model from scratch using GPT2 (without the pretrained weights). I did get some large weights with magnitude in -1000 ~ -4000 though.
Despite this is not absolutely necessary, and we believe the fix will work (i.e. use a very large negative value), it would be super if I can reproduce the issue, and verify that our approach will indeed avoid it without any new issue.
Is there any chance you can share your training script, training arguments, and possibly the datasets you use (if it is not confidential).<|||||>The code hasn't yet been licensed for public release (nor it will be at least until September).
However, I completely agree with you that using the largest negative value possible should be sufficient to fix the error. Especially if you can use "inf" in some form (`np.inf` or `float('inf')` as was done in the original minGPT code, linked in prior comment).<|||||>@micahcarroll Understand, thank you!
Fortunately, I was able to get this issue by training a T5 model, so I can see if the fix would prevent such situation :) <|||||>@micahcarroll This is fixed. It would be really appreciated if you can also verify on your side :-)
I am going to close the issue. |
transformers | 17,120 | closed | Update modeling_gptj.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-06-2022 22:55:10 | 05-06-2022 22:55:10 | Sorry, opened by mistake.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17120). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,119 | closed | Accumulate tokens into batches in `PreTrainedTokenizerBase.add_tokens()` | # What does this PR do?
For tokenizers with added tokens that only consist of a small number of special tokens or any number of special tokens with consecutive token IDs, this pull request reduces the time complexity of creating the trie from quadratic to linear. This can easily reduce the time it takes to load a pretrained tokenizer with many added tokens from tens of minutes to seconds.
Fixes #16936.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SaulLu @Narsil
| 05-06-2022 21:40:44 | 05-06-2022 21:40:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil Thank you for your review! A better fix would be if we could just accumulate all special and non-special tokens and then call `tokenizer.add_tokens()` once or twice at the end. However, since we need to maintain the order of tokens in the vocab, we need to call `tokenizer.add_tokens()` each time we encounter a token of a different type.
For malicious input, where special and non-special tokens are interleaved, my approach would lead to no speed-up compared to the current code but no slow-down either. For the typical case, where special and non-special tokens form two contiguous regions of identifiers, my approach leads to half the optimum speed and has only small diff against the current code.<|||||>I agree with @SaulLu's review - I'll let @Narsil take a final look and merge if he approves :)
Thank you for the PR, @Witiko!<|||||>@LysandreJik: Thank you for your review. In https://github.com/huggingface/transformers/issues/16936#issuecomment-1127621789, @Narsil indicated that they don't think they can do a proper review of this PR due to lack of time and familiarity with the code base.<|||||>@Witiko , I think everyone now agrees that this PR is a good step.
I'll let @LysandreJik decide when to merge.
<|||||>This pull request is now linked from [the witiko/mathberta model in the Model Hub][1], which is adversely affected by #16936. This will allow users to manually apply the fix before it has been merged and released, so that they can use the model.
[1]: https://huggingface.co/witiko/mathberta#how-to-use |
transformers | 17,118 | closed | Fix typo | # Fix typo in docs
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-06-2022 21:04:07 | 05-06-2022 21:04:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17118). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,117 | closed | No way to get ONLY the generated text, not including the prompt. | ### System Info
```shell
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@Narsil @patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
At first I thought I could just substring by the prompt's length. This doesn't work because there's a bug where it converts every instance of " ," to "," in the generated text.
For example, "Characters in this scene: King , Gertrude." becomes "Characters in this scene: King, Gertrude."
In https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py there are tons of options, but not a single one of them allows us to specify that ONLY the generated text should be returned, not including the prompt.
I could do a workaround where I replace all the " ," with "," myself, but I'm sure this is a code smell which could lead to future problems.
Example code:
```
gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, temperature=specifiedTemperature, max_length=calculated_max_length, min_length=calculated_min_length, repetition_penalty=specifiedRepetitionPenalty, bad_words_ids=badWordsTokens)
#gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
### Expected behavior
Two possibilities: Either don't modify the prompt at all so I can substring by the prompt's length, or have an option where we get only the generated text not including the prompt. | 05-06-2022 17:46:15 | 05-06-2022 17:46:15 | Hi @monsieurpooh ,
`generate` will not change: since it's a relatively low-level function, it really does exactly what it should do to the relevant tensors (`encoder-decoder` and `decoder-only` don't work the same way, for instance).
Two suggestions:
- Simple modification `gen_text = tokenizer.batch_decode(gen_tokens[input_ids.shape[0]:])[0]` (Ignore the first ids you sent)
- Use a pipeline:
```python
from transformers import pipeline
# This will remove the text for you.
pipe = pipeline(model="gpt2", return_full_text=False)
print(pipe("This is a test"))
```
Does that solve your issue ?<|||||>Thanks so much for your help Narsil! After a tiny bit of debugging and learning how to slice tensors, I figured out the correct code is: `tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0]`
It returns the correct tokens even when there's a space after some commas and periods.<|||||>Thank you for giving the correct code here, will help other users for sure ! :) |
transformers | 17,116 | closed | Pretrain on Wav2vec2 getting parameters did not receive grad for rank0. | ### System Info
Following [this example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) and running with [base model](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining#base) seem ok.
But when switching to the [large model](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining#large), the model name is './', so I guess this is a typo. Therefore I tried `facebook/wav2vec2-large-xlsr-53` and `facebook/wav2vec2-large-lv60`, and both fail with:
```
Parameters which did not receive grad for rank 0: wav2vec2.encoder.layers.16.final_layer_norm.bias, wav2vec2.encoder.layers.16.final_layer_norm.weight, wav2vec2.encoder.layers.16.feed_forward.output_dense.bias, wav2vec2.encoder.layers.16.feed_forward.output_dense.weight, wav2vec2.encoder.layers.16.feed_forward.intermediate_dense.bias, wav2vec2.encoder.layers.16.feed_forward.intermediate_dense.weight, wav2vec2.encoder.layers.16.layer_norm.bias, wav2vec2.encoder.layers.16.layer_norm.weight, wav2vec2.encoder.layers.16.attention.out_proj.bias, wav2vec2.encoder.layers.16.attention.out_proj.weight, wav2vec2.encoder.layers.16.attention.q_proj.bias, wav2vec2.encoder.layers.16.attention.q_proj.weight, wav2vec2.encoder.layers.16.attention.v_proj.bias, wav2vec2.encoder.layers.16.attention.v_proj.weight, wav2vec2.encoder.layers.16.attention.k_proj.bias, wav2vec2.encoder.layers.16.attention.k_proj.weight, wav2vec2.encoder.layers.15.final_layer_norm.bias, wav2vec2.encoder.layers.15.final_layer_norm.weight, wav2vec2.encoder.layers.15.feed_forward.output_dense.bias, wav2vec2.encoder.layers.15.feed_forward.output_dense.weight, wav2vec2.encoder.layers.15.feed_forward.intermediate_dense.bias, wav2vec2.encoder.layers.15.feed_forward.intermediate_dense.weight, wav2vec2.encoder.layers.15.layer_norm.bias, wav2vec2.encoder.layers.15.layer_norm.weight, wav2vec2.encoder.layers.15.attention.out_proj.bias, wav2vec2.encoder.layers.15.attention.out_proj.weight, wav2vec2.encoder.layers.15.attention.q_proj.bias, wav2vec2.encoder.layers.15.attention.q_proj.weight, wav2vec2.encoder.layers.15.attention.v_proj.bias, wav2vec2.encoder.layers.15.attention.v_proj.weight, wav2vec2.encoder.layers.15.attention.k_proj.bias, wav2vec2.encoder.layers.15.attention.k_proj.weight, wav2vec2.encoder.layers.14.final_layer_norm.bias, wav2vec2.encoder.layers.14.final_layer_norm.weight, wav2vec2.encoder.layers.14.feed_forward.output_dense.bias, wav2vec2.encoder.layers.14.feed_forward.output_dense.weight, wav2vec2.encoder.layers.14.feed_forward.intermediate_dense.bias, wav2vec2.encoder.layers.14.feed_forward.intermediate_dense.weight, wav2vec2.encoder.layers.14.layer_norm.bias, wav2vec2.encoder.layers.14.layer_norm.weight, wav2vec2.encoder.layers.14.attention.out_proj.bias, wav2vec2.encoder.layers.14.attention.out_proj.weight, wav2vec2.encoder.layers.14.attention.q_proj.bias, wav2vec2.encoder.layers.14.attention.q_proj.weight, wav2vec2.encoder.layers.14.attention.v_proj.bias, wav2vec2.encoder.layers.14.attention.v_proj.weight, wav2vec2.encoder.layers.14.attention.k_proj.bias, wav2vec2.encoder.layers.14.attention.k_proj.weight
Parameter indices which did not receive grad for rank 0: 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Follow https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining#large
### Expected behavior
```shell
Expect no such errors!
```
| 05-06-2022 15:57:07 | 05-06-2022 15:57:07 | Hey @Slyne,
Could you please make sure to set the parameter `layerdrop` to 0.0 in distributed settings? This: https://huggingface.co/facebook/wav2vec2-large-xlsr-53/blob/main/config.json#L57 needs to be set to 0.0 before pretraining<|||||>@patrickvonplaten Thanks! It works now. Just found this parameter seemed to do some regularization on model structure. Will huggingface support this trick ?
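For reference, a minimal sketch of the suggested `layerdrop` change (config name and loading path are assumptions, not the exact pretraining script):
```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-large-xlsr-53")
config.layerdrop = 0.0  # disable LayerDrop so every layer receives gradients on every rank
model = Wav2Vec2ForPreTraining(config)
```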
Do you mind fixing that typo and this parameter, and closing this issue?
Thanks!!!<|||||>Hey @Slyne,
It's quite difficult to get `layerdrop` working in distributed settings. Regarding the typo, feel free to open a PR to show what could be fixed if you want :-)
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,115 | closed | In a multi-gpu setup, use a fraction of GPUs for training and the other fraction for some other computations | ### Feature request
When running pytorch examples with transformers library, it seems that the number of GPUs is inferred at https://github.com/huggingface/transformers/blob/cad61b68396a1a387287a8e2e2fef78a25b79383/src/transformers/training_args.py#L1091
and it assumes that all `CUDA_VISIBLE_DEVICES` will be used for training. In some cases, one would like to use other GPUs at the machine to perform some other computations during the training but still keep the training run on a single GPU.
For example, with `CUDA_VISIBLE_DEVICES=0,1` one would like to run training on `cuda:0` only and use `cuda:1` for some additional custom computations running in parallel with the training process.
### Motivation
Useful when one wants to utilize other GPUs on the machine to perform some heavy computations from the data that is collected during the training process. Allowing the model to use all devices for training would unnecessarily slow down a run that can easily fit on a single GPU device.
### Your contribution
A hacky solution that might work here would be to override `_n_gpu` in the main script, but I am not sure if this would have some negative consequences on other parts of the library (i.e. things initialized before main is reached and the new value to `_n_gpu` is set). Also, dynamically modifying the `_n_gpu` probably wouldn't be considered as the cleanest solution. | 05-06-2022 15:00:18 | 05-06-2022 15:00:18 | (not sure if I should tag someone for this, a wild guess: @LysandreJik @patil-suraj )
Edit by Lysandre: @sgugger :)<|||||>This is only used for `DataParallel`, which is not the recommended way to run training on several GPUs (as recommended by the [PyTorch doc](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) itself) so we won't invest any time in making it more customizable than it is.<|||||>This is also used in single-GPU runs, which is why I've opened to ask about it. For example, I think it's currently impossible to run finetuning of bert-base-uncased on SQuAD on 1-GPU, and to use the second GPU to do something else while the finetuning is running on the first one.<|||||>i.e. this line https://github.com/huggingface/transformers/blob/cad61b68396a1a387287a8e2e2fef78a25b79383/src/transformers/training_args.py#L1091 will "force" usage of DataParallel just because there are two GPUs available on the machine (even though I want to run on the first one only). Constraining it to the first GPU only via `CUDA_VISIBLE_DEVICES` will block usage of the second GPU for some additional stuff during the finetuning run.<|||||>I don't know how to run a training in parallel to some other computation without launching two different scripts, and you can then leverage `CUDA_VISIBLE_DEVICES` for the first. Could you give me an explicit example of what you're thinking of?<|||||>Oh sorry for not clarifying this properly. For example, let's say we want to run the default example of finetuning the bert-base-uncased model on SQuAD dataset ([squad pytorch example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering#:~:text=python%20run_qa.py%20%5C%0A%20%20%2D%2Dmodel_name_or_path%20bert%2Dbase%2Duncased%20%5C%0A%20%20%2D%2Ddataset_name%20squad%20%5C%0A%20%20%2D%2Ddo_train%20%5C%0A%20%20%2D%2Ddo_eval%20%5C%0A%20%20%2D%2Dper_device_train_batch_size%2012%20%5C%0A%20%20%2D%2Dlearning_rate%203e%2D5%20%5C%0A%20%20%2D%2Dnum_train_epochs%202%20%5C%0A%20%20%2D%2Dmax_seq_length%20384%20%5C%0A%20%20%2D%2Ddoc_stride%20128%20%5C%0A%20%20%2D%2Doutput_dir%20/tmp/debug_squad/)). And let's say that during this finetuning run, we want to collect gradients from each step and calculate some interesting statistics from them. Given that our GPU is already at limits with memory, we would like to utilize the second GPU available at the machine. To be able to use it, we would have to make it visible to pytorch via `CUDA_VISIBLE_DEVICES=0,1`. But rerunning the finetuning example with `0,1` will trigger `DataParallel` which we don't want to use (too slow, not recommended, and so on). We would still like to run the example on the first GPU, as it would normally run without `DataParallel` being triggered, and to use the second GPU to calculate some custom stuff that can't fit on the first GPU. <|||||>I'm still very confused about how you are going to achieve this with the `Trainer`. I'd really love to see some code to see how we can design an API that solves the problem.<|||||>For example, a simple solution to the problem could be something like using a training argument to specify which GPUs to use, instead of taking all available GPUs at the machine. I could provide some code, but I don't want to waste your time on digging through this. Basically, this could be a solution to this problem:
changing this line https://github.com/huggingface/transformers/blob/cad61b68396a1a387287a8e2e2fef78a25b79383/src/transformers/training_args.py#L1091
to
```python
self._n_gpu = len(self.devices)
```
where `self.devices` is an argument of the `TrainingArgument` class, that could look like this:
```python
gpus: List[str] = field(default=['cuda:0'], metadata={"help": "Available GPUs for model's training."})
```
I.e. there are use-cases where not all available GPUs at the machine should be used for training.
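Expanding that idea into a rough sketch (purely illustrative; the `gpus` field and the `n_gpu` override are assumptions, not an existing `transformers` API):
```python
from dataclasses import dataclass, field
from typing import List, Optional

from transformers import TrainingArguments


@dataclass
class DeviceSelectTrainingArguments(TrainingArguments):
    gpus: Optional[List[str]] = field(
        default=None,
        metadata={"help": "Devices to train on, e.g. ['cuda:0']. None means use all visible GPUs."},
    )

    @property
    def n_gpu(self):
        # Restrict the GPU count the Trainer sees to the explicitly requested devices.
        if self.gpus is not None:
            return len(self.gpus)
        return super().n_gpu
```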
With the aforementioned fix, one could run finetuning of the bert-base-uncased on the first GPU only (via `--gpus ['cuda:0']`) and still use the second GPU for some custom computations (for example attaching gradient hooks to the model and dumping them on the `cuda:1` device for additional processing).<|||||>I think we're just going in circles, as I still don't see how you are going to be able to do your extra computations on the second GPU with the Trainer.<|||||>For example like this:
```python
for name, param in trainer.model.named_parameters():
param.register_hook(lambda grad: grad.to('cuda:1'))
```
Edit: normally, this `lambda` would do something like `deepcopy` of gradients to `cuda:1`, and then for example sort them.<|||||>I am still not seeing what you are doing with those during the training. What piece of code executed by the `Trainer` will then give you anything useful from those? The reason I'm asking is that whatever you want to do probably requires subclassing the `Trainer` with some custom code, so instead of adding a new argument for everyone that will only work in DP setting and thus confuse users (the Trainer supporting way more setups) just for something that can't work out of the box anyway is not a good idea. If you end up subclassing the `Trainer` anyway, you should then be able to also implement the changes to only use GPU 0.<|||||>More specifically, I am trying to identify less important weights during the training and to compress them. Identifying them requires doing some computations with gradients, and that's the reason why I need to use the additional GPU (but this is probably not relevant to the problem). And yes, I have subclassed the `Trainer` class to implement these additional custom things. I am not using anything from the `Trainer` class except the default training pipeline.
So, back to the original question and to add more context:
1. when this line is invoked: https://github.com/huggingface/transformers/blob/0645b07daf9a78ad64472a36288710b9fa32087a/examples/pytorch/question-answering/run_qa.py#L212
the `training_args` will grab all available gpus at the machine and set them as `self._n_gpu` attribute
https://github.com/huggingface/transformers/blob/cad61b68396a1a387287a8e2e2fef78a25b79383/src/transformers/training_args.py#L1091
2. then, `trainer.train()` is invoked to begin the training phase. At this line
https://github.com/huggingface/transformers/blob/6bc6797e04811176f4244a42c86f8a65a1e1c455/src/transformers/trainer.py#L1366 the model gets wrapped into `DataParallel` class because `self._n_gpu` grabbed all available GPUs at the machine. And this is the core of the problem: the model is `DataParallel` just because there are multiple GPUs on the machine detected with `torch.cuda.device_count()`. If there were 100 GPUs, then the model would be trained on 100 GPUs just because they are detected with `torch.cuda.device_count()`. As I have mentioned earlier, forcing the number of visible GPUs with `CUDA_VISIBLE_DEVICES=0` is not a solution because then all other GPUs become invisible to pytorch and these custom computations during the training.
3. so a possible solution to avoid adding new arguments to the `Trainer`: I could, for example, override the `self._n_gpu` attribute of the `training_args` object before the `Trainer` is initialized and the model is wrapped (so from their point of view, there would be only one available GPU). What I wanted to ask is whether there are other things I should be careful about if I override the `self._n_gpu` attribute dynamically. I don't feel comfortable overriding it just because it seems to be a "protected" attribute of the `TrainingArguments` class. Another possible solution could be to subclass the `TrainingArguments` class and override the behaviour of https://github.com/huggingface/transformers/blob/cad61b68396a1a387287a8e2e2fef78a25b79383/src/transformers/training_args.py#L1034 by setting the `self._n_gpu` attribute based on a given `--gpus` argument instead of grabbing all visible GPUs with `torch.cuda.device_count()` (a rough sketch of this idea is shown below).
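A minimal sketch of that subclassing idea. The class name `DeviceLimitedTrainingArguments` and the `gpus` field are hypothetical, and this assumes `TrainingArguments.n_gpu` remains a property backed by `_setup_devices`; note it only changes how many GPUs the `Trainer` wraps with `DataParallel`, the model is still placed on `args.device`:
```python
from dataclasses import dataclass, field
from typing import List, Optional

import torch
from transformers import TrainingArguments


@dataclass
class DeviceLimitedTrainingArguments(TrainingArguments):
    # Hypothetical extra field: devices to train on, e.g. ["cuda:0"].
    gpus: Optional[List[str]] = field(
        default=None,
        metadata={"help": "Devices to use for training; None keeps the default (all visible GPUs)."},
    )

    @property
    def n_gpu(self) -> int:
        # Fall back to the default behaviour when no explicit list is given.
        if self.gpus is None:
            return super().n_gpu
        # Run the usual device setup, then report only the requested CUDA devices.
        _ = self._setup_devices
        return len([d for d in self.gpus if torch.device(d).type == "cuda"])
```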
<|||||>TLDR for a possible enhancement of the existing pipeline:
- Is it better to give users a choice to set which devices they want to use for training via an argument compared to always grabbing all visible devices via `torch.cuda.device_count()`?
- In any case, effort for end-users will be the same, as they would just use an argument (for example `--gpus ['cuda:0']`) instead of `CUDA_VISIBLE_DEVICES` to specify which devices should be used for their experiment. The former gives more flexibility and enables use-cases as described above.<|||||>Thanks for humoring me and detailing a bit more your use case :-).
I'm convinced: we can add a new training argument that will accept a list (if used outside of the parser) or a comma-separated string (for use as a CLI argument), with elements being what `torch.device` accepts (so ints or strings like `'cuda:0'`), which will default to `None` (in which case we take everything available).
Does that sound like an acceptable solution? Would you like to open a PR with that?<|||||>Yes sure, that solution would be perfect (providing both functionalities, and also backward compatible). Thanks a lot for your time and very helpful suggestions, I will open a PR for this.
And sorry for spamming you with probably unnecessary details of my exact use case. I could have explained the issue in a much simpler way 😄<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,114 | closed | Error on loading saved optimizer after training (zero-3) | ### System Info
```shell
Platform: Ubuntu 18.04.1
python3.8.0
cuda-11.3
torch==1.11.0+cu113 (GPU)
transformers==4.18.0
deepspeed==0.6.3
huggingface_hub version: 0.5.1
```
### Who can help?
@sgugger, @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Use any code which trains and evaluates a model with huggingface trainer (e.g https://github.com/ElementAI/picard/blob/main/seq2seq/run_seq2seq.py#L216-L267). Use `save_steps=1` in config. Train for few epochs and evaluate. An error is thrown when the model is trying to load the optimizer after training.
### OPTIMIZER USED: adafactor (issue also occurs with `adaw_hf`, `adamw_torch`)
ZeRO-3 config (used the same from hf page)
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
}
```
Traceback:
```
Training completed. Do not forget to share your model on huggingface.co/models =)
Loading best model from ./output/checkpoint-12 (score: 70.1).
[2022-05-06 14:15:47,319] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed info: version=0.6.3, git-hash=unknown, git-branch=unknown
[2022-05-06 14:15:47,323] [INFO] [engine.py:278:__init__] DeepSpeed Flops Profiler Enabled: False
[2022-05-06 14:15:47,323] [INFO] [engine.py:1042:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2022-05-06 14:15:47,323] [INFO] [engine.py:1048:_configure_optimizer] Using client Optimizer as basic optimizer
[2022-05-06 14:15:47,324] [INFO] [engine.py:1064:_configure_optimizer] DeepSpeed Basic Optimizer = Adafactor
[2022-05-06 14:15:47,324] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Adafactor type=<class 'transformers.optimization.Adafactor'>
[2022-05-06 14:15:47,324] [WARNING] [engine.py:1077:_configure_optimizer] **** You are using ZeRO with an untested optimizer, proceed with caution *****
[2022-05-06 14:15:47,324] [INFO] [logging.py:69:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer
[2022-05-06 14:15:47,324] [INFO] [engine.py:1362:_configure_zero_optimizer] Initializing ZeRO Stage 3
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
[2022-05-06 14:15:47,325] [INFO] [stage3.py:273:__init__] Reduce bucket size 262144
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
[2022-05-06 14:15:47,325] [INFO] [stage3.py:274:__init__] Allgather bucket size 235929.6
Loading extension module utils...
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...No modifications detected for re-loaded extension module utils, skipping build step...
Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...Using /home/user/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...Loading extension module utils...
Time to load utils op: 0.0006072521209716797 seconds
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0006115436553955078 seconds
(the same traceback is raised by every rank; the interleaved per-rank output is deduplicated below)
Traceback (most recent call last):
  File "train.py", line 227, in <module>
    main()
  File "train.py", line 176, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/user/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1546, in train
    deepspeed_engine, optimizer, lr_scheduler = deepspeed_reinit(self)
  File "/home/user/venv/lib/python3.8/site-packages/transformers/deepspeed.py", line 374, in deepspeed_reinit
    deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**trainer.deepspeed_initialize_kwargs)
  File "/home/user/venv/lib/python3.8/site-packages/deepspeed/__init__.py", line 119, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/home/user/venv/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 294, in __init__
    self._configure_optimizer(optimizer, model_parameters)
  File "/home/user/venv/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1080, in _configure_optimizer
    self.optimizer = self._configure_zero_optimizer(basic_optimizer)
  File "/home/user/venv/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1365, in _configure_zero_optimizer
    optimizer = DeepSpeedZeroOptimizer_Stage3(
  File "/home/user/venv/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 293, in __init__
    self.dtype = self.optimizer.param_groups[0]['params'][0].dtype
IndexError: list index out of range
```
### Expected behavior
```shell
The code should be able to reload the optimizer w/o errors.
```
| 05-06-2022 14:52:03 | 05-06-2022 14:52:03 | I don't think DeepSpeed is compatible with the use of Adafactor, cc @stas00 <|||||>@sgugger oh, but I tried using other optimizers too (like `adamw_hf`, `adam_torch`, etc) which also resulted in the same error. <|||||>I can handle this ticket, @sgugger
@base-y, thank you for the report - how can I reproduce this situation on a small example with public data/model?
<|||||>Hi @stas00 , I haven't tried to reproduce the error with public datasets. But I think that to reproduce the error, you could take any seq2seq model (like T5-small), train it on any seq2seq dataset, and make sure to have at least one checkpoint saved before the training ends. I used the deepspeed config mentioned above. Here is my suggestion for how to reproduce the error (as mentioned above):
## Reproduction
Use any code which trains and evaluates a model with the huggingface trainer (e.g. https://github.com/ElementAI/picard/blob/main/seq2seq/run_seq2seq.py#L216-L267). Use `save_steps=1` in the config (any `save_steps` will work as long as there is at least one saved checkpoint before the training ends). Train for a few epochs and evaluate. An error is thrown when the model is trying to load the optimizer after training.
> Make sure to set the optimizer to ADAFACTOR.
Please let me know if any other information is required.<|||||>This is not how: please provide a way for us to reproduce the problem works.
We unfortunately don't have time to go and figure out things that aren't part of `transformers` directly - so telling us to go and train and write a script, etc. we won't do.
What you need to do is to write a small reproducible example that we can copy and paste to reproduce the problem with.
Once the problem is reproduced then we can start working on fixing it or delegating it to Deepspeed as the problem could be coming from there. But let's understand it here first.
Thanks.<|||||>One other thing I could recommend to try is not to use the load best model feature but to try to just train for a few steps, and then exit and normally resume from the saved deepspeed checkpoint and see if it works. If it fails then chances are very high you need to report it to the Deepspeed repo instead.
<|||||>I wrote a script which takes in SQUAD dataset and trains for 2 steps (32 batch size). Though the entire dataset is downloaded, I am only loading 10 samples for train and validation set to get the error asap.
The code takes ~1min to run and throw the error.
```
from dataclasses import field, dataclass
from typing import Optional
import datasets
from transformers import T5Tokenizer, T5ForConditionalGeneration, Trainer, HfArgumentParser, TrainingArguments
args_dict = {
"model_name_or_path": 't5-small',
"max_source_length": 512,
"max_target_length": 16,
"per_gpu_train_batch_size": 8,
"per_gpu_eval_batch_size": 8,
"gradient_accumulation_steps": 4,
"learning_rate": 1e-4,
"num_train_epochs": 2,
"do_train": True,
"do_eval": True,
"optim": "adafactor",
"evaluation_strategy": "steps",
"eval_steps": 1,
"save_strategy": "steps",
"save_steps": 1,
"load_best_model_at_end": True,
"output_dir": './output',
"cache_dir": './cache',
"max_steps": 2,
"deepspeed": "ds_config.json"
}
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_file_path: Optional[str] = field(
default='train_data.pt',
metadata={"help": "Path for cached train dataset"},
)
valid_file_path: Optional[str] = field(
default='valid_data.pt',
metadata={"help": "Path for cached valid dataset"},
)
max_len: Optional[int] = field(
default=512,
metadata={"help": "Max input length for the source text"},
)
target_max_len: Optional[int] = field(
default=32,
metadata={"help": "Max input length for the target text"},
)
def _add_eos_to_examples(example):
example['input_text'] = 'question: %s context: %s' % (example['question'], example['context'])
example['target_text'] = '%s' % example['answers']['text'][0]
return example
def _convert_to_features(example_batch):
input_encodings = tokenizer.batch_encode_plus(example_batch['input_text'], pad_to_max_length=True, max_length=512, truncation=True)
target_encodings = tokenizer.batch_encode_plus(example_batch['target_text'], pad_to_max_length=True, max_length=16, truncation=True)
encodings = {
'input_ids': input_encodings['input_ids'],
'attention_mask': input_encodings['attention_mask'],
'labels': target_encodings['input_ids'],
}
return encodings
def get_dataset():
train_dataset = datasets.load_dataset('squad', split='train[10:20]')
valid_dataset = datasets.load_dataset('squad', split='validation[:10]')
train_dataset = train_dataset.map(_add_eos_to_examples)
# map convert_to_features batch wise
train_dataset = train_dataset.map(_convert_to_features, batched=True)
valid_dataset = valid_dataset.map(_add_eos_to_examples, load_from_cache_file=False)
valid_dataset = valid_dataset.map(_convert_to_features, batched=True, load_from_cache_file=False)
# set the tensor type and the columns which the dataset should return
columns = ['input_ids', 'attention_mask', 'labels']
train_dataset.set_format(type='torch', columns=columns)
valid_dataset.set_format(type='torch', columns=columns)
return train_dataset, valid_dataset
if __name__ == "__main__":
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_dict(args_dict)
tokenizer = T5Tokenizer.from_pretrained('t5-small', cache_dir=model_args.cache_dir)
model = T5ForConditionalGeneration.from_pretrained('t5-small', cache_dir='./cache')
train_dataset, valid_dataset = get_dataset()
print('loading done')
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=valid_dataset,
)
# Training
if training_args.do_train:
print('Training....')
trainer.train()
trainer.save_model()
# Evaluation
if training_args.do_eval:
results = {}
print("*** Evaluate ***")
eval_output = trainer.evaluate()
output_eval_file = "./output/eval_results.txt"
with open(output_eval_file, "w") as writer:
print("***** Eval results *****")
for key in sorted(eval_output.keys()):
print(" %s = %s", key, str(eval_output[key]))
writer.write("%s = %s\n" % (key, str(eval_output[key])))
results.update(eval_output)
```
And here is the `ds_config.json` file:
```
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
Hope you find this useful.<|||||>super! that's very helpful, @base-y
I'm able to reproduce the failure:
```
$ CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run --nproc_per_node=1 --master_addr='127.0.0.1' --master_port=9901 test.py
[...]
Traceback (most recent call last):
File "test.py", line 129, in <module>
trainer.train()
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1545, in train
self._load_best_model()
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1608, in _load_best_model
deepspeed_engine, optimizer, lr_scheduler = deepspeed_reinit(self)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/deepspeed.py", line 374, in deepspeed_reinit
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**trainer.deepspeed_initialize_kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/__init__.py", line 119, in initialize
engine = DeepSpeedEngine(args=args,
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 295, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 1081, in _configure_optimizer
self.optimizer = self._configure_zero_optimizer(basic_optimizer)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 1366, in _configure_zero_optimizer
optimizer = DeepSpeedZeroOptimizer_Stage3(
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/stage3.py", line 610, in __init__
self.dtype = self.optimizer.param_groups[0]['params'][0].dtype
IndexError: list index out of range
```
I will try to analyze this later today or tomorrow.
<|||||>I solved the training part, but run into a new problem in eval.
```
Traceback (most recent call last):
File "test.py", line 136, in <module>
eval_output = trainer.evaluate()
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2420, in evaluate
output = eval_loop(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2597, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2838, in prediction_step
loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2177, in compute_loss
outputs = model(**inputs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 1573, in forward
loss = self.module(*inputs, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl
result = forward_call(*input, **kwargs)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/t5/modeling_t5.py", line 1603, in forward
encoder_outputs = self.encoder(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl
result = forward_call(*input, **kwargs)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/t5/modeling_t5.py", line 1035, in forward
layer_outputs = layer_module(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl
result = forward_call(*input, **kwargs)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/t5/modeling_t5.py", line 670, in forward
self_attention_outputs = self.layer[0](
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl
result = forward_call(*input, **kwargs)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/t5/modeling_t5.py", line 575, in forward
normed_hidden_states = self.layer_norm(hidden_states)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1117, in _call_impl
result = hook(self, input)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/stage3.py", line 1096, in _pre_forward_module_hook
self.pre_sub_module_forward_function(module)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/stage3.py", line 1214, in pre_sub_module_forward_function
param_coordinator.fetch_sub_module(sub_module)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 279, in fetch_sub_module
assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionError: {'id': 6, 'status': 'INFLIGHT', 'numel': 0, 'ds_numel': 512, 'shape': (0,), 'ds_shape': (512,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {292}}
```
the training issue has to do with the hack that requires us to re-init the deepspeed engine every time the model is reloaded. So I'm trying a different approach.
You can try this fix meanwhile for the training assert you reported in this issue: https://github.com/huggingface/transformers/pull/17151
Please let me know if that fixes the problem you reported originally. Thanks.
I will keep you posted when I get the whole thing working.<|||||>The 2nd problem is a bug introduced by https://github.com/microsoft/DeepSpeed/pull/1916
So to unblock you use my PR and roll back deepspeed to `deepspeed==0.6.4`
I will report this other issue to Deepspeed.<|||||>So to update, the 2nd problem happens not because of the PR I quoted above but because a new deepspeed engine init, that we are forced to do to overcome this issue: https://github.com/microsoft/DeepSpeed/issues/1612, creates a new set of forward hooks, but it doesn't delete the old ones - and since the model persists hence the breakage.
So Tunji is going to create a PR that gives us a method to remove the old hooks.
So I will shortly have 2 PRs for you to test with which should solve this problem altogether.<|||||>@base-y, ok - Tunji and I have a working solution - needed work in both repos.
if it's not merged yet, please use these 2 branches:
- https://github.com/microsoft/DeepSpeed/pull/1947
- https://github.com/huggingface/transformers/pull/17151
and it should all work.
Please let me know if you have tried it and it works for you.
If I don't hear back from you I will merge the `transformers` fix once deepspeed merges their side and make a new release.
<|||||>Hey, sorry for the delayed response. Sure, I will install huggingface and deepspeed locally from the PR branches and check if it works asap.<|||||>Hi @stas00 , I reran the training code and there are no errors this time around. Thank you!<|||||>Perfect. Thank you for validating, @base-y
I will merge the HF PR once the Deepspeed merges their side and makes a new release.
cc: @tjruwase <|||||>FYI, the deepspeed side has been merged, so just waiting for their new release and will then merge the HF PR. <|||||>Hi, I am using Huggingface version = 4.25.1 and Deepspeed = 0.8.0 and still facing this issue. Is there something I am missing?<|||||>Hi, smitanannaware
as a long time has passed since this Issue, please kindly file a new Issue and provide full information about your specific case including most importantly the full traceback and if possible an easy way for us to reproduce it - and please tag me on that issue. Thank you. |
transformers | 17,113 | closed | Baai glm | add glm model | 05-06-2022 10:31:38 | 05-06-2022 10:31:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17113). All of your documentation changes will be reflected on that endpoint.<|||||>As requested on slack, I just had a quick look at your PR. It's in good shape! :blush:
Regarding the fast tokenizer part, to check its validity, I suggest we take a first stab by looking at the results of the tests you created. For the moment, it seems to me that we can't leverage the tests files you added -at least with the CI. To fix it, I think you need to add the model you are creating to the main init file (this one: https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py). To know what to add maybe the easiest way as Lysander mentioned earlier is to see on another branch the additions that are made when you add a model from another model with the cli command `transformers-cli add-new-model-like` (cf [this discussion](https://huggingface.slack.com/archives/C036A8ECY79/p1647333073740149)).
Then, concerning the tokenizer fast, I already see 2 small things: 1. it will be necessary to add a converter (cf the discussion we had [here](https://huggingface.slack.com/archives/C036A8ECY79/p1650009413440189?thread_ts=1649381592.438409&cid=C036A8ECY79)) and 2. I have the impression that there are new special tokens added - like `ENC_token`, `sop_token`, `eop_token`, `gMASK_token` etc (but I am not sure to see where they are used in the code).
Don't hesitate to ask for more details! I'm here to help |
transformers | 17,112 | closed | [LED] fix global_attention_mask not being passed for generation and docs clarification about grad checkpointing | # What does this PR do?
There are two changes:
1. The `global_attention_mask` parameter was not passed on in `.prepare_inputs_for_generation(...)`. So, when you call the `.generate(...)` method, it never reaches `LEDModel.forward(...)` (even though `global_attention_mask` is passed in `kwargs`), and local attention is used for all tokens. This problem was also mentioned in [this pull request](https://github.com/huggingface/transformers/pull/16485), but not completely fixed.
2. The [documentation of LED](https://huggingface.co/docs/transformers/model_doc/led) states:
> To fine-tune LED on all 16384, it is necessary to enable gradient checkpointing by executing model.gradient_checkpointing_enable().
This could be misleading since gradient checkpointing _can_ be used if training leads to OOM errors, but it is _not actually necessary_, as mentioned in [this issue](https://github.com/huggingface/transformers/issues/16541). Some hints on the use of the `use_cache` flag were also added, as suggested in [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) by @patrickvonplaten (who initializes the model using `AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384", gradient_checkpointing=True, use_cache=False)`.
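Regarding point 1, this is roughly how a user requests global attention at generation time (a sketch only; `long_document` is a placeholder, and before this fix the mask was silently dropped inside `generate()`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")

inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first token, as the LED docs recommend

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```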
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @sgugger | 05-06-2022 10:28:06 | 05-06-2022 10:28:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for catching this @caesar-one, looking forward to it getting merged as I am also trying to use LED-based models with a global attention mask!<|||||>@patrickvonplaten could this issue be the reason for the [lower performances of Primera](https://huggingface.co/allenai/PRIMERA) (LED-based model) in the hf implementation vs the original implementation? After all, global attention was not considered during generation.
Anyway, thanks for your fast feedback :-)<|||||>Thanks a lot for the PR @caesar-one ! |
transformers | 17,111 | closed | Fix self-push CI report path in cat | # What does this PR do?
`self-push.yml` has lines like
```
run: cat reports/tests_torch_gpu_failures_short.txt
```
which should be
```
run: cat reports/tests_torch_gpu/failures_short.txt
```
Currently, we get errors like
```
cat: reports/tests_torch_multi_gpu_failures_short.txt: No such file or directory
Error: Process completed with exit code 1.
```
(see for example [this job](https://github.com/huggingface/transformers/runs/6307735900?check_suite_focus=true)). | 05-06-2022 09:49:48 | 05-06-2022 09:49:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> If this change is needed, it should also be done in the other GitHub action files,
The other occurrences are in `self-nightly-scheduled.yml`. ~~I can update it too.~~ **Done**
> and also in the report part of the job? Not sure if they will push properly nested files in subdirectories.
I guess you are talking about files like `notification_service_deprecated.py` and `notification_service.py`. They are not using `failures_short.txt`.
Note that:
- this PR doesn't change any artifact directory location
- `notification_service_deprecated.py` does have issues with the report.
- For example, `"run_all_tests_tf_gpu_test_reports/[X].txt"` should be `run_all_tests_tf_gpu_test_reports/tests_torch_gpu/[X].txt`
- (if we don't change the artifact directory in `self-push.yml`)
- I am still on the task of the self-push report format, which might still take some time.
- But think I can already fix a few small things in the meantime. |
transformers | 17,110 | closed | DebertaV2Tokenizer wasn't added to the __init__ file | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.11.0-46-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.1+cu113
- Tensorflow version (GPU?): not installed
- Flax version (CPU?/GPU?/TPU?): not installed
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
Hey, I just saw that the pull request for DebertaV2Tokenizer and DebertaV2TokenizerFast #15529 was merged around two weeks ago. Unfortunately, importing DebertaV2Tokenizer or DebertaV2TokenizerFast did not work. This seems to happen because "DebertaV2Tokenizer" wasn't added to the `__init__` file https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py#L181
`"models.deberta_v2": ["DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP", "DebertaV2Config"],`
Adding it to the file fixes the problem
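A sketch of what the amended entry could look like (this mirrors the usual pattern in `src/transformers/__init__.py`, not necessarily the exact upstream diff):
```python
_import_structure["models.deberta_v2"] = [
    "DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP",
    "DebertaV2Config",
    "DebertaV2Tokenizer",  # slow tokenizer lives in the base section
]
# The fast tokenizer is appended in the tokenizers-backed section, e.g.:
# if is_tokenizers_available(): _import_structure["models.deberta_v2"].append("DebertaV2TokenizerFast")
```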
@SaulLu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`from transformers import DebertaV2TokenizerFast, DebertaV2Tokenizer`
### Expected behavior
```shell
To import the tokenizer and not throw an error.
```
| 05-06-2022 08:46:16 | 05-06-2022 08:46:16 | Hello @AhmedIdr! Now that a new version has been released, you should be able to import it by upgrading to the latest version.<|||||>Yeah it seems to be working, thank you. |
transformers | 17,109 | closed | Add OFA configuration, tokenizer and modeling | # What does this PR do?
Fixes #15813 and #16265. Added OFA configuration, tokenizer, and modeling.
## Who can Review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @patil-suraj @gchhablani
| 05-06-2022 04:16:26 | 05-06-2022 04:16:26 | Thank you so much @JustinLin610 for adding this PR. Sorry I hadn't had the chance to work on it so far.
@patil-suraj If anything is needed from my end please let me know.<|||||>I can address the failing tests. Please give me until today.<|||||>> I can address the failing tests. Please give me until today.
Thanks very much for your help! That would be very nice for me cuz I'm not familiar with these things...<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17109). All of your documentation changes will be reflected on that endpoint.<|||||>> I can address the failing tests. Please give me until today.
I just passed the tests (mainly making the test case simpler), but I am not sure if there is any other things that I should do. If you have time, feel free to shoot me some advice on how to improve the code:)<|||||>Thanks a lot for the PR @JustinLin610 ! I will review this PR either today or tomorrow. LMK if you have any questions, happy to help :)<|||||>> Thanks a lot for the PR @JustinLin610 ! I will review this PR either today or tomorrow. LMK if you have any questions, happy to help :)
Great! Looking forward to your reply!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks a lot for the PR @JustinLin610 ! I will review this PR either today or tomorrow. LMK if you have any questions, happy to help :)
I have just sync my branch with main. Any ideas or comments for my PR? btw it seems that the testing of big bird flax causes the breakdown of the whole testing...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>pinging @gchhablani @patil-suraj
@LysandreJik
really want this cool model, merge in huggingface :)<|||||>@LysandreJik @patil-suraj Please let me know if anyone else is working on this, otherwise I can give it another attempt.<|||||>This is a nice feature. Looking forward to it.<|||||>> @LysandreJik @patil-suraj Please let me know if anyone else is working on this, otherwise I can give it another attempt.
Still on this..., and also one of my colleagues will participate in this. Would you mind if you and Patil check the code and give me some feedback?<|||||>Is it possible to re-open this PR so it may get attention?<|||||>> Is it possible to re-open this PR so it may get attention?
Good suggestion. Maybe I should delete the branch and open a new PR, as my branch has diverged a lot from the main branch over time, and I would have to merge a lot of upstream changes to synchronize... I'll update to our latest version first and add more test cases before merging...
If you are interested, you can check our official repo and the specific branch for transformers. We have confirmed that this version can reach the same performance as the official code based on Fairseq. https://github.com/OFA-Sys/OFA/tree/feature/add_transformers<|||||>gently tagging the great @NielsRogge in case they might have a chance to look at this one :-) OFA is quite strong and it would be amazing to have first-party huggingface support for it |
transformers | 17,108 | closed | [bert] An equivalent way to get last_hidden_state tensor but supports ColossalAI well. | # What does this PR do?
An equivalent way to get the `last_hidden_state` tensor in `BertModel` that does not harm correctness or performance.
The motivation is that I am going to apply the BertModel in [Colossal-AI](https://github.com/hpcaitech/ColossalAI), a distributed deep learning framework. The `is_tensor()` function in `file_utils.py` does not recognize our customized tensor data structure, which extends the capabilities of `torch.Tensor` but is not a subclass of it. With this update, we can avoid the type check and make it run smoothly.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-06-2022 03:28:02 | 05-06-2022 03:28:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17108). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,107 | closed | How to deal with multiple sequences with T5ForConditionalGeneration | https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/t5/modeling_t5.py#L1537
Here I remark that the output of individual sequences are different from batched sequences using T5ForConditionalGeneration. How to fix it ? Here is an example to reproduce the error:
with batch_size > 1, the logits is wrong
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import pandas as pd
from torchtext.legacy.data import Field, BucketIterator, TabularDataset
# prepare data
data = {"text": ["summarize: i am very happy. i am very happy",
"summarize: i am very safe. i am very safe"],
"summary": ["i am very happy", "i am very safe"]}
df = pd.DataFrame(data)
df.to_csv("debug.csv", index=False)
# set tokenizer of T5-small
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("t5-small")
pad_index = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
unk_index = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)
eos_index = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
from transformers import T5Tokenizer, T5ForConditionalGeneration
SRC = Field(tokenize = tokenizer.encode,
use_vocab=False,
lower = False,
init_token = None,
eos_token = eos_index,
pad_token=pad_index,
unk_token=unk_index,
include_lengths = True)
TRG = Field(tokenize = tokenizer.encode,
use_vocab=False,
init_token = None,
eos_token = eos_index,
pad_token=pad_index,
unk_token=unk_index,
lower = False)
fields = {"text": ("src", SRC), "summary": ("trg", TRG)}
train_data, valid_data, test_data = TabularDataset.splits(
path="./",
train="debug.csv",
validation="debug.csv",
test="debug.csv",
format='csv',
fields=fields)
BATCH_SIZE = 1
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_within_batch = True,
sort_key = lambda x : len(x.src),
device = device)
for i, batch in enumerate(train_iterator):
src, _ = batch.src
trg = batch.trg
logits = model(input_ids=src.view(src.shape[1], src.shape[0]),
labels=trg.view(trg.shape[1], trg.shape[0])).logits
X = logits.view(logits.size(1), logits.size(0), logits.size(-1))
X = F.softmax(X, dim=-1)
ids = X.argmax(dim=-1)
y = tokenizer.batch_decode(sequences=ids, skip_special_tokens=False)
z = tokenizer.batch_decode(sequences=trg, skip_special_tokens=False)
print(" ".join(y))
print("*********")
print(" ".join(z))
print("*********")
``` | 05-06-2022 01:34:24 | 05-06-2022 01:34:24 | |
transformers | 17,106 | closed | Socket Timeout when using DDP | ### System Info
```shell
- `transformers` version: 4.17.0.dev0
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.8.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (run_summarization.py script)
```
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm constructing a dataset (.parquet format) that is similar to the json format, but has additional fields used to construct a graph for each example in the dataset. When I'm training the model in DDP mode (distributed), I'm getting `RuntimeError: Socket Timeout`. Here is the full stack:
```
Running tokenizer on train dataset #0:  24%| 7/29 [28:27<1:46:58, 291.73s/ba]
(progress bars from the other tokenization workers interleave with the traceback; cleaned up below)
Traceback (most recent call last):
  File "examples/pytorch/summarization/run_summarization.py", line 987, in <module>
    main()
  File "examples/pytorch/summarization/run_summarization.py", line 791, in main
    with training_args.main_process_first(desc="train dataset map pre-processing"):
  File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/site-packages/transformers/training_args.py", line 1264, in main_process_first
    torch.distributed.barrier()
File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2420, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: Socket Timeout
Killing subprocess 62044
Killing subprocess 62045
Traceback (most recent call last):
File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/sajad/anaconda3/envs/myenv-py38/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
```
### Expected behavior
```shell
Running the preprocessing function on each training split.
```
| 05-05-2022 23:33:44 | 05-05-2022 23:33:44 | Not sure if it's related to dataset (.parquet format). Could you please post the code snippet you used to launch the script ? Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same error. I tried to pre-train with 25GB korean corpus data using example/run_clm.py.
I haven't tested it in an environment without DDP yet, but I think this problem is related to the corpus size, because there was no problem when the corpus was small.
The process was killed at about 30000~32000 of 85249 batches. The tokenizer type is Byte-level BPE.
- My script
```
python -m torch.distributed.launch \
--nproc_per_node 4 $TRANSFORMERS_PATH/pytorch/language-modeling/run_clm.py \
--model_type gpt2 \
--tokenizer_name $TOKENIZER_PATH/$MODEL_NAME \
--config_overrides bos_token_id=0,eos_token_id=0 \
--block_size 1024 \
--train_file $DATASET_PATH/train.txt \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--output_dir $MODEL_PATH/$MODEL_NAME \
--num_train_epochs 5 \
--weight_decay 0.01 \
--learning_rate 1e-5 \
--warmup_steps 8000 \
--save_strategy steps \
--save_steps 4000 \
--save_total_limit 10 \
--evaluation_strategy steps \
--eval_steps 4000 \
--load_best_model_at_end \
--validation_split_percentage 5
```
- logs
```
Running tokenizer on dataset: 38%|███▊ | 32066/85249 [32:12<53:25, 16.59ba/s]
Running tokenizer on dataset: 38%|███▊ | 32068/85249 [32:13<53:59, 16.42ba/s]Traceback (most recent call last):
File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 563, in <module>
main()
File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 397, in main
with training_args.main_process_first(desc="dataset map tokenization"):
File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/dofirst/workspace/transformers/src/transformers/training_args.py", line 1368, in main_process_first
Traceback (most recent call last):
File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 563, in <module>
Traceback (most recent call last):
File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 563, in <module>
torch.distributed.barrier()
File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2776, in barrier
main()
main() File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 397, in main
File "/home/dofirst/workspace/scripts/../../transformers/examples/pytorch/language-modeling/run_clm.py", line 397, in main
with training_args.main_process_first(desc="dataset map tokenization"):
with training_args.main_process_first(desc="dataset map tokenization"): File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/contextlib.py", line 113, in __enter__
File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
return next(self.gen)
File "/home/dofirst/workspace/transformers/src/transformers/training_args.py", line 1368, in main_process_first
File "/home/dofirst/workspace/transformers/src/transformers/training_args.py", line 1368, in main_process_first
work = default_pg.barrier(opts=opts)
RuntimeError: [3] is setting up NCCL communicator and retreiving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Socket Timeout
Exception raised from recvBytes at /opt/conda/conda-bld/pytorch_1646755903507/work/torch/csrc/distributed/c10d/Utils.hpp:580 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f09757d01bd in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x6c (0x7f09757cc90c in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x11f (0x7f09ab3b3d4f in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #3: c10d::TCPStore::doGet(std::string const&) + 0x21 (0x7f09ab3b4cd1 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::get(std::string const&) + 0x5b (0x7f09ab3b4d5b in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::PrefixStore::get(std::string const&) + 0x32 (0x7f09ab3868a2 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::PrefixStore::get(std::string const&) + 0x32 (0x7f09ab3868a2 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #7: c10d::PrefixStore::get(std::string const&) + 0x32 (0x7f09ab3868a2 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #8: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, c10d::OpType, std::string const&, int) + 0xe4 (0x7f09b3661df4 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #9: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x1d9 (0x7f09b3665e89 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #10: <unknown function> + 0xb4c325 (0x7f09b3669325 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #11: c10d::ProcessGroupNCCL::allreduce_impl(std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllreduceOptions const&) + 0xf (0x7f09b366a61f in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #12: c10d::ProcessGroupNCCL::allreduce(std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllreduceOptions const&) + 0x2d3 (0x7f09b3670733 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #13: c10d::ProcessGroupNCCL::barrier(c10d::BarrierOptions const&) + 0x72a (0x7f09b367a18a in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so)
frame #14: <unknown function> + 0x800291 (0x7f09f8d2b291 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #15: <unknown function> + 0x1e5d67 (0x7f09f8710d67 in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #16: <unknown function> + 0x13c00e (0x559abc56400e in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #17: _PyObject_MakeTpCall + 0x3bf (0x559abc55913f in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #18: <unknown function> + 0x166ca0 (0x559abc58eca0 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #19: _PyEval_EvalFrameDefault + 0x1510 (0x559abc5ffeb0 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #20: <unknown function> + 0x1c7d37 (0x559abc5efd37 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #21: _PyEval_EvalFrameDefault + 0x4f83 (0x559abc603923 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #22: <unknown function> + 0x197bc5 (0x559abc5bfbc5 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #23: <unknown function> + 0x13b23d (0x559abc56323d in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #24: _PyEval_EvalFrameDefault + 0x71b (0x559abc5ff0bb in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #25: _PyFunction_Vectorcall + 0x1b7 (0x559abc5f57e7 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #26: <unknown function> + 0x9ce79 (0x559abc4c4e79 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #27: <unknown function> + 0x13bb70 (0x559abc563b70 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #28: _PyEval_EvalFrameDefault + 0x21a2 (0x559abc600b42 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #29: _PyEval_EvalCodeWithName + 0xd5f (0x559abc5f50ff in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #30: _PyFunction_Vectorcall + 0x594 (0x559abc5f5bc4 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #31: _PyEval_EvalFrameDefault + 0x71b (0x559abc5ff0bb in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #32: _PyEval_EvalCodeWithName + 0x260 (0x559abc5f4600 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #33: PyEval_EvalCode + 0x23 (0x559abc5f5eb3 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #34: <unknown function> + 0x242622 (0x559abc66a622 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #35: <unknown function> + 0x2531d2 (0x559abc67b1d2 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #36: <unknown function> + 0x25636b (0x559abc67e36b in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #37: PyRun_SimpleFileExFlags + 0x1bf (0x559abc67e54f in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #38: Py_RunMain + 0x3a9 (0x559abc67ea29 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #39: Py_BytesMain + 0x39 (0x559abc67ec29 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
frame #40: __libc_start_main + 0xe7 (0x7f0a3fb46c87 in /lib/x86_64-linux-gnu/libc.so.6)
frame #41: <unknown function> + 0x1f9ad7 (0x559abc621ad7 in /home/dofirst/miniconda3/envs/huggingface/bin/python)
torch.distributed.barrier()
File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2776, in barrier
torch.distributed.barrier()
File "/home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2776, in barrier
work = default_pg.barrier(opts=opts)
work = default_pg.barrier(opts=opts)
RuntimeError: RuntimeError[2] is setting up NCCL communicator and retreiving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Socket Timeout
Exception raised from recvBytes at /opt/conda/conda-bld/pytorch_1646755903507/work/torch/csrc/distributed/c10d/Utils.hpp:580 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f768dda91bd in /home/dofirst/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/lib/libc10.so)
```<|||||>> I met the same error. I tried to pre-train with 25GB korean corpus data using example/run_clm.py. I haven't tested it in an environment not using DDP yet, but I think this problem is related to corpus. Because there was no problem when it was a small corpus. The process killed about 30000 ~ 32000 of 85249. The tokenizer type is Byte-level BPE.
>
I succeeded in pre-training without DDP. Running tokenizer was finished well and I could use this cache data with DDP after tokenizing.
I don't know the cause yet, but this problem seems to be related to DDP.
My English is not that great. Nevertheless I want to solve this problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Looks like that the process gets killed due to torch.distributed.launch/run timeout of 30 minutes? (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group)
I had the same problem, where my job would be stopped when using DDP due to the long mapping/tokenization.<|||||>I have a similar task, and my torch.distributed launch gets interrupted due to the 30-minute timeout.
In my case, when I run the script normally like `python run.py` it gets cached, but when I run it with torch.distributed launch it isn't getting cached, so the entire preprocessing step happens again and times out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Re-opening as it doesn't seem like it's been solved . Maybe @sgugger could help here?<|||||>@patrickvonplaten To give you some clue about the potential root of the problem, given my experiment and [#](https://github.com/huggingface/transformers/issues/17106#issuecomment-1148125905), I believe this happens when the script deals with extremely large-scale datasets. Mine was above >100GB, most of which related to the graph fields that I had put in each example (in the parquet file). I could manage to get this passed by running on a single GPU, and then using cached file for fast load in multi-GPU setting. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey guys, I'm having the same issue here when running in distributed (both with `torch.distributed.launch` and both with elastic run), seems to me like this isn't solved yet.
## My system info:
- `transformers` version: 4.24.0
- Platform: Linux-4.15.0-166-generic
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script (Number)?: Yes (7-8 GPUs).
- Using distributed or parallel set-up in script?: Yes (run_clm.py script)
- Number of nodes in distributed: 1
## My run information
- Modified scripts: My own modified script of run_clm.py, released in version 4.24.0.
- Dataset: [openwebtext](https://huggingface.co/datasets/openwebtext) (from the hub)
## Notes
- When using smaller dataset (e.g. wikitext-2, wikitext-103) I'm not having the issue.
- As mentioned above, the error appears after ~30 minutes. In my case 31:05 minutes.
## Reproduction
I'm running the following:
> torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node=${NUM_GPU} \
./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py \
--model_name_or_path ${MODEL} \
--dataset_name ${DS_NAME} \
--save_steps ${SAVE_STEPS} \
--logging_steps 1000 \
--eval_steps 2000 \
--do_train \
--do_eval \
--seed ${RANDOM} \
--max_steps ${MAX_TRAIN_STEPS} \
--learning_rate ${LR} \
--per_device_train_batch_size ${TRAIN_BATCH} \
--gradient_accumulation_steps ${ACC_STEPS} \
--per_device_eval_batch_size ${EVAL_BATCH} \
--evaluation_strategy steps \
--logging_dir ${OUTPUT_DIR} \
--output_dir ${OUTPUT_DIR} \
--overwrite_output_dir \
--load_best_model_at_end \
--max_train_samples 200 \
--max_eval_samples 200 \
And I'm having the following error output:
> main()
File "./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py", line 414, in main
File "./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py", line 414, in main
File "./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py", line 414, in main
main()
File "./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py", line 414, in main
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen) return next(self.gen)return next(self.gen)
>
>
> File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
> File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
> File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
> return next(self.gen)return next(self.gen)
>
> return next(self.gen) File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
>
> File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
> File "/venv/lib/python3.8/site-packages/transformers/training_args.py", line 1668, in main_process_first
> torch.distributed.barrier()torch.distributed.barrier()
>
> torch.distributed.barrier() File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
>
> File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
torch.distributed.barrier() File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier >
> torch.distributed.barrier()
> torch.distributed.barrier() File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
>
> File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
> File "/venv/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
> work = default_pg.barrier(opts=opts)
> work = default_pg.barrier(opts=opts)work = default_pg.barrier(opts=opts)
>
> RuntimeErrorRuntimeError work = default_pg.barrier(opts=opts)work = default_pg.barrier(opts=opts): :
> Socket TimeoutSocket Timeout
>
> RuntimeError: Socket Timeout
> work = default_pg.barrier(opts=opts)
> RuntimeErrorRuntimeError: Socket Timeout
> : Socket Timeout
> RuntimeError: Socket Timeout
> WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 5732 closing signal SIGTERM
> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 5733) of binary: /venv/bin/python3
> Traceback (most recent call last):
> File "/venv/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
> return f(*args, **kwargs)
> File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 719, in main
> run(args)
> File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
> elastic_launch(
> File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
>
> ./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py FAILED
>
> Failures:
> [1]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 5734)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
> [2]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 5735)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
> [3]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 5736)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 5737)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 5738)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Root Cause (first observed failure):
[0]:
time : 2022-11-11_07:49:25
host : ido-branch-s2n2k-pxvf4
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 5733)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Thanks a lot in advance for looking into it<|||||>You are not using the [`ddp_timeout`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.ddp_timeout) training argument to put a higher value than 30 minutes, so if you have a big dataset to preprocess, you get this error. Use a bigger value to solve this error or preprocess your dataset in a non-distributed fashion.<|||||>@sgugger what if I am launching my script with a "torch.distributed.launch" utility?
Then, even if I update the ddp_timeout it does not get reflected, and the processes halt in 30 minutes (the default time).<|||||>met same problem<|||||>If you use `torch.distributed.launch` with a `ddp_timeout` that is not listened to, it sounds like a bug in PyTorch ;-) |
transformers | 17,105 | closed | propagate "attention_mask" dtype for "use_past" in OnnxConfig.generate_dummy_inputs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16538
The `mask_dtype` is propagated to the `torch.ones()` call that produces the "attention_mask" extension for `past_key_values` in `generate_dummy_inputs`. This ensures the input datatype expected by the exported ONNX model matches the default "attention_mask" `dtype`.
The fix is applied for configs where the pattern was used.
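Schematically, the gist of the pattern (an illustration only, not the exact diff; the real change lives inside `generate_dummy_inputs` of the affected `OnnxConfig` classes, and the helper name below is made up):
```python
import torch


# When extending "attention_mask" to cover the past_key_values length, reuse the
# dtype of the existing mask instead of torch.ones()'s float32 default, so the
# exported graph sees the usual int64 attention mask.
def extend_attention_mask(common_inputs: dict, past_length: int) -> dict:
    mask = common_inputs["attention_mask"]
    batch = mask.shape[0]
    common_inputs["attention_mask"] = torch.cat(
        [mask, torch.ones(batch, past_length, dtype=mask.dtype)], dim=1
    )
    return common_inputs
```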
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
The existing tests use the `*OnnxConfig.generate_dummy_inputs` method to produce the inputs passed to `session.run(...)` for testing. For this reason the issue was not caught by the tests, since the same inputs are used for export and for testing. I'm not sure whether specific tests for input datatypes are required.
Here is a [notebook](https://colab.research.google.com/gist/arampacha/b123a1350223665201d3a3c6056cf490/hf_onnx_inference.ipynb) for verifying the fix works as expected.
## Who can review?
@lewtun
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-05-2022 23:00:04 | 05-05-2022 23:00:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,104 | closed | Added BigBirdPegasus onnx config | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added BigBirdPegasus OnnxConfig to make this model available for conversion.
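For reference, once this is merged the export should work through the standard CLI, e.g. `python -m transformers.onnx --model=google/bigbird-pegasus-large-arxiv --feature=seq2seq-lm onnx_output/` (the model id, feature name and output directory here are only illustrative).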
Sorry, in my last pull request I force-pushed with the wrong commit hash, which closed that PR. So I created a new one with the same changes. Please consider it.
@ChainYo @lewtun
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- ~~[ ] Did you write any new necessary tests?~~
- I have also checked that the slow tests pass successfully by running
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "bigbird_pegasus"
```
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-05-2022 17:23:03 | 05-05-2022 17:23:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Oh you closed the other one?<|||||>> Oh you closed the other one?
While rebasing, I mistakenly force-pushed with the wrong commit hash, which reset the head of my branch to the latest commit from the main branch. All my previous commits were lost and the pull request got closed automatically, so I had no option other than creating a new pull request.
|
transformers | 17,103 | closed | Fix link to example scripts | This PR fixes #17098 - a broken link to the example scripts in the `Trainer` docs. | 05-05-2022 16:53:00 | 05-05-2022 16:53:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,102 | closed | TypeError: forward() got an unexpected keyword argument 'labels' with mt5-small | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have a problem in the training of a `google/mt5-small`
```python
device = "cuda:0" if torch.cuda.is_available() else "cpu"
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
# split
data_files = {"train": "train.csv", "test": "test.csv"}
sentences = load_dataset("loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
download_mode="force_redownload"
)
# filter
not_none_sentences = sentences.filter(lambda example: example['label'] is not None)
#tokenizer
model_name = 'google/mt5-small'
tokenizer = AutoTokenizer.from_pretrained(model_name)
def tokenize_function(examples):
tokens = tokenizer(examples["text"], padding="max_length", truncation=True, max_length=128)
tokens['label'] = features["label"].str2int(examples['label'])
return tokens
tokenized_datasets = not_none_sentences.map(tokenize_function, batched=True)
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
# model
model = MT5EncoderModel.from_pretrained(model_name, num_labels=num_labels)
model = model.to(device)
# metrics
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
print(eval_pred)
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
# train
training_args = TrainingArguments("checkpoints",
per_device_train_batch_size=128,
num_train_epochs=3,
learning_rate=3e-05)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=full_train_dataset,
eval_dataset=full_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-26-ffe0f4836481> in <module>
6 compute_metrics=compute_metrics,
7 )
----> 8 trainer.train()
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1420 tr_loss_step = self.training_step(model, inputs)
1421 else:
-> 1422 tr_loss_step = self.training_step(model, inputs)
1423
1424 if (
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in training_step(self, model, inputs)
2009
2010 with self.autocast_smart_context_manager():
-> 2011 loss = self.compute_loss(model, inputs)
2012
2013 if self.args.n_gpu > 1:
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
2041 else:
2042 labels = None
-> 2043 outputs = model(**inputs)
2044 # Save past state if it exists
2045 # TODO: this needs to be fixed and made cleaner later.
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'labels'
```
### Expected behavior
```shell
I tried with another model and it worked (using: `AutoModelForSequenceClassification`): 'microsoft/xtremedistil-l6-h256-uncased'.
```
| 05-05-2022 16:01:13 | 05-05-2022 16:01:13 | Since it seems linked to T5 specifically, pinging @patrickvonplaten<|||||>cc @patil-suraj could you take a look here?<|||||>Looking into it!<|||||>Hi @paulthemagno !
This is because `MT5EncoderModel` or the `T5EncoderModel` is just a base model and does not have any head. So it does not accept the `labels` argument.
https://github.com/huggingface/transformers/blob/30be0da5da83419329b2bde93e4dada0ce7e31ae/src/transformers/models/t5/modeling_t5.py#L1812-L1821
To use this model for sequence classification, you could create a custom module and add a seq classification head on top of it, add the `labels` argument in `forward` and compute and return the loss. It should look very similar to how `BertForSequenceClassification` is implemented.
https://github.com/huggingface/transformers/blob/30be0da5da83419329b2bde93e4dada0ce7e31ae/src/transformers/models/bert/modeling_bert.py#L1508
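For example, a minimal sketch of such a wrapper (the class name, pooling strategy and head below are illustrative choices, not an existing `transformers` class):
```python
from torch import nn

from transformers import MT5EncoderModel


class MT5ForSequenceClassification(nn.Module):
    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.encoder = MT5EncoderModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)
        self.num_labels = num_labels

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden = outputs.last_hidden_state
        # mean-pool the encoder hidden states over non-padding tokens
        mask = attention_mask.unsqueeze(-1).type_as(hidden)
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        logits = self.classifier(self.dropout(pooled))
        loss = None
        if labels is not None:
            # this is the part the base encoder model does not do for you
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        # returning "loss" and "logits" keeps the module usable with Trainer
        return {"loss": loss, "logits": logits}
```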
Hope this helps :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thank you @patil-suraj, I followed your hints to create a `T5ForSequenceClassification` class, taking inspiration from `BertForSequenceClassification`.
I forked the latest version of `transformers` from main and pushed my updates. You can find it [here](https://github.com/paulthemagno/transformers).
Since BERT uses a pooled output (T5 does not) and `BertModel` is encoder-only (while `T5Model` is encoder-decoder), I'm not sure about my edits; can you please check them?
---
some updates:
### Difference between Bert/T5Model forward returned object
The main point is the difference between the objects returned by the `forward` functions of `BertModel` and `T5Model`.
- `BertModel` returns an object of class [BaseModelOutputWithPoolingAndCrossAttentions](https://github.com/huggingface/transformers/blob/4bd36f1853501c453dc0ae994f789311468b87bc/src/transformers/modeling_outputs.py#L195) that contains the property `pooler_output`
- `T5Model` returns an object of class [Seq2SeqModelOutput](https://github.com/huggingface/transformers/blob/4bd36f1853501c453dc0ae994f789311468b87bc/src/transformers/modeling_outputs.py#L290) that doesn't have a `pooler_output` variable.
So in `T5ForSequenceClassification` forward function I tried to pool the output of `Seq2SeqModelOutput` ([line](https://github.com/paulthemagno/transformers/blob/12b33f40c8fa29938ac6c8c2bcfd5e9dd224a911/src/transformers/models/t5/modeling_t5.py#L1950)) as done in `BertModel`:
```python
pooled_output = self.pooler(outputs[0]) if self.pooler is not None else None
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
```
I didn't add it in `T5Model` directly because I didn't want to "break" its output, but probably you know how to do that in a cleaner way. For the same reason I defined the pooler in `T5ForSequenceClassification` ([line](https://github.com/paulthemagno/transformers/blob/12b33f40c8fa29938ac6c8c2bcfd5e9dd224a911/src/transformers/models/t5/modeling_t5.py#L1898)) and not in `T5Model` as done in `BertModel`:
```python
self.pooler = T5Pooler(config) #if add_pooling_layer else None
```
`T5Pooler` is a copy of `BertPooler`.
### T5Model has Encoder-Decoder while Bert only Encoder
The other doubt I have is that `T5Model` init needs both `input/embeds_ids` and `decoder_input/embeds_ids`, while `BertModel` only the first ones. So in `T5ForSequenceClassification` I pass also the `decoder_input/embeds_ids` params, but with the same values of `input/embeds_ids` ([line](https://github.com/paulthemagno/transformers/blob/12b33f40c8fa29938ac6c8c2bcfd5e9dd224a911/src/transformers/models/t5/modeling_t5.py#L1945)). This probably could be a mistake.
```python
outputs = self.t5(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
# this is my main doubt: T5Model needs boht input_ids/embeds and decoder_input_ids/emebds while BertModel only input_ids/embeds
# I tried to pass to T5Model the same values for these variables but I'm not sure about that
decoder_input_ids=input_ids,
decoder_inputs_embeds=inputs_embeds,
)
```
I tried to run a training on a `ml.g4dn.xlarge` GPU instance using a `t5-small`, but it seems very slow. Obviously, I replaced the definition of the model in the issue with:
```python
model = T5ForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
```
Anyway, the training isn't raising errors.
Thank you!! |
transformers | 17,101 | closed | Update CLIPFeatureExtractor to convert PIL image to RGB | Currently, PIL images in RGBA format throw an error when being processed by CLIPFeatureExtractor.
CLIPFeatureExtractor.normalize() throws the following error:
.../sentence-transformer/lib/python3.9/site-packages/transformers/image_utils.py", line 185, in normalize return (image - mean) / std ValueError: operands could not be broadcast together with shapes (4,224,224) (3,)
The original [clip model preprocesses PIL Images](https://github.com/openai/CLIP/blob/main/clip/clip.py#L74) by converting all PIL images into RGB format. | 05-05-2022 14:41:44 | 05-05-2022 14:41:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>hi @patil-suraj, re-created the PR as discussed 😄 |
transformers | 17,100 | closed | Tokenization in run_mlm is not correct (maybe)? | ### System Info
```shell
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `run_mlm.py`, unless I am missing something (which is completely possible), the two functions `tokenize_function` and `group_texts` end up tokenizing sentences in this way (for BERT, for example):
The sentence "One sentence. Another sentence. One last sentence." is tokenized as
[101, id_One, id_sentence, 102, 101, id_Another, id_sentece, 102, 101, id_One, id_last, id_sentence, 102]
instead of
[101, id_One, id_sentence, 102, id_Another, id_sentece, 102, id_One, id_last, id_sentence, 102]
I believe this ends up being in conflict (not erroring but behaving differently) with the default tokenizer behaviour on fine tuning tasks with pairs of sentences such as mnli. Is this intended?
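For reference, a minimal snippet that reproduces the id pattern I describe (only an illustration of the behaviour, not the script's actual code):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lines = ["One sentence.", "Another sentence.", "One last sentence."]

# what tokenize_function produces: one encoding per line, each with its own [CLS]/[SEP]
per_line = [tokenizer(line)["input_ids"] for line in lines]

# what group_texts then does (before chunking into block_size): plain concatenation
concatenated = [token_id for ids in per_line for token_id in ids]
print(concatenated)  # -> [CLS] ... [SEP] [CLS] ... [SEP] [CLS] ... [SEP]
```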
### Expected behavior
```shell
Maybe this is the expected tokenization
[101, id_One, id_sentence, 102, id_Another, id_sentece, 102, id_One, id_last, id_sentence, 102]?
```
| 05-05-2022 14:30:56 | 05-05-2022 14:30:56 | This is just an example, as mentioned in the main example README, and you are free to adapt the data processing to your needs.
This script is not reproducing the BERT pretraining (which also has some next sentence prediction objective), it just gives an example of masked language modeling.<|||||>Ok I thought maybe it would make sense pointing it out since it did trick me :) |
transformers | 17,099 | closed | PyTorch FSDP integration in Trainer | # What does this PR do?
PyTorch recently upstreamed the Fairscale FSDP into PyTorch Distributed with additional optimizations. This PR is aimed at integrating it into Trainer API.
- It enables Distributed Training at Scale. It's a wrapper for sharding Module parameters across data parallel workers. This is inspired by Xu et al. as well as the ZeRO Stage 3 from DeepSpeed.
- PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.
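A usage sketch of what this enables on the `Trainer` side (the argument names follow the merged integration and should be treated as an assumption for this draft):
```python
from transformers import TrainingArguments

# Enable FSDP sharding through the Trainer. The script still has to be launched
# with a distributed launcher, e.g. `torchrun --nproc_per_node=8 my_script.py`.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    fsdp="full_shard auto_wrap",  # shard params, grads and optimizer state (ZeRO-3 style)
)
```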
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 05-05-2022 14:18:29 | 05-05-2022 14:18:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Opened a new PR on the branch of main repo instead of fork [17136](https://github.com/huggingface/transformers/pull/17136) |
transformers | 17,098 | closed | Docs link to example scripts is broken | This is about **documentation link error**.
## Error
At https://huggingface.co/docs/transformers/main/en/main_classes/trainer#trainer
>It’s used in most of the [example scripts](https://huggingface.co/docs/transformers/main/en/examples).
The link *example scripts* https://huggingface.co/docs/transformers/main/en/examples is broken, got 404.
<img width="549" alt="image" src="https://user-images.githubusercontent.com/21273221/166940684-aa6489cb-d47a-4c94-beab-c197d0b588f0.png">
## Source
https://github.com/huggingface/transformers/blob/dd16a113a48cc2117953397e9611383577d6a70d/docs/source/en/main_classes/trainer.mdx#L15
## Question
The *example scripts* link means https://github.com/huggingface/transformers/tree/main/examples ?
| 05-05-2022 14:10:29 | 05-05-2022 14:10:29 | Hi, a PR has been opened to fix this issue. Thanks for reporting! :)
The problem was `examples` had been removed from the `toctree` which caused the link to be broken. |
transformers | 17,097 | closed | Remove torchhub test | # What does this PR do?
The torchhub test is failing with a cryptic error. This PR removes the test but keeps the integration intact. As the Hugging Face Hub becomes the main marketplace for `transformers` models, it should be the integration users use and while torchhub remains supported we will not test it going forward. | 05-05-2022 13:56:26 | 05-05-2022 13:56:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17097). All of your documentation changes will be reflected on that endpoint.<|||||>## 😭😭 |
transformers | 17,096 | closed | pip install "sacremoses>=0.0.50" breaks on SageMaker Studio | ### System Info
```shell
This was verified today on a fresh SageMaker Studio instance running in us-west-2.
It's not a Transformers issue, but as sacremoses is a dependency, this is likely to break 'pip install transformers' on SageMaker Studio at some point.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1) Open an SM Studio notebook
2) Run the following cell:
```
%%sh
pip install "sacremoses>=0.0.50"
```
The obvious workaround for now is
```
pip install "sacremoses==0.0.49"
```
### Expected behavior
```shell
sacremoses should install without error.
```
| 05-05-2022 12:46:07 | 05-05-2022 12:46:07 | Thanks for the issue @juliensimon, this should be fixed by https://github.com/huggingface/transformers/pull/17049. It will be in the next release which should drop early next week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should be fixed now! |
transformers | 17,095 | closed | add type annotations for `gptj` | # What does this PR do?
This PR adds type annotations for GPT-J (PyTorch) as described in #16059.
## Who can review?
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed.
| 05-05-2022 12:23:43 | 05-05-2022 12:23:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,094 | closed | set_verbosity_info not working | ### System Info
Transformers version: 4.18
### Issue
The example at https://huggingface.co/docs/transformers/main_classes/logging does not work.
The workaround used in the example is transformers.trainer.logger instead of get_logger.
### Who can help?
Not sure
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Reproduction
```
import transformers
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger(__name__)
logger.info("set_verbosity_info before get_logger INFO")
logger.warning("WARN")
logging.set_verbosity_info()
logger.info("set_verbosity_info after get_logger INFO")
transformers.trainer.logger.info("TRAINER LOG INFO")
```
currently outputs
```
WARN
TRAINER LOG INFO
```
missing the info on the logger, so there is a disconnect between get_logger and set_verbosity_info
### Expected behavior
output
```
WARN
set_verbosity_info before get_logger INFO
set_verbosity_info after get_logger INFO
TRAINER LOG INFO
```
| 05-05-2022 09:50:14 | 05-05-2022 09:50:14 | Hello @sanderland! The `logging` utility of the `transformers` library is made to control the logging of the library itself, not of scripts outside of it.
It's based off of the way you instantiate the logger with `logging.get_logger` -> all files in the `transformers` module will have `transformers` as their parent, therefore all loggers in the library will follow it.
If for some reason you want to leverage it as well, you could replace `__name__` by something related to `transformers` so that the logging utility understands it's dealing with something similar.
For example something like that:
```diff
import transformers
from transformers.utils import logging
logging.set_verbosity_info()
- logger = logging.get_logger(__name__)
+ logger = logging.get_logger('transformers')
logger.info("set_verbosity_info before get_logger INFO")
logger.warning("WARN")
logging.set_verbosity_info()
logger.info("set_verbosity_info after get_logger INFO")
transformers.trainer.logger.info("TRAINER LOG INFO")
```
which should use the parent logger.
You can also define your own logger with:
```diff
import transformers
from transformers.utils import logging
logging.set_verbosity_info()
- logger = logging.get_logger(__name__)
+ logger = logging.get_logger('transformers.my_cool_module')
logger.info("set_verbosity_info before get_logger INFO")
logger.warning("WARN")
logging.set_verbosity_info()
logger.info("set_verbosity_info after get_logger INFO")
transformers.trainer.logger.info("TRAINER LOG INFO")
```
(replace `my_cool_module` with what you want)
Both output the following:
```
set_verbosity_info before get_logger INFO
WARN
set_verbosity_info after get_logger INFO
TRAINER LOG INFO
```<|||||>Thanks! In any case, https://huggingface.co/docs/transformers/main_classes/logging still seems really really strange in its example for 'Here is an example of how to use logging in a module'.
<|||||>You're right, the documentation here seems incorrect. Would you like to open a PR to update it with what you've identified in this issue?<|||||>Closing this as we have a PR now :) |
transformers | 17,093 | closed | Import transformers and datasets not possible | ### System Info
```shell
$ transformers-cli env
2022-05-05 11:22:48.908890: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
WARNING:tensorflow:From /home/arnold/bin/anaconda/envs/nlp/lib/python3.8/site-packages/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-05-05 11:22:50.917777: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-05-05 11:22:50.920262: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-05-05 11:22:50.920313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-05-05 11:22:50.921473: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:50.921634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:08:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2022-05-05 11:22:50.921654: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2022-05-05 11:22:50.966290: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2022-05-05 11:22:50.966413: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.10
2022-05-05 11:22:51.048655: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-05-05 11:22:51.054988: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-05-05 11:22:51.100060: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2022-05-05 11:22:51.106893: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2022-05-05 11:22:51.187096: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.7
2022-05-05 11:22:51.187319: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:51.187572: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:51.187671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2022-05-05 11:22:51.188275: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2022-05-05 11:22:52.310987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-05-05 11:22:52.311023: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2022-05-05 11:22:52.311030: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2022-05-05 11:22:52.311832: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:52.312058: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:52.312219: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 11:22:52.312331: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/device:GPU:0 with 9224 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:08:00.0, compute capability: 7.5)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.14.1
- Platform: Linux-5.13.0-40-lowlatency-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
I cannot install transformers and datasets into the same conda environment. When I do that and try to import transformers into Python I get the error: ImportError: cannot import name 'create_repo' from 'huggingface_hub' (/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/huggingface_hub/__init__.py)
When I install either library without the other, I can import that library.
The problem arises when I try to reproduce the examples of chapter 2, page 33 of the book 'Natural language processing with transformers'. Up until that page everything was ok.
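To narrow this down, here is a quick diagnostic that can be run inside the activated environment (just a sketch; it only prints what conda actually resolved):
```python
# Check which huggingface_hub version conda resolved; the top-level `create_repo`
# import used by transformers only exists in reasonably recent releases.
import huggingface_hub

print("huggingface_hub:", huggingface_hub.__version__)
print("has create_repo:", hasattr(huggingface_hub, "create_repo"))
```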
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
$ conda create -n test transformers datasets torch
$ python
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/__init__.py", line 43, in <module>
from . import dependency_versions_check
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/dependency_versions_check.py", line 36, in <module>
from .file_utils import is_tokenizers_available
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/file_utils.py", line 52, in <module>
from huggingface_hub import HfFolder, Repository, create_repo, list_repo_files, whoami
ImportError: cannot import name 'create_repo' from 'huggingface_hub' (/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/huggingface_hub/__init__.py)
>>>
```
I did some experiments to try to determine the cause of the error. The output you can find below.
```
$ conda create -n test transformers
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/arnold/bin/anaconda/envs/test
added / updated specs:
- transformers
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2022.4.26 | h06a4308_0 124 KB
ninja-1.10.2 | h06a4308_5 8 KB
ninja-base-1.10.2 | hd09550d_5 109 KB
numpy-1.21.5 | py39he7a7128_2 10 KB
numpy-base-1.21.5 | py39hf524024_2 4.9 MB
pytorch-1.10.2 |cpu_py39hfa7516b_0 44.1 MB
setuptools-61.2.0 | py39h06a4308_0 1011 KB
tokenizers-0.10.3 | py39hb317417_1 2.4 MB
tqdm-4.64.0 | py39h06a4308_0 126 KB
urllib3-1.26.9 | py39h06a4308_0 180 KB
xz-5.2.5 | h7f8727e_1 339 KB
------------------------------------------------------------
Total: 53.2 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
_openmp_mutex pkgs/main/linux-64::_openmp_mutex-4.5-1_gnu
blas pkgs/main/linux-64::blas-1.0-mkl
brotlipy pkgs/main/linux-64::brotlipy-0.7.0-py39h27cfd23_1003
ca-certificates pkgs/main/linux-64::ca-certificates-2022.4.26-h06a4308_0
certifi pkgs/main/linux-64::certifi-2021.10.8-py39h06a4308_2
cffi pkgs/main/linux-64::cffi-1.15.0-py39hd667e15_1
charset-normalizer pkgs/main/noarch::charset-normalizer-2.0.4-pyhd3eb1b0_0
click pkgs/main/linux-64::click-8.0.4-py39h06a4308_0
cryptography pkgs/main/linux-64::cryptography-36.0.0-py39h9ce1e76_0
dataclasses pkgs/main/noarch::dataclasses-0.8-pyh6d0b6a4_7
filelock pkgs/main/noarch::filelock-3.6.0-pyhd3eb1b0_0
future pkgs/main/linux-64::future-0.18.2-py39h06a4308_1
huggingface_hub pkgs/main/noarch::huggingface_hub-0.2.1-pyhd3eb1b0_0
idna pkgs/main/noarch::idna-3.3-pyhd3eb1b0_0
importlib-metadata pkgs/main/linux-64::importlib-metadata-4.11.3-py39h06a4308_0
importlib_metadata pkgs/main/noarch::importlib_metadata-4.11.3-hd3eb1b0_0
intel-openmp pkgs/main/linux-64::intel-openmp-2021.4.0-h06a4308_3561
joblib pkgs/main/noarch::joblib-1.1.0-pyhd3eb1b0_0
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.35.1-h7274673_9
libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.3.0-h5101ec6_17
libgomp pkgs/main/linux-64::libgomp-9.3.0-h5101ec6_17
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.3.0-hd4cf53a_17
mkl pkgs/main/linux-64::mkl-2021.4.0-h06a4308_640
mkl-service pkgs/main/linux-64::mkl-service-2.4.0-py39h7f8727e_0
mkl_fft pkgs/main/linux-64::mkl_fft-1.3.1-py39hd3c417c_0
mkl_random pkgs/main/linux-64::mkl_random-1.2.2-py39h51133e4_0
ncurses pkgs/main/linux-64::ncurses-6.3-h7f8727e_2
ninja pkgs/main/linux-64::ninja-1.10.2-h06a4308_5
ninja-base pkgs/main/linux-64::ninja-base-1.10.2-hd09550d_5
numpy pkgs/main/linux-64::numpy-1.21.5-py39he7a7128_2
numpy-base pkgs/main/linux-64::numpy-base-1.21.5-py39hf524024_2
openssl pkgs/main/linux-64::openssl-1.1.1n-h7f8727e_0
packaging pkgs/main/noarch::packaging-21.3-pyhd3eb1b0_0
pip pkgs/main/linux-64::pip-21.2.4-py39h06a4308_0
pycparser pkgs/main/noarch::pycparser-2.21-pyhd3eb1b0_0
pyopenssl pkgs/main/noarch::pyopenssl-22.0.0-pyhd3eb1b0_0
pyparsing pkgs/main/noarch::pyparsing-3.0.4-pyhd3eb1b0_0
pysocks pkgs/main/linux-64::pysocks-1.7.1-py39h06a4308_0
python pkgs/main/linux-64::python-3.9.12-h12debd9_0
pytorch pkgs/main/linux-64::pytorch-1.10.2-cpu_py39hfa7516b_0
pyyaml pkgs/main/linux-64::pyyaml-6.0-py39h7f8727e_1
readline pkgs/main/linux-64::readline-8.1.2-h7f8727e_1
regex pkgs/main/linux-64::regex-2022.3.15-py39h7f8727e_0
requests pkgs/main/noarch::requests-2.27.1-pyhd3eb1b0_0
sacremoses pkgs/main/noarch::sacremoses-0.0.43-pyhd3eb1b0_0
setuptools pkgs/main/linux-64::setuptools-61.2.0-py39h06a4308_0
six pkgs/main/noarch::six-1.16.0-pyhd3eb1b0_1
sqlite pkgs/main/linux-64::sqlite-3.38.2-hc218d9a_0
tk pkgs/main/linux-64::tk-8.6.11-h1ccaba5_0
tokenizers pkgs/main/linux-64::tokenizers-0.10.3-py39hb317417_1
tqdm pkgs/main/linux-64::tqdm-4.64.0-py39h06a4308_0
transformers pkgs/main/noarch::transformers-4.14.1-pyhd3eb1b0_0
typing-extensions pkgs/main/noarch::typing-extensions-4.1.1-hd3eb1b0_0
typing_extensions pkgs/main/noarch::typing_extensions-4.1.1-pyh06a4308_0
tzdata pkgs/main/noarch::tzdata-2022a-hda174b7_0
urllib3 pkgs/main/linux-64::urllib3-1.26.9-py39h06a4308_0
wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0
xz pkgs/main/linux-64::xz-5.2.5-h7f8727e_1
yaml pkgs/main/linux-64::yaml-0.2.5-h7b6447c_0
zipp pkgs/main/noarch::zipp-3.7.0-pyhd3eb1b0_0
zlib pkgs/main/linux-64::zlib-1.2.12-h7f8727e_2
Proceed ([y]/n)?
Downloading and Extracting Packages
pytorch-1.10.2 | 44.1 MB | ######################################################### | 100%
tqdm-4.64.0 | 126 KB | ######################################################### | 100%
ninja-base-1.10.2 | 109 KB | ######################################################### | 100%
numpy-1.21.5 | 10 KB | ######################################################### | 100%
tokenizers-0.10.3 | 2.4 MB | ######################################################### | 100%
numpy-base-1.21.5 | 4.9 MB | ######################################################### | 100%
ca-certificates-2022 | 124 KB | ######################################################### | 100%
xz-5.2.5 | 339 KB | ######################################################### | 100%
urllib3-1.26.9 | 180 KB | ######################################################### | 100%
ninja-1.10.2 | 8 KB | ######################################################### | 100%
setuptools-61.2.0 | 1011 KB | ######################################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate test
#
# To deactivate an active environment, use
#
# $ conda deactivate
$ conda activate test
$ conda install datasets
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.34=0
- python=3.9 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.34
$ conda create -n test transformers datasets
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/arnold/bin/anaconda/envs/test
added / updated specs:
- datasets
- transformers
The following packages will be downloaded:
package | build
---------------------------|-----------------
abseil-cpp-20210324.2 | h2531618_0 965 KB
arrow-cpp-3.0.0 | py38h6b21186_4 7.1 MB
boost-cpp-1.73.0 | h27cfd23_11 25 KB
conllu-4.4.1 | pyhd3eb1b0_0 23 KB
datasets-1.12.1 | pyhd3eb1b0_0 193 KB
dill-0.3.4 | pyhd3eb1b0_0 61 KB
double-conversion-3.1.5 | he6710b0_1 235 KB
fsspec-2022.2.0 | pyhd3eb1b0_0 98 KB
gflags-2.2.2 | he6710b0_0 126 KB
glog-0.5.0 | h2531618_0 101 KB
grpc-cpp-1.39.0 | hae934f6_5 2.8 MB
huggingface_hub-0.0.17 | pyhd3eb1b0_0 59 KB
libcurl-7.82.0 | h0b77cf5_0 342 KB
libevent-2.1.12 | h8f2d780_0 425 KB
libprotobuf-3.17.2 | h4ff587b_1 2.0 MB
libssh2-1.10.0 | h8f2d780_0 274 KB
libthrift-0.14.2 | hcc01f38_0 2.8 MB
lxml-4.8.0 | py38h1f438cf_0 1.3 MB
multiprocess-0.70.12.2 | py38h7f8727e_0 226 KB
numpy-1.21.5 | py38he7a7128_2 10 KB
numpy-base-1.21.5 | py38hf524024_2 4.8 MB
orc-1.6.9 | ha97a36c_3 623 KB
pyarrow-3.0.0 | py38he0739d4_3 1.9 MB
python-xxhash-2.0.2 | py38h7f8727e_0 24 KB
re2-2020.11.01 | h2531618_1 315 KB
snappy-1.1.9 | h295c915_0 636 KB
tqdm-4.49.0 | py_0 55 KB
uriparser-0.9.3 | he6710b0_1 48 KB
utf8proc-2.6.1 | h27cfd23_0 308 KB
xxhash-0.8.0 | h7f8727e_3 83 KB
yarl-1.6.3 | py38h27cfd23_0 136 KB
------------------------------------------------------------
Total: 28.0 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
_openmp_mutex pkgs/main/linux-64::_openmp_mutex-4.5-1_gnu
abseil-cpp pkgs/main/linux-64::abseil-cpp-20210324.2-h2531618_0
aiohttp pkgs/main/linux-64::aiohttp-3.8.1-py38h7f8727e_1
aiosignal pkgs/main/noarch::aiosignal-1.2.0-pyhd3eb1b0_0
arrow-cpp pkgs/main/linux-64::arrow-cpp-3.0.0-py38h6b21186_4
async-timeout pkgs/main/noarch::async-timeout-4.0.1-pyhd3eb1b0_0
attrs pkgs/main/noarch::attrs-21.4.0-pyhd3eb1b0_0
aws-c-common pkgs/main/linux-64::aws-c-common-0.4.57-he6710b0_1
aws-c-event-stream pkgs/main/linux-64::aws-c-event-stream-0.1.6-h2531618_5
aws-checksums pkgs/main/linux-64::aws-checksums-0.1.9-he6710b0_0
aws-sdk-cpp pkgs/main/linux-64::aws-sdk-cpp-1.8.185-hce553d0_0
bcj-cffi pkgs/main/linux-64::bcj-cffi-0.5.1-py38h295c915_0
blas pkgs/main/linux-64::blas-1.0-mkl
boost-cpp pkgs/main/linux-64::boost-cpp-1.73.0-h27cfd23_11
bottleneck pkgs/main/linux-64::bottleneck-1.3.4-py38hce1f21e_0
brotli pkgs/main/linux-64::brotli-1.0.9-he6710b0_2
brotli-python pkgs/main/linux-64::brotli-python-1.0.9-py38heb0550a_2
brotlicffi pkgs/main/linux-64::brotlicffi-1.0.9.2-py38h295c915_0
brotlipy pkgs/main/linux-64::brotlipy-0.7.0-py38h27cfd23_1003
bzip2 pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0
c-ares pkgs/main/linux-64::c-ares-1.18.1-h7f8727e_0
ca-certificates pkgs/main/linux-64::ca-certificates-2022.4.26-h06a4308_0
certifi pkgs/main/linux-64::certifi-2021.10.8-py38h06a4308_2
cffi pkgs/main/linux-64::cffi-1.15.0-py38hd667e15_1
charset-normalizer pkgs/main/noarch::charset-normalizer-2.0.4-pyhd3eb1b0_0
click pkgs/main/linux-64::click-8.0.4-py38h06a4308_0
conllu pkgs/main/noarch::conllu-4.4.1-pyhd3eb1b0_0
cryptography pkgs/main/linux-64::cryptography-36.0.0-py38h9ce1e76_0
dataclasses pkgs/main/noarch::dataclasses-0.8-pyh6d0b6a4_7
datasets pkgs/main/noarch::datasets-1.12.1-pyhd3eb1b0_0
dill pkgs/main/noarch::dill-0.3.4-pyhd3eb1b0_0
double-conversion pkgs/main/linux-64::double-conversion-3.1.5-he6710b0_1
et_xmlfile pkgs/main/linux-64::et_xmlfile-1.1.0-py38h06a4308_0
filelock pkgs/main/noarch::filelock-3.6.0-pyhd3eb1b0_0
frozenlist pkgs/main/linux-64::frozenlist-1.2.0-py38h7f8727e_0
fsspec pkgs/main/noarch::fsspec-2022.2.0-pyhd3eb1b0_0
future pkgs/main/linux-64::future-0.18.2-py38_1
gflags pkgs/main/linux-64::gflags-2.2.2-he6710b0_0
glog pkgs/main/linux-64::glog-0.5.0-h2531618_0
gmp pkgs/main/linux-64::gmp-6.2.1-h2531618_2
grpc-cpp pkgs/main/linux-64::grpc-cpp-1.39.0-hae934f6_5
huggingface_hub pkgs/main/noarch::huggingface_hub-0.0.17-pyhd3eb1b0_0
icu pkgs/main/linux-64::icu-58.2-he6710b0_3
idna pkgs/main/noarch::idna-3.3-pyhd3eb1b0_0
importlib-metadata pkgs/main/linux-64::importlib-metadata-4.11.3-py38h06a4308_0
importlib_metadata pkgs/main/noarch::importlib_metadata-4.11.3-hd3eb1b0_0
intel-openmp pkgs/main/linux-64::intel-openmp-2021.4.0-h06a4308_3561
joblib pkgs/main/noarch::joblib-1.1.0-pyhd3eb1b0_0
krb5 pkgs/main/linux-64::krb5-1.19.2-hac12032_0
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.35.1-h7274673_9
libboost pkgs/main/linux-64::libboost-1.73.0-h3ff78a5_11
libcurl pkgs/main/linux-64::libcurl-7.82.0-h0b77cf5_0
libedit pkgs/main/linux-64::libedit-3.1.20210910-h7f8727e_0
libev pkgs/main/linux-64::libev-4.33-h7f8727e_1
libevent pkgs/main/linux-64::libevent-2.1.12-h8f2d780_0
libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.3.0-h5101ec6_17
libgomp pkgs/main/linux-64::libgomp-9.3.0-h5101ec6_17
libnghttp2 pkgs/main/linux-64::libnghttp2-1.46.0-hce63b2e_0
libprotobuf pkgs/main/linux-64::libprotobuf-3.17.2-h4ff587b_1
libssh2 pkgs/main/linux-64::libssh2-1.10.0-h8f2d780_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.3.0-hd4cf53a_17
libthrift pkgs/main/linux-64::libthrift-0.14.2-hcc01f38_0
libxml2 pkgs/main/linux-64::libxml2-2.9.12-h03d6c58_0
libxslt pkgs/main/linux-64::libxslt-1.1.34-hc22bd24_0
lxml pkgs/main/linux-64::lxml-4.8.0-py38h1f438cf_0
lz4-c pkgs/main/linux-64::lz4-c-1.9.3-h295c915_1
mkl pkgs/main/linux-64::mkl-2021.4.0-h06a4308_640
mkl-service pkgs/main/linux-64::mkl-service-2.4.0-py38h7f8727e_0
mkl_fft pkgs/main/linux-64::mkl_fft-1.3.1-py38hd3c417c_0
mkl_random pkgs/main/linux-64::mkl_random-1.2.2-py38h51133e4_0
multidict pkgs/main/linux-64::multidict-5.2.0-py38h7f8727e_2
multiprocess pkgs/main/linux-64::multiprocess-0.70.12.2-py38h7f8727e_0
multivolumefile pkgs/main/noarch::multivolumefile-0.2.3-pyhd3eb1b0_0
ncurses pkgs/main/linux-64::ncurses-6.3-h7f8727e_2
ninja pkgs/main/linux-64::ninja-1.10.2-h06a4308_5
ninja-base pkgs/main/linux-64::ninja-base-1.10.2-hd09550d_5
numexpr pkgs/main/linux-64::numexpr-2.8.1-py38h6abb31d_0
numpy pkgs/main/linux-64::numpy-1.21.5-py38he7a7128_2
numpy-base pkgs/main/linux-64::numpy-base-1.21.5-py38hf524024_2
openpyxl pkgs/main/noarch::openpyxl-3.0.9-pyhd3eb1b0_0
openssl pkgs/main/linux-64::openssl-1.1.1n-h7f8727e_0
orc pkgs/main/linux-64::orc-1.6.9-ha97a36c_3
packaging pkgs/main/noarch::packaging-21.3-pyhd3eb1b0_0
pandas pkgs/main/linux-64::pandas-1.4.2-py38h295c915_0
pip pkgs/main/linux-64::pip-21.2.4-py38h06a4308_0
py7zr pkgs/main/noarch::py7zr-0.16.1-pyhd3eb1b0_1
pyarrow pkgs/main/linux-64::pyarrow-3.0.0-py38he0739d4_3
pycparser pkgs/main/noarch::pycparser-2.21-pyhd3eb1b0_0
pycryptodomex pkgs/main/linux-64::pycryptodomex-3.10.1-py38h27cfd23_1
pyopenssl pkgs/main/noarch::pyopenssl-22.0.0-pyhd3eb1b0_0
pyparsing pkgs/main/noarch::pyparsing-3.0.4-pyhd3eb1b0_0
pyppmd pkgs/main/linux-64::pyppmd-0.16.1-py38h295c915_0
pysocks pkgs/main/linux-64::pysocks-1.7.1-py38h06a4308_0
python pkgs/main/linux-64::python-3.8.13-h12debd9_0
python-dateutil pkgs/main/noarch::python-dateutil-2.8.2-pyhd3eb1b0_0
python-xxhash pkgs/main/linux-64::python-xxhash-2.0.2-py38h7f8727e_0
pytorch pkgs/main/linux-64::pytorch-1.10.2-cpu_py38hfa7516b_0
pytz pkgs/main/noarch::pytz-2021.3-pyhd3eb1b0_0
pyyaml pkgs/main/linux-64::pyyaml-6.0-py38h7f8727e_1
pyzstd pkgs/main/linux-64::pyzstd-0.14.4-py38h7f8727e_3
re2 pkgs/main/linux-64::re2-2020.11.01-h2531618_1
readline pkgs/main/linux-64::readline-8.1.2-h7f8727e_1
regex pkgs/main/linux-64::regex-2022.3.15-py38h7f8727e_0
requests pkgs/main/noarch::requests-2.27.1-pyhd3eb1b0_0
sacremoses pkgs/main/noarch::sacremoses-0.0.43-pyhd3eb1b0_0
setuptools pkgs/main/linux-64::setuptools-61.2.0-py38h06a4308_0
six pkgs/main/noarch::six-1.16.0-pyhd3eb1b0_1
snappy pkgs/main/linux-64::snappy-1.1.9-h295c915_0
sqlite pkgs/main/linux-64::sqlite-3.38.2-hc218d9a_0
texttable pkgs/main/noarch::texttable-1.6.4-pyhd3eb1b0_0
tk pkgs/main/linux-64::tk-8.6.11-h1ccaba5_0
tokenizers pkgs/main/linux-64::tokenizers-0.10.3-py38hb317417_1
tqdm pkgs/main/noarch::tqdm-4.49.0-py_0
transformers pkgs/main/noarch::transformers-4.14.1-pyhd3eb1b0_0
typing-extensions pkgs/main/noarch::typing-extensions-4.1.1-hd3eb1b0_0
typing_extensions pkgs/main/noarch::typing_extensions-4.1.1-pyh06a4308_0
uriparser pkgs/main/linux-64::uriparser-0.9.3-he6710b0_1
urllib3 pkgs/main/linux-64::urllib3-1.26.9-py38h06a4308_0
utf8proc pkgs/main/linux-64::utf8proc-2.6.1-h27cfd23_0
wheel pkgs/main/noarch::wheel-0.37.1-pyhd3eb1b0_0
xxhash pkgs/main/linux-64::xxhash-0.8.0-h7f8727e_3
xz pkgs/main/linux-64::xz-5.2.5-h7f8727e_1
yaml pkgs/main/linux-64::yaml-0.2.5-h7b6447c_0
yarl pkgs/main/linux-64::yarl-1.6.3-py38h27cfd23_0
zipp pkgs/main/noarch::zipp-3.7.0-pyhd3eb1b0_0
zlib pkgs/main/linux-64::zlib-1.2.12-h7f8727e_2
zstd pkgs/main/linux-64::zstd-1.4.9-haebb681_0
Proceed ([y]/n)?
Downloading and Extracting Packages
libprotobuf-3.17.2 | 2.0 MB | ######################################################### | 100%
snappy-1.1.9 | 636 KB | ######################################################### | 100%
grpc-cpp-1.39.0 | 2.8 MB | ######################################################### | 100%
utf8proc-2.6.1 | 308 KB | ######################################################### | 100%
glog-0.5.0 | 101 KB | ######################################################### | 100%
orc-1.6.9 | 623 KB | ######################################################### | 100%
re2-2020.11.01 | 315 KB | ######################################################### | 100%
tqdm-4.49.0 | 55 KB | ######################################################### | 100%
huggingface_hub-0.0. | 59 KB | ######################################################### | 100%
lxml-4.8.0 | 1.3 MB | ######################################################### | 100%
libssh2-1.10.0 | 274 KB | ######################################################### | 100%
multiprocess-0.70.12 | 226 KB | ######################################################### | 100%
datasets-1.12.1 | 193 KB | ######################################################### | 100%
xxhash-0.8.0 | 83 KB | ######################################################### | 100%
conllu-4.4.1 | 23 KB | ######################################################### | 100%
double-conversion-3. | 235 KB | ######################################################### | 100%
python-xxhash-2.0.2 | 24 KB | ######################################################### | 100%
libcurl-7.82.0 | 342 KB | ######################################################### | 100%
numpy-1.21.5 | 10 KB | ######################################################### | 100%
abseil-cpp-20210324. | 965 KB | ######################################################### | 100%
numpy-base-1.21.5 | 4.8 MB | ######################################################### | 100%
libthrift-0.14.2 | 2.8 MB | ######################################################### | 100%
pyarrow-3.0.0 | 1.9 MB | ######################################################### | 100%
dill-0.3.4 | 61 KB | ######################################################### | 100%
gflags-2.2.2 | 126 KB | ######################################################### | 100%
arrow-cpp-3.0.0 | 7.1 MB | ######################################################### | 100%
uriparser-0.9.3 | 48 KB | ######################################################### | 100%
boost-cpp-1.73.0 | 25 KB | ######################################################### | 100%
fsspec-2022.2.0 | 98 KB | ######################################################### | 100%
yarl-1.6.3 | 136 KB | ######################################################### | 100%
libevent-2.1.12 | 425 KB | ######################################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate test
#
# To deactivate an active environment, use
#
# $ conda deactivate
$ conda activate test
$ python
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/__init__.py", line 43, in <module>
from . import dependency_versions_check
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/dependency_versions_check.py", line 36, in <module>
from .file_utils import is_tokenizers_available
File "/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/transformers/file_utils.py", line 52, in <module>
from huggingface_hub import HfFolder, Repository, create_repo, list_repo_files, whoami
ImportError: cannot import name 'create_repo' from 'huggingface_hub' (/home/arnold/bin/anaconda/envs/test/lib/python3.8/site-packages/huggingface_hub/__init__.py)
>>>
```
### Expected behavior
```shell
$ conda create -n test transformers datasets torch
$ python
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
>>>
```
```
| 05-05-2022 09:34:51 | 05-05-2022 09:34:51 | Hey @RnoldR! Could you try to update the version of `huggingface_hub` to a more recent version?<|||||>This is what I get with conda when installing transformers and datasets. I tried with 0.2, 0.3, 0.4, 0.5.1 but a higher version is not possible because no release candidate can be found. <|||||>I am trying with pip currently but that runs into problems with cuda.<|||||>I succeeded in creating an environment that combines transformers and datasets with pytorch *and* my GPU.
1. Create a conda environment
conda create -n pt-nlp pytorch-gpu pandas numpy matplotlib scikit-learn
2. Add transformers and datasets (and umap to be able to run the examples of Chapter 2) with pip
conda activate pt-nlp
pip install transformers datasets umap-learn
and that did the trick, at least for my machine with the RTX 2080Ti. It has the desired (higher) version of huggingface_hub, and of transformers and datasets as well. The packages as available on Anaconda don't work together; to avoid beginners like me losing time, I suggest you fix that. Thanks for your time!
collect_env
```
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 22.04 LTS
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 510.60.02
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.3.1
[conda] _pytorch_select 0.2 gpu_0
[conda] blas 1.0 mkl
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py37he8ac12f_0
[conda] mkl_fft 1.3.0 py37h54f3939_0
[conda] mkl_random 1.1.1 py37h0573a6f_0
[conda] pytorch 1.3.1 cuda100py37h53c1284_0
[conda] pytorch-gpu 1.3.1 0
```
Conda environment
```
# packages in environment at /home/arnold/bin/anaconda/envs/pt-nlp:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
_pytorch_select 0.2 gpu_0
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
asynctest 0.13.0 pypi_0 pypi
attrs 21.4.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
blas 1.0 mkl
bottleneck 1.3.4 py37hce1f21e_0
brotli 1.0.9 he6710b0_2
ca-certificates 2022.4.26 h06a4308_0
certifi 2021.10.8 py37h06a4308_2
cffi 1.15.0 py37hd667e15_1
charset-normalizer 2.0.12 pypi_0 pypi
click 8.1.3 pypi_0 pypi
cudatoolkit 10.0.130 0
cudnn 7.6.5 cuda10.0_0
cycler 0.11.0 pyhd3eb1b0_0
datasets 2.1.0 pypi_0 pypi
dbus 1.13.18 hb2f20db_0
debugpy 1.5.1 py37h295c915_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.4 pypi_0 pypi
entrypoints 0.4 py37h06a4308_0
expat 2.4.4 h295c915_0
filelock 3.6.0 pypi_0 pypi
fontconfig 2.13.1 h6c09931_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.11.0 h70c0345_0
frozenlist 1.3.0 pypi_0 pypi
fsspec 2022.3.0 pypi_0 pypi
giflib 5.2.1 h7b6447c_0
glib 2.69.1 h4ff587b_1
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
huggingface-hub 0.5.1 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.3 pypi_0 pypi
importlib-metadata 4.11.3 pypi_0 pypi
intel-openmp 2022.0.1 h06a4308_3633
ipykernel 6.9.1 py37h06a4308_0
ipython 7.31.1 py37h06a4308_0
jedi 0.18.1 py37h06a4308_1
joblib 1.1.0 pyhd3eb1b0_0
jpeg 9e h7f8727e_0
jupyter_client 7.2.2 py37h06a4308_0
jupyter_core 4.10.0 py37h06a4308_0
kiwisolver 1.3.2 py37h295c915_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 9.3.0 h5101ec6_17
libpng 1.6.37 hbc83047_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h7f8727e_2
libwebp 1.2.2 h55f646e_0
libwebp-base 1.2.2 h7f8727e_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h74e7548_1
llvmlite 0.38.0 pypi_0 pypi
lz4-c 1.9.3 h295c915_1
matplotlib 3.5.1 py37h06a4308_1
matplotlib-base 3.5.1 py37ha18d171_1
matplotlib-inline 0.1.2 pyhd3eb1b0_2
mkl 2020.2 256
mkl-service 2.3.0 py37he8ac12f_0
mkl_fft 1.3.0 py37h54f3939_0
mkl_random 1.1.1 py37h0573a6f_0
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.12.2 pypi_0 pypi
munkres 1.1.4 py_0
ncurses 6.3 h7f8727e_2
nest-asyncio 1.5.5 py37h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
numba 0.55.1 pypi_0 pypi
numexpr 2.7.3 py37hb2eb853_0
numpy 1.19.2 py37h54aff64_0
numpy-base 1.19.2 py37hfa32c7d_0
openssl 1.1.1n h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pandas 1.3.5 py37h8c16a72_0
parso 0.8.3 pyhd3eb1b0_0
pcre 8.45 h295c915_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.0.1 py37h22f2fdc_0
pip 21.2.2 py37h06a4308_0
prompt-toolkit 3.0.20 pyhd3eb1b0_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pyarrow 8.0.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pygments 2.11.2 pyhd3eb1b0_0
pynndescent 0.5.6 pypi_0 pypi
pyparsing 3.0.4 pyhd3eb1b0_0
pyqt 5.9.2 py37h05f1152_2
python 3.7.13 h12debd9_0
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.3.1 cuda100py37h53c1284_0
pytorch-gpu 1.3.1 0
pytz 2021.3 pyhd3eb1b0_0
pyyaml 6.0 pypi_0 pypi
pyzmq 22.3.0 py37h295c915_2
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1
regex 2022.4.24 pypi_0 pypi
requests 2.27.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
sacremoses 0.0.53 pypi_0 pypi
scikit-learn 1.0.2 py37h51133e4_1
scipy 1.6.2 py37h91f5cce_0
setuptools 61.2.0 py37h06a4308_0
sip 4.19.8 py37hf484d3e_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.38.3 hc218d9a_0
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.11 h1ccaba5_0
tokenizers 0.12.1 pypi_0 pypi
tornado 6.1 py37h27cfd23_0
tqdm 4.64.0 pypi_0 pypi
traitlets 5.1.1 pyhd3eb1b0_0
transformers 4.18.0 pypi_0 pypi
typing-extensions 4.2.0 pypi_0 pypi
umap 0.1.1 pypi_0 pypi
umap-learn 0.5.3 pypi_0 pypi
urllib3 1.26.9 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.37.1 pyhd3eb1b0_0
xxhash 3.0.0 pypi_0 pypi
xz 5.2.5 h7f8727e_1
yarl 1.7.2 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.8.0 pypi_0 pypi
zlib 1.2.12 h7f8727e_2
zstd 1.4.9 haebb681_0
```
|
transformers | 17,092 | closed | LayoutLMv2Processor: ensure 1-to-1 mapping between images and samples in case of overflowing tokens | # What does this PR do?
Fixes #13554
Problem re-summarized: when `return_overflowing_tokens` is set to True (with truncation), `LayoutLMv2Processor` breaks up sequences that are too long into multiple `input_ids` sequences, causing a length mismatch between `input_ids` (longer in the case of overflowing tokens) and `images`.
This fix would ensure the 1-to-1 mapping between the `images` and `input_ids`.
**Reproducible Example:** (The assertion at the end would fail without the fix, pass with the fix)
```
import transformers
from PIL import Image
from transformers import LayoutLMv2Processor
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D, load_dataset
import torch
datasets = load_dataset("nielsr/funsd")
labels = datasets['train'].features['ner_tags'].feature.names
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True,
return_overflowing_tokens=True,
stride=50,
return_offsets_mapping=True,
return_tensors="pt")
return encoded_inputs
train_data = preprocess_data(datasets["train"])
# this assert would fail without this PR fix.
assert len(train_data["image"]) == len(train_data["input_ids"])
```
## Required Input from Reviewers
Right now, the LayoutLMv2Processor would **return a list** for `encoded_inputs["image"]`, regardless of the value of `return_tensors`. If we want it to return a torch tensor when `return_tensors == "pt"`, we have to `torch.stack` the list (and do something similar to support "np" and "tf").
Should I implement this in `get_overflowing_images`? Or should I just leave the return type as list and just print a warning?
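For concreteness, a rough sketch of what the stacking variant could look like — this is hypothetical code, not what the PR currently does, and it assumes the fast tokenizer's `overflow_to_sample_mapping` output is available:
```python
# Hypothetical sketch: duplicate each image once per produced input_ids sequence,
# then optionally stack when return_tensors == "pt".
import torch

def get_overflowing_images(images, overflow_to_sample_mapping, return_tensors=None):
    # one image per overflowing sample, in the same order as the input_ids sequences
    images_with_overflow = [images[sample_idx] for sample_idx in overflow_to_sample_mapping]
    if return_tensors == "pt":
        # assumes each entry is already a processed pixel tensor of identical shape
        return torch.stack(images_with_overflow)
    return images_with_overflow
```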
## Who can review?
@NielsRogge @sgugger @LysandreJik
## P.S.
The `test_processor_case_1` in `test_processor_layoutlmv2.py` fails before this PR. I'd be happy to look at it as well but it's unrelated to this PR. | 05-05-2022 05:21:34 | 05-05-2022 05:21:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution!<|||||>Thanks for handling this @ghlai9665 |
transformers | 17,091 | closed | Fix MLflowCallback and add support for MLFLOW_EXPERIMENT_NAME | # What does this PR do?
This PR includes the following:
- Resolves #12841: uses the MLFLOW_EXPERIMENT_NAME environment variable and the mlflow.set_experiment() method to ensure the experiment is created if it does not exist already.
- Fixes #17066: Checks properly for an active run using mlflow.active_run() (Bug introduced in #16131).
Supersedes #17067 due to CI build errors.
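For reference, a rough sketch of the two behaviors described above (standard mlflow API calls; this is not the PR's exact code):
```python
# Sketch of the intended setup logic: honor MLFLOW_EXPERIMENT_NAME and only start
# a run when none is already active.
import os
import mlflow

def setup_mlflow(run_name=None):
    experiment_name = os.getenv("MLFLOW_EXPERIMENT_NAME")
    if experiment_name is not None:
        # set_experiment creates the experiment if it does not exist yet
        mlflow.set_experiment(experiment_name)
    if mlflow.active_run() is None:
        mlflow.start_run(run_name=run_name)
```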
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-04-2022 21:30:57 | 05-04-2022 21:30:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Managed to activate the test by checking out your PR and pushing it to a new branch in the `transformers` namespace. I have no idea why it worked.
Anyhow, please run `make style` on your branch to fix the code quality issue and we should be good to merge! (you'll need to install the latest versions of our deps for styling with `pip install .[quality] -U` )<|||||>@sgugger All checks passed! 🤗 <|||||>Thanks a lot! |
transformers | 17,090 | closed | Fix missing "models" in pipeline test module | # What does this PR do?
My PR #17034 forgot to update
```
package=f"tests.{model_slug}"
```
to
```
package=f"tests.models.{model_slug}"
```
which lead to a lot of pipeline tests being skipped.
Fixed it here. | 05-04-2022 19:29:52 | 05-04-2022 19:29:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,089 | closed | CUDA out of memory in evaluation_loop | Hi,
I'm trying to run the seq2seq question answering example from the repo ([here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_seq2seq_qa.py)), and while training is fine, in the evaluation loop I get CUDA out of memory. I've read that others asked about this previously and were advised to use `eval_accumulation_steps=10`. I tried that and still get the error.
Specifically:
```
File "question-answering/run_seq2seq_qa.py", line 740, in <module>
main()
File "question-answering/run_seq2seq_qa.py", line 699, in main
metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")
File "/question-answering/trainer_seq2seq_qa.py", line 65, in evaluate
ignore_keys=ignore_keys,
File "/venv/lib/python3.7/site-packages/transformers/trainer.py", line 2515, in evaluation_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
File "/venv/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 97, in nested_concat
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
File "/venv/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 97, in <genexpr>
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
File "/venv/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 99, in nested_concat
return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
File "/venv/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 61, in torch_pad_and_concatenate
return torch.cat((tensor1, tensor2), dim=0)
RuntimeError: CUDA out of memory. Tried to allocate 6.26 GiB (GPU 0; 23.69 GiB total capacity; 15.20 GiB already allocated; 96.62 MiB free; 21.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
Would appriciate any suggestions.
| 05-04-2022 18:44:26 | 05-04-2022 18:44:26 | +1
Update: assigning a value to eval_accumulation_steps works for my case. @OfirArviv Did you try
`--eval_accumulation_steps 1`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm experiencing the same issue.
When setting --eval_accumulation_steps 1, the process gets killed after few evaluation steps.<|||||>eval_accumulation_steps的大小可有效缓解,这是因为eval_accumulation_steps是设置多少个steps将eval的结果move到cpu,如果不设置,默认是在GPU不断地按step累加。如果你的eval输出的结果总和大于显存那么就会出现CUDA OOM,当然如果你的cpu内存也不够大,同样会出现cpu的OOM.具体计算如下,如果是MLM任务,5000条eval样本,max_seq_len=256,vocab_size=21128(bert中文词汇表大小),那么你的预测结果如下:
shape = (5000,256,21128)
memory = 5000*256*21128*2 (fp16, 2 bytes per 16-bit float) / 1000 / 1024 / 1024 ≈ 52 GB
With fp32 this doubles to about 104 GB — not only does GPU memory not hold it, CPU memory does not either.
An effective fix is to override the eval loop: since the end result is an argmax that turns the shape (5000, 256, 21128) into (5000, 256), we can apply that step before the concatenation.
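A minimal sketch of that idea, assuming a transformers version recent enough to expose the `preprocess_logits_for_metrics` hook on `Trainer` (otherwise you would override `evaluation_loop` directly):
```python
# Collapse (batch, seq_len, vocab_size) logits to (batch, seq_len) token ids before
# the Trainer accumulates/concatenates predictions, so the vocab-sized tensor is
# never stored across the whole eval set.
import torch

def preprocess_logits_for_metrics(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    if isinstance(logits, tuple):  # some models return extra tensors alongside the logits
        logits = logits[0]
    return logits.argmax(dim=-1)

# Wire it up in your existing training script (model/eval_dataset/training_args are yours):
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     eval_dataset=eval_dataset,
#     preprocess_logits_for_metrics=preprocess_logits_for_metrics,
# )
```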
<|||||>> I'm experiencing the same issue. When setting --eval_accumulation_steps 1, the process gets killed after few evaluation steps.
for me it runs for exactly 9 steps, with 1 as eval batch size |
transformers | 17,088 | closed | Add OPT | # What does this PR do?
A PR to add the OPT-350m model to the transformers library :hugs:
We tested the logits and the generation script on the DGX, and we match the results from [metaseq](https://github.com/patrickvonplaten/metaseq/blob/main/README.md)
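As a rough illustration of how the port is exercised (hypothetical usage — the checkpoint name and auto class are assumptions, not final):
```python
# Compare generations against metaseq outputs once the weights are converted.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # assumed Hub name
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
generated = model.generate(**inputs, max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```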
## TO DOs
- [ ] code lint + integration tests
- [ ] discuss how to import the logits for the hardcoded tests
- [ ] correct documentation
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @LysandreJik @patrickvonplaten @stas00 @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
| 05-04-2022 15:55:28 | 05-04-2022 15:55:28 | Also I have noticed something here: https://github.com/younesbelkada/transformers/blob/13632212d5bacc83bb9a9b8b6816a70bde865dbc/src/transformers/models/opt/modeling_opt.py#L249
Initially in our implementation the attention mask has a shape `batch_size, 1, seq_len, seq_len`, whereas in the metaseq implementation the mask is identical across the whole batch and therefore has a shape of `seq_len, seq_len`. Is it common practice for the attention mask to be identical across the whole batch?<|||||>> identical
I shouldn't but I think for causal LM it's fine to do this since the attention_mask is usually just the causal mask which is the same for every batch. Later tokens can never attend to earlier tokens when there is a causal mask, so it's often good enough to use that even when tokens are padded<|||||>Thank you very much for the explanation regarding the attention mask
Since I took the template from BART I think that it considered this attention strategy by default<|||||>I am facing [this CI issue](https://app.circleci.com/pipelines/github/huggingface/transformers/39343/workflows/6c764409-9785-4c68-8a29-e05ac79adb88/jobs/441415) when running `make repo-consistency` and I can't figure out what is happening.. Do you have any idea by change what could cause this issue?

<|||||>Linking this issue https://github.com/facebookresearch/metaseq/issues/31
to understand how to load properly sharded models<|||||>Crashing the party here (apologies for not having more context on this codebase): is it normal for model imports to require >8k LOC added? Or is there something in particular about OPT models that requires all this additional glue code?
If this is is OPT's fault - is there anything we can do (on the [metaseq](https://github.com/facebookresearch/metaseq) side) to bridge this gap?<|||||>> Crashing the party here (apologies for not having more context on this codebase): is it normal for model imports to require >8k LOC added? Or is there something in particular about OPT models that requires all this additional glue code?
>
> If this is is OPT's fault - is there anything we can do (on the [metaseq](https://github.com/facebookresearch/metaseq) side) to bridge this gap?
Hey @suchenzang,
Cool that you're taking a look here! There is lot of code that was copied in a first pass from Bart that should be removed. This will make the code much more concise. Also note that we'll add the model both in TF and Flax as well.
On the other hand, you guys are the authors of OPT, so you should lead the addition of OPT to Transformers if you want to. Would you like to open a PR yourself instead that adds the model instead of us doing it? We took the initiative here because of timing, but if you would like to take over the integration, please let us know :-)<|||||>Just cleaned up a bit the code - hopefully more tests should pass now - I have to add and correct some documentation<|||||>> On the other hand, you guys are the authors of OPT, so you should lead the addition of OPT to Transformers if you want to. Would you like to open a PR yourself instead that adds the model instead of us doing it? We took the initiative here because of timing, but if you would like to take over the integration, please let us know :-)
@patrickvonplaten Not at all - love seeing this initiative (and we likely don't have bandwidth to take on integration work)! We have things to clean up on the metaseq side too, so I'm generally looking for feedback on this front as more folks try out our models.<|||||>Just out of curiosity, and to prevent duplicate work: Since fairseq models can be converted 1 on 1 to the XGLM model type, would that be possible with this one too or is this model too different to warrant a totally new model type?
Secondly, this is just the 350m model. Are other variants of this model going to be supported too? Hate to see this merged without full OPT model support.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Here the checkpoints that should be converted: https://huggingface.co/patrickvonplaten/opt_metaseq_6700m/tree/main/model<|||||>Should we merge? 🤗<|||||>Actually one sec, I'll update the model docs a bit again |
transformers | 17,087 | closed | Younes opt 350 m | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-04-2022 15:54:02 | 05-04-2022 15:54:02 | |
transformers | 17,086 | closed | Broken 'export' using transformers.onnx | ### Feature request
HuggingFace allows an indirect way to export models that aren't supported by the current version of transformers[onnx], mentioned on huggingface website [here](https://huggingface.co/docs/transformers/serialization#implementing-a-custom-onnx-configuration).
According to that page, this method also provides a way to change the head of the model based on the user's needs.
Here is the original code from that page, covering generating the custom ONNX configuration, exporting, and validating the exported model.
~~~
from typing import Mapping, OrderedDict
from transformers.onnx import OnnxConfig
class DistilBertOnnxConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
return OrderedDict(
[
("input_ids", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
from transformers import AutoConfig
model_ckpt = "distilbert-base-uncased"
config = AutoConfig.from_pretrained(model_ckpt)
onnx_config_for_seq_clf = DistilBertOnnxConfig(config, task="sequence-classification")
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
onnx_path = Path("C:/Users/Hp/zsc/onnx_deberta/model4.onnx")
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config_for_seq_clf, onnx_config_for_seq_clf.default_onnx_opset, onnx_path)
import onnx
onnx_model = onnx.load("C:/Users/Hp/zsc/onnx_deberta/model4.onnx")
onnx.checker.check_model(onnx_model)
from transformers.onnx import validate_model_outputs
validate_model_outputs(
onnx_config_for_seq_clf, tokenizer, base_model, onnx_path, onnx_outputs, onnx_config_for_seq_clf.atol_for_validation
)
~~~
But for some reason, the **task** doesn't seem to work. If you only export and then directly check the output shape of the model, it is always the same: the shape of the last hidden state of the model.
The validation is successful only when the model is exported with the default task, i.e. **task=default**.
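One way to see this is to run the exported graph with onnxruntime and look at the first output's shape (a sanity check, assuming onnxruntime is installed; variable names follow the snippet above):
```python
import onnxruntime as ort

session = ort.InferenceSession(str(onnx_path))
dummy = tokenizer("a quick test", return_tensors="np")
outputs = session.run(None, {k: v for k, v in dict(dummy).items() if k in {"input_ids", "attention_mask"}})
# prints (1, seq_len, hidden_size) here instead of the expected (1, num_labels)
print(outputs[0].shape)
```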
### Motivation
This is a related issue raised by me prior to this, mentioned [here](https://github.com/huggingface/transformers/issues/16982), which points out the the wrong output shape generated even after specifying the tasks.
This is possibly a problem where the export does not attach the task-specific head to the exported model, even when a task is specified.
| 05-04-2022 14:40:38 | 05-04-2022 14:40:38 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,085 | closed | Minor change about ARCHIVE_LIST for TF Data2Vec | # What does this PR do?
This PR just avoids declaring `DATA2VEC_VISION_PRETRAINED_MODEL_ARCHIVE_LIST` in the TF test file.
To be clear, I don't mean that this PR introduces it in the model file — it was already there from that PR.
As we discussed before, it's better to just drop this variable, and use the checkpoint name directly in `test_model_from_pretrained`. I can change to that, just let me know :-)
| 05-04-2022 14:01:59 | 05-04-2022 14:01:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,084 | closed | Update the _test_requirements to use the dev version of Accelerate | # What does this PR do?
Since there is rapid development of the Accelerate package, this includes the git version as a requirement for the examples in `_test_requirements.txt` for the examples, and also updates the CI to use that for installation only.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @ydshieh
| 05-04-2022 13:30:15 | 05-04-2022 13:30:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,083 | closed | split single_gpu and multi_gpu | # What does this PR do?
Fix the scheduled CI issue caused by the 256 limits (jobs generated from matrix).
Note that the workflow run page has a graph that has no single-gpu and multi-gpu on it. But on the left side, the job names have matrix mentioned.
<img width="271" alt="Screenshot 2022-05-04 145601" src="https://user-images.githubusercontent.com/2521628/166685617-6c008556-dd44-4369-9b6a-b7b1c22cbc9d.png">
| 05-04-2022 12:45:30 | 05-04-2022 12:45:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR! Did you launch it as a trial to see if it works? I see the following should be completed, on line 303:
>
> [setup, run_tests_gpu, run_examples_gpu, run_pipelines_tf_gpu, run_pipelines_torch_gpu, run_all_tests_torch_cuda_extensions_gpu]
You are right, that line should be changed. I haven't launched it (just tried with a dummy example). I will launch it now.<|||||>You can launch it with only 1-2 models in each run, for example by updating this line:
```
echo "::set-output name=matrix::$(python3 -c 'import os; tests = os.getcwd(); model_tests = os.listdir(os.path.join(tests, "models")); d1 = sorted(list(filter(os.path.isdir, os.listdir(tests)))); d2 = sorted(list(filter(os.path.isdir, [f"models/{x}" for x in model_tests]))); d1.remove("models"); d = d2 + d1; print(d)')"
```
to
```
echo "::set-output name=matrix::$(python3 -c 'import os; tests = os.getcwd(); model_tests = os.listdir(os.path.join(tests, "models"))[:2]; d1 = sorted(list(filter(os.path.isdir, os.listdir(tests)))); d2 = sorted(list(filter(os.path.isdir, [f"models/{x}" for x in model_tests]))); d1.remove("models"); d = d2 + d1; print(d)')"
```
This way you'll test the full behavior without having 12-hour long iterations.<|||||>It took sometime, but the run looks good.
https://github.com/huggingface/transformers/actions/runs/2276209307<|||||>Looks good, thanks @ydshieh! |
transformers | 17,082 | closed | Fix DeBERTa token_type_ids | # What does this PR do?
This PR fixes #15735. It changes the behavior of `DebertaTokenizer` and `DebertaTokenizerFast` when passing pair inputs. Before, the token type IDs were all `0`. This PR changes this so that the `token_type_id`s for the tokens of the second sentence are `1`.
It also adds a test case to test this behavior (`DebertaTokenizationTest.test_token_type_ids`). Failed before, passes now.
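For illustration, the intended behavior looks roughly like this (a quick check, not the PR's actual test code):
```python
from transformers import DebertaTokenizer

tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
encoding = tokenizer("first sentence", "second sentence")
# 0s for the first sentence's tokens, 1s for the second sentence's tokens
print(encoding["token_type_ids"])
```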
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/15735
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik? | 05-04-2022 11:51:41 | 05-04-2022 11:51:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Great, thanks for looking into this! I also checked Microsoft's implementation and [it looks like](https://github.com/microsoft/DeBERTa/blob/c558ad99373dac695128c9ec45f39869aafd374e/DeBERTa/deberta/deberta.py#L77) they use `1` for sentence B as well 😊 |
transformers | 17,081 | closed | Error in mBART50 fast tokenizer `convert_tokens_to_string` method | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
>>> from transformers import AutoTokenizer
# Problematic behavior with default fast tokenizer
>>> t = AutoTokenizer.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")
>>> t.convert_tokens_to_string(["This", " is", " a", " test"])
['This', ' is', ' a', ' test'] # Conversion didn't work
>>> t.convert_tokens_to_string([["This", " is", " a", " test"], ["Other", " test", "!"]])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gsarti/.cache/pypoetry/virtualenvs/inseq-PzwjmCYf-py3.8/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 534, in convert_tokens_to_string
return self.backend_tokenizer.decoder.decode(tokens)
TypeError: Can't convert ['This', 'is', 'a', 'test'] to PyString
# Normal behavior with slow tokenizer
>>> t = AutoTokenizer.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX", use_fast=False)
>>> t.convert_tokens_to_string(["This", " is", " a", " test"])
"This is a test" # Correctly converted
>>> t.convert_tokens_to_string([["This", " is", " a", " test"], ["Other", " test", "!"]])
["This is a test", "Other test!"] # Correctly converted
```
### Expected behavior
I would expect the behavior of the fast tokenizer to be consistent with the one of the original slow tokenizer. At the moment, the `convert_tokens_to_string` method doesn't achieve the expected results.
I am not facing the same issues when using Marian models, but it is likely that the problem is not restricted only to the `mBART50` class. | 05-04-2022 09:49:17 | 05-04-2022 09:49:17 | Thanks a lot for the detailed issue, I, unfortunately, can't reproduce the error (as you can see on [this google colab](https://colab.research.google.com/drive/1g2LjQgRFH569E4E7lC0-EOmV3yNEDCPD?usp=sharing)).
My assumption is that you would have the version of `tokenizers` that has been deprecated since (`0.12.0`). Could you share with me the output of `pip freeze | grep "tokenizers"` (or by any other means the version of `tokenizers` that you have)? If the result is `tokenizers==0.12.0` I advise you to simply uninstall this version and reinstall the latest version (`0.12.1`).
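(For reference, the installed version can also be checked directly from Python:)
```python
import tokenizers

print(tokenizers.__version__)  # anything other than the deprecated 0.12.0 release should be fine
```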
Keep me updated! :smile: <|||||>Thank you, it is indeed the case! Using `tokenizers==0.12.1` works. |
transformers | 17,080 | closed | Update _toctree.yml | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-04-2022 06:58:16 | 05-04-2022 06:58:16 | This can't be added without the traduction of the `philosophy` page in Spanish, as you can see with the failing check above.<|||||>> This can't be added without the traduction of the `philosophy` page in Spanish, as you can see with the failing check above.
I added it here: https://github.com/huggingface/transformers/pull/16922<|||||>Everything needs to be in the same PR. |
transformers | 17,079 | closed | comet ml integration - add option to continue training | ### Feature request
when continuing training, it would be great to be able to also continue to log the training metrics to the comet ml experiment. comet_ml has a feature for it : it uses the class ExistingExperiment ( see https://www.comet.ml/docs/python-sdk/ExistingExperiment/ )
Thanks
### Motivation
It would be much more clear and organized if I could continue to log the loss in the same graph when I continue the training of a model.
### Your contribution
No contribution | 05-04-2022 06:56:41 | 05-04-2022 06:56:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,078 | closed | Added BigBirdPegasus onnx config | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added BigBirdPegasus OnnxConfig to make this model available for conversion.
@ChainYo
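Once this config is in, the export could presumably be driven like any other supported architecture. A rough, unverified sketch (the checkpoint and feature name are assumptions, not part of this PR):
```python
from pathlib import Path

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.onnx import FeaturesManager, export

ckpt = "google/bigbird-pegasus-large-arxiv"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

_, config_ctor = FeaturesManager.check_supported_model_or_raise(model, feature="seq2seq-lm")
onnx_config = config_ctor(model.config)
export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("bigbird_pegasus.onnx"))
```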
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- ~~[ ] Did you write any new necessary tests?~~
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-04-2022 05:22:14 | 05-04-2022 05:22:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17078). All of your documentation changes will be reflected on that endpoint.<|||||>Hello @nandwalritik nice PR!
I'm tagging maintainers for discussion @lewtun @sgugger .
If you have time when the work is ended, please try to convert one `BigBirdPegasus` ONNX model and upload it to the [ONNXConfig for all](https://huggingface.co/OWG) organisation, it would be awesome! |
transformers | 17,077 | closed | When will the official 4.19 release be? | ### Feature request
The official 4.19 release
### Motivation
\
### Your contribution
\ | 05-04-2022 04:49:25 | 05-04-2022 04:49:25 | Likely Thursday!<|||||>Thank you very much!<|||||>It was released a few hours ago.<|||||>OK~ Thank you so much for your help! |
transformers | 17,076 | closed | [ fast_tokenizers.mdx ] - Added translation to portuguese to tutorial | # What does this PR do?
Creates folder pt in docs/source for translating documentation to Portuguese
Currently, only the installation.mdx file was translated as of this PR.
Fixes issue #16824
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests? | 05-04-2022 00:30:45 | 05-04-2022 00:30:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Fellip15 muito obrigado! I will be reviewing it.<|||||>@sgugger LGTM 🔥 |
transformers | 17,075 | closed | Why training time is much more than same task in Fairseq? | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-4.18.0-305.28.1.el8_4.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Device: Tesla V100-PCIE-32GB * 1
## For HF's Trainer:
```
python run_summarization.py \
--model_name_or_path facebook/bart-large \
--do_train \
--do_eval \
--do_predict \
--dataset_name xsum \
--output_dir /scratch/tw2112/datas/HFPLAY2 \
--overwrite_output_dir \
--max_grad_norm 0.1 \
--label_smoothing_factor 0.1 \
--fp16 True \
--learning_rate 3e-05 \
--lr_scheduler_type polynomial \
--greater_is_better True \
--warmup_steps 500 \
--num_train_epochs 1 \
--max_source_length 1024 \
--max_target_length 1024 \
--val_max_target_length 80 \
--gradient_accumulation_steps 1 \
--weight_decay 0.01 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 24 \
--num_beams 6 \
--save_strategy steps \
--save_steps 2000 \
--evaluation_strategy steps \
--eval_steps 2000 \
--load_best_model_at_end True \
--metric_for_best_model loss \
--greater_is_better False
```
The estimated time for 1 epoch is about 3.3h
## For FairSeq:
```
WARMUP_UPDATES=500
LR=3e-05
MAX_TOKENS=2048
BART_PATH=/scratch/tw2112/codes/models/bart.large/model.pt
fairseq-train xsum \
--restore-file $BART_PATH \
--task translation \
--source-lang source --target-lang target \
--truncate-source \
--batch-size 8 \
--layernorm-embedding \
--share-all-embeddings \
--share-decoder-input-output-embed \
--reset-optimizer --reset-dataloader --reset-meters \
--required-batch-size-multiple 1 \
--arch bart_large \
--criterion label_smoothed_cross_entropy \
--label-smoothing 0.1 \
--dropout 0.1 --attention-dropout 0.1 \
--weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \
--clip-norm 0.1 \
--lr-scheduler polynomial_decay --lr $LR --warmup-updates $WARMUP_UPDATES \
--fp16 \
--max-epoch 1\
--skip-invalid-size-inputs-valid-test \
--find-unused-parameters;
```
The estimated time for 1 epoch is about 1.5h
### Expected behavior
```shell
I checked the config namespace in fairseq's log and changed the settings according to the example in Fairseq repo. I change both 2 commands to train on XSUM, 1 epoch, batch=8, gradient_accumulation_steps=1, and HF's time for 1 epoch is 3 times slower than fairseq's. Did I do anything wrong?
```
| 05-04-2022 00:06:13 | 05-04-2022 00:06:13 | Seems I didn't set the fp16 correctly----I should start the training by `torchrun --nproc_per_node=4 ` and set `--sharded_ddp zero_dp_3 `.<|||||>@ElderWanng Hey, I'm fairly new to machine learning. Can you tell me where do you run `torchrun --nproc_per_node=4` and `--sharded_ddp zero_dp_3`? Thank you<|||||>> @ElderWanng Hey, I'm fairly new to machine learning. Can you tell me where do you run `torchrun --nproc_per_node=4` and `--sharded_ddp zero_dp_3`? Thank you
torchrun is a substitutive cli command for 'python -m torch.distributed.launch' in newest pytorch 1.11, see in [https://pytorch.org/docs/stable/elastic/run.html](url)
BTW, I'm using deepspeed now, it's faster. The method above is for fairscale.
The start command is
`deepspeed --num_gpus=4 run_summarization.py \
--model_name_or_path facebook/bart-large \
--do_train --do_predict \
--dataset_name xsum \
--overwrite_output_dir \
--learning_rate 3e-05 \
--label_smoothing_factor 0.1 \
--greater_is_better True \
--warmup_steps 500 \
--num_train_epochs 10 \
--max_source_length 1024 \
--max_target_length 1024 \
--val_max_target_length 80 \
--gradient_accumulation_steps 2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 32 \
--num_beams 6 \
--save_strategy steps \
--save_steps 750 \
--evaluation_strategy steps \
--eval_steps 750 \
--load_best_model_at_end True \
--metric_for_best_model loss \
--greater_is_better False \
--predict_with_generate \
--fp16 True \
--deepspeed deepspeed_config/ds_config_zero3.json`
the config.json could find in `https://github.com/huggingface/transformers/tree/main/tests/deepspeed`<|||||>This is awesome, thank you for sharing :) Do you think it's possible to include deepspeed or fairscale in the code, like maybe a parameter in Trainer args? Or the only option is to write everything in a py file then run with torchrun or deepspeed?<|||||>> This is awesome, thank you for sharing :) Do you think it's possible to include deepspeed or fairscale in the code, like maybe a parameter in Trainer args? Or the only option is to write everything in a py file then run with torchrun or deepspeed?
In HF Trainer, I didn't see any workaround (I didn't dive in too much, maybe answer is yes). But I wrote another version in pytorch-lightning, which makes 'launcher as arguments' possible. And I reproduce the same rouge score on XSUM, the training time is the same. If you want I'm glad to share <|||||>> This is awesome, thank you for sharing :) Do you think it's possible to include deepspeed or fairscale in the code, like maybe a parameter in Trainer args? Or the only option is to write everything in a py file then run with torchrun or deepspeed?
Oh BTW, fairscale is as an extension for torch now. If not to activate fairscale, I still use torchrun as the launcher instead of `python run_summarization.py`. For 'launcher as arguments', the answer is yes at least on fairscale. U could simply switch on/off by change `--strategy`. I think sharded training is like a default training setting instead of naïve DDP in 2022.
For Deepspeed, I'm not sure whether the HF trainer maintains the clean cli args for us. I just followed the tutorial in [https://huggingface.co/docs/transformers/main_classes/trainer](url)<|||||>Thank you so much for the insight! I'm currently fine-tuning on Colab and the standard is to run code by cell so I was looking for ways to integrate the speedup in code. But actually now that I look back, fairseq kinda just wrapped most of the training code into the train.py file
Anyway, I'm glad that someone confirmed that Huggingface training can be as fast as Fairseq
<|||||>> Thank you so much for the insight! I'm currently fine-tuning on Colab and the standard is to run code by cell so I was looking for ways to integrate the speedup in code. But actually now that I look back, fairseq kinda just wrapped most of the training code into the train.py file
>
> Anyway, I'm glad that someone confirmed that Huggingface training can be as fast as Fairseq
Now I understand why you have to use args in pure python env. My advice is to try to activate fp16 training, that help a lot. I'm not sure if colab supports fp16 training. I used 4 x RTX8000 card, 10 epochs, about 40mins/epoch. <|||||>FP16 is activated and I run the code on one V100 GPU. When I was training fairseq, I enabled FP16 too.
HuggingFace seemed to take a lot longer for one Transformer Base epoch (1 hour) while Fairseq took around just 20-30 min for one Transformer Big epoch. If only HuggingFace can improve the speed by default, it will totally obliterate Fairseq<|||||>fairscale or deepspeed would make it to about 40min (still can't catch up with fairseq from 2 years before)<|||||>So from your experience with all the optimization you have, it's still not as fast as fairseq? How much slower would you say the most optimal HF training compared with the normal fairseq?<|||||>I forget it. It is close enough. I think the sacrifice of training time is worth when it compared to the inconvenience of fairseq.<|||||>Ah ok, good to hear, I'll apply all the optimization you mentioned. Hope it will go down to at least 40m an epoch. <|||||>Hey, it's me again, I tried a few approaches but surprisingly the most effective one is very simple. I simple increase the batch size in `--per_device_train_batch_size`. If you double the batch size, the training is literally twice as fast. One ML engineer told me bigger batch size will enable more efficient parallelization. <|||||>> Hey, it's me again, I tried a few approaches but surprisingly the most effective one is very simple. I simple increase the batch size in `--per_device_train_batch_size`. If you double the batch size, the training is literally twice as fast. One ML engineer told me bigger batch size will enable more efficient parallelization.
lol, the fairscale or deepspeed will save your GPURAM----by reducing unnecessary optimizer state overhead. Then you could press a bigger batch into training. Seems I forgot to make it clear: open those advanced settings and double the batch size. The fairseq use dynamic batchsize, loading more tokens by current available GPU RAM. But this code is hard to extract and apply to other frameworks. ( TBH I think fairseq in engineering aspect is really hard to say is a good framework ). For me the best settings in HFTrainer is deepzero stage2, no CPU offload, then adjust the batch size to occupy all GPU ram. <|||||>What do you want me to do?
On Wed, 18 May 2565 BE at 5:33 am, ElderWanng ***@***.***>
wrote:
> Hey, it's me again, I tried a few approaches but surprisingly the most
> effective one is very simple. I simple increase the batch size in
> --per_device_train_batch_size. If you double the batch size, the training
> is literally twice as fast. One ML engineer told me bigger batch size will
> enable more efficient parallelization.
>
> lol, the fairscale or deepspeed will save your GPURAM----by reducing
> unnecessary optimizer state overhead. Then you could press a bigger batch
> into training. Seems I forgot to make it clear: open those advanced
> settings and double the batch size. The fairseq use dynamic batchsize,
> loading more tokens by current available GPU RAM. But this code is hard to
> extract and apply to other frameworks. ( TBH I think fairseq in engineering
> aspect is really hard to say is a good framework ). For me the best
> settings in HFTrainer is deepzero stage2, no CPU offload, then adjust the
> batch size to occupy all GPU ram.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/17075#issuecomment-1129241051>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AWVHH4XFX6M5FG5TCQKTZ3DVKPX7XANCNFSM5VANUWTQ>
> .
> You are receiving this because you are subscribed to this thread.Message
> ID: ***@***.***>
>
--
Christina Vongphit iPhone
<|||||>> > Hey, it's me again, I tried a few approaches but surprisingly the most effective one is very simple. I simple increase the batch size in `--per_device_train_batch_size`. If you double the batch size, the training is literally twice as fast. One ML engineer told me bigger batch size will enable more efficient parallelization.
>
> lol, the fairscale or deepspeed will save your GPURAM----by reducing unnecessary optimizer state overhead. Then you could press a bigger batch into training. Seems I forgot to make it clear: open those advanced settings and double the batch size. The fairseq use dynamic batchsize, loading more tokens by current available GPU RAM. But this code is hard to extract and apply to other frameworks. ( TBH I think fairseq in engineering aspect is really hard to say is a good framework ). For me the best settings in HFTrainer is deepzero stage2, no CPU offload, then adjust the batch size to occupy all GPU ram.
Oh, now I really see how fairscale or deepspeed can reduce GPU RAM. Also, there is one issue I saw in HuggingFace but I don't know if you also encountered it. The GPU RAM seemed to be accumulated when I go from the first epoch to the second epoch, which crashed the training. This totally didn't happen with fairseq. After I include the data path in fairseq, then the GPU usage is always constant, never spiked. I wonder if you have any insight about this issue or if you just YOLO and use deepspeed and it solved everything lol<|||||>u might set --eval_accumulation_steps=1,<|||||>> u might set --eval_accumulation_steps=1,
Oh, this sound very interseting.
I also saw this parameter when training. `Gradient accumulation steps = 16`.
Do you think it has something to do with epoch ram accumulation too?<|||||>> > > Hey, it's me again, I tried a few approaches but surprisingly the most effective one is very simple. I simple increase the batch size in `--per_device_train_batch_size`. If you double the batch size, the training is literally twice as fast. One ML engineer told me bigger batch size will enable more efficient parallelization.
> >
> >
> > lol, the fairscale or deepspeed will save your GPURAM----by reducing unnecessary optimizer state overhead. Then you could press a bigger batch into training. Seems I forgot to make it clear: open those advanced settings and double the batch size. The fairseq use dynamic batchsize, loading more tokens by current available GPU RAM. But this code is hard to extract and apply to other frameworks. ( TBH I think fairseq in engineering aspect is really hard to say is a good framework ). For me the best settings in HFTrainer is deepzero stage2, no CPU offload, then adjust the batch size to occupy all GPU ram.
>
> Oh, now I really see how fairscale or deepspeed can reduce GPU RAM. Also, there is one issue I saw in HuggingFace but I don't know if you also encountered it. The GPU RAM seemed to be accumulated when I go from the first epoch to the second epoch, which crashed the training. This totally didn't happen with fairseq. After I include the data path in fairseq, then the GPU usage is always constant, never spiked. I wonder if you have any insight about this issue or if you just YOLO and use deepspeed and it solved everything lol
there is a "context manager" is fairseq: when catch cuda OOM error, trying to restore training. I didn't see similar logic in HF. That's what I said about FS having many engineering optimizations. So usually I leave 10% or so RAM margin by adjusting batch size in case of OOM. In terms of exploiting video memory, no other framework is as good as FS. <|||||>> > u might set --eval_accumulation_steps=1,
>
> Oh, this sound very interseting. I also saw this parameter when training. `Gradient accumulation steps = 16`.
>
> Do you think it has something to do with epoch ram accumulation too?
update_freq is just a trick to simulate "multi-card" training. In their original paper they fine-tuned on an 8-card node, so this coefficient is 2. I only have a 4-card node, so I set it to 4. When it comes to a 1-card node, you should set it to 16. I think it has nothing to do with RAM usage.
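The arithmetic behind that choice, as a tiny illustration (using the numbers discussed in this thread):
```python
per_device_batch = 8
for n_gpus, update_freq in [(8, 2), (4, 4), (1, 16)]:
    effective = per_device_batch * n_gpus * update_freq
    print(f"{n_gpus} GPU(s) x update_freq {update_freq} -> effective batch {effective}")  # 128 in every case
```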
By this, you mean the batch size is hardcoded right?
Also, I didn't realize gradient accumulation steps is --update-freq in Fairseq. Thank you! |
transformers | 17,074 | closed | quicktour.mdx en -> pt translation | # What does this PR do?
Translate en/quicktour.mdx into a brazilian portuguese version.
May be worth checking some words and terms translation to maintain continuity between all portuguese translations.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-03-2022 23:37:26 | 05-03-2022 23:37:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thank you @vitorfrois for the PR! Sorry for the late reply I lost track of the PR. Relates to #16824.
@sgugger LGTM :). I think the test error in "`Build PR Documentation / build / build_pr_documentation`" relates mostly to a JS issue.<|||||>Thanks a lot! |
transformers | 17,073 | closed | Fix pipeline doctests | Clean PR of #17013 to fix doctests for the `pipeline` tutorial:
- Updated output values in vision example as suggested by @ydshieh.
- Fixed output format issues. Showing each output result on a new line fails :( | 05-03-2022 23:28:39 | 05-03-2022 23:28:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(I don't really follow the `... ) # doctest: +SKIP`) thing. If you have a bit time, it would be great if you can explain a bit to me what issue we have previously.<|||||>Generation is random @ydshieh so the results will be different for each run. `# doctest: +SKIP` tells `doctest` to not test this particular example. |
transformers | 17,072 | closed | KeyError "labels" occurring in distill_classifier.py official example notebook | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.27
- Python version: 3.8.2
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@VictorSanh
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Running the official [Distilling Zero Shot Classification.ipynb ](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing#scrollTo=ECt06ndcnpyb) results in a KeyError: 'labels'
Here is the full output for reference:
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
05/03/2022 09:50:19 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
05/03/2022 09:50:19 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=./distilbert-base-uncased-agnews-student/runs/May03_09-50-19_CHI-LX-L-035,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
optim=OptimizerNames.ADAMW_HF,
output_dir=./distilbert-base-uncased-agnews-student,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=128,
per_device_train_batch_size=32,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=./distilbert-base-uncased-agnews-student,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=0,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
05/03/2022 09:50:19 - INFO - __main__ - Generating predictions from zero-shot teacher model
[INFO|configuration_utils.py:654] 2022-05-03 09:50:19,224 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:690] 2022-05-03 09:50:19,225 >> Model config RobertaConfig {
"_name_or_path": "roberta-large-mnli",
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.18.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|modeling_utils.py:1772] 2022-05-03 09:50:19,391 >> loading weights file https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at /home/eknochenhauer/.cache/huggingface/transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
[WARNING|modeling_utils.py:2048] 2022-05-03 09:50:22,672 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:2065] 2022-05-03 09:50:22,672 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training.
[INFO|tokenization_auto.py:344] 2022-05-03 09:50:22,808 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:654] 2022-05-03 09:50:22,950 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:690] 2022-05-03 09:50:22,951 >> Model config RobertaConfig {
"_name_or_path": "roberta-large-mnli",
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.18.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at /home/eknochenhauer/.cache/huggingface/transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-05-03 09:50:23,934 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None
[INFO|configuration_utils.py:654] 2022-05-03 09:50:24,079 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:690] 2022-05-03 09:50:24,079 >> Model config RobertaConfig {
"_name_or_path": "roberta-large-mnli",
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.18.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
0%| | 0/2500 [00:00<?, ?it/s]/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
100%|█████████████████████████████████████| 2500/2500 [5:37:27<00:00, 8.10s/it]
05/03/2022 15:27:51 - INFO - __main__ - Initializing student model
[INFO|configuration_utils.py:654] 2022-05-03 15:27:51,844 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333
[INFO|configuration_utils.py:690] 2022-05-03 15:27:51,846 >> Model config DistilBertConfig {
"_name_or_path": "distilbert-base-uncased",
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.18.0",
"vocab_size": 30522
}
[INFO|modeling_utils.py:1772] 2022-05-03 15:27:52,023 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at /home/eknochenhauer/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[WARNING|modeling_utils.py:2048] 2022-05-03 15:27:52,616 >> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_projector.bias', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_transform.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:2059] 2022-05-03 15:27:52,616 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|configuration_utils.py:654] 2022-05-03 15:27:52,923 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333
[INFO|configuration_utils.py:690] 2022-05-03 15:27:52,926 >> Model config DistilBertConfig {
"_name_or_path": "distilbert-base-uncased",
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.18.0",
"vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,860 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /home/eknochenhauer/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1778] 2022-05-03 15:27:53,861 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|configuration_utils.py:654] 2022-05-03 15:27:54,019 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /home/eknochenhauer/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333
[INFO|configuration_utils.py:690] 2022-05-03 15:27:54,021 >> Model config DistilBertConfig {
"_name_or_path": "distilbert-base-uncased",
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.18.0",
"vocab_size": 30522
}
100%|███████████████████████████████████| 20000/20000 [00:07<00:00, 2840.94ex/s]
05/03/2022 15:28:01 - INFO - __main__ - Training student model on teacher predictions
[INFO|trainer.py:566] 2022-05-03 15:28:01,148 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text. If text are not expected by `DistilBertForSequenceClassification.forward`, you can safely ignore this message.
/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
[INFO|trainer.py:1290] 2022-05-03 15:28:01,165 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-03 15:28:01,165 >> Num examples = 20000
[INFO|trainer.py:1292] 2022-05-03 15:28:01,165 >> Num Epochs = 1
[INFO|trainer.py:1293] 2022-05-03 15:28:01,165 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1294] 2022-05-03 15:28:01,165 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1295] 2022-05-03 15:28:01,165 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1296] 2022-05-03 15:28:01,165 >> Total optimization steps = 625
0%| | 0/625 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module>
main()
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main
trainer.train()
File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step
loss = self.compute_loss(model, inputs)
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
File "/home/eknochenhauer/repos/themis-key-phrase-filtering-and-aggregation/.venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 235, in __getitem__
return self.data[item]
KeyError: 'labels'
0%| | 0/625 [00:00<?, ?it/s]
### Expected behavior
```shell
No KeyError. Successful training of student classifier and output available in output_dir.
```
| 05-03-2022 20:58:32 | 05-03-2022 20:58:32 | This is not a maintained example, so I don't think this will be fixed any time soon (just my 2 cents since I'm tagged, I have never seen this notebook before ;-))<|||||>Understood. It falls under the research_projects folder here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation<|||||>Resolved issue by downgrading to transformers==4.4.0 and using datasets==1.6.1 |
transformers | 17,071 | closed | Add model UniTE to huggingface repository. | # What does this PR do?
This PR adds model UniTE to the repository of Huggingface.
Paper: [UniTE: Unified Translation Evaluation](https://arxiv.org/abs/2204.13346).
Former discussion is [here](https://github.com/huggingface/transformers/issues/16366).
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed.
| 05-03-2022 20:31:22 | 05-03-2022 20:31:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17071). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. If I understand correctly, this adds a new type of sequence classification on XLMRoberta, but this won't work at all inside Transformers, since you are not following the API of other models:
- the model should return a `ModelOuput`
- it should accept the same arguments as other sequence classification models and return the same kind of outputs otherwise it just won't work with the `Trainer` or the `pipeline` function.
Also please follow the general guidelines for a [new model addition](https://huggingface.co/docs/transformers/add_new_model).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,070 | closed | fix: :bug: changing context of multiprocessing while decoding for Windows | # What does this PR do?
- This PR aim is to fix the context for the multiprocessing used in `wav2vec_with_LM batch_decode()` to change from `fork` to `spawn` so it can run with non-Linux based systems as well.
Fixes # (issue)
- multiprocessing context when `batch_decode`
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
This is the [link for the GitHub issue](https://github.com/huggingface/transformers/issues/16898)
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
- Wav2Vec2 with LM
Library:
- benchmarks: @patrickvonplaten
Documentation: @sgugger
| 05-03-2022 20:30:56 | 05-03-2022 20:30:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17070). All of your documentation changes will be reflected on that endpoint.<|||||>Interesting! @elsheikh21 - it seems like `pyctcdecode` is not a big fan of `"spawn"` :-/ . Could you maybe open an issue there: https://github.com/kensho-technologies/pyctcdecode ?
Sorry I thought this would work, but apparently Kensho's `pyctcdecode` doesn't like it. Maybe let's just ask in their repo how to run the code on Windows :-) <|||||>@patrickvonplaten
Yes, It caught me by surprise as well. Of course, I can and I did open [an issue](https://github.com/kensho-technologies/pyctcdecode/issues/65), I will be waiting for their response
In the meanwhile, I thought of the following
- get context based on the user's OS, using `import sys; sys.platform`, refer to [documentation](https://docs.python.org/3/library/sys.html#sys.platform) -if needed.
```
import sys
from multiprocessing import get_context

# use "spawn" for Windows platforms (fork is unavailable there)
contexts = {
    "win32": "spawn",
    "cygwin": "spawn"
}
# if the platform is not Windows then use 'fork'
_context = contexts.get(sys.platform, "fork")
pool = get_context(_context).Pool(num_processes)
```
or wdyt?<|||||>Yeah this would be fine by me as well I think! But did you check that pyctcdecode then works correctly?
To me it really looks like a bug within pyctcdecode<|||||>Yes I opened the issue in pyctcdecode but did not start investigating what might be causing the error, yet I will do so and if ai found sth. I will update them<|||||>Hi all, I've merged PR in pyctcdecode (https://github.com/kensho-technologies/pyctcdecode/pull/68) to bypass this issue by either passing None instead of a pool or automatically detecting a pool made with a spawn context and running without multiprocessing. The issue is that the language model is saved as a class variable, which allows fork to work without reloading the model, but this doesn't get automatically populated for spawn, so the model is missing in the new processes. All of the things I've tried to actually use the pool with spawn require pickling or reloading the model which ends up hurting performance a lot. If anyone wants to try to figure out if there's a way to use spawn without also making everything really slow, please go ahead and discuss or make a PR in pyctcdecode<|||||>Thanks a lot for the fix @lopez86 ! @elsheikh21 I think we should then probably go for the same solution in Transformers no?<|||||>@patrickvonplaten
sorry for not getting back to you earlier, I had some issues in my personal life.
Okay, that seems like a good idea, but just to clarify how do u plan for me to change that in `transformers` package as well?<|||||>Could you try to add such a function to it to see if it works? https://github.com/kensho-technologies/pyctcdecode/pull/68#discussion_r894696237<|||||>@patrickvonplaten, as I understand, https://github.com/kensho-technologies/pyctcdecode/pull/68 completely ignores spawn contexts. This means that at least for now (until https://github.com/kensho-technologies/pyctcdecode/issues/65 is closed), we should not even get in the trouble of creating a spawn pool in Windows. There's probably an overhead when creating a pool that won't be used.<|||||>I agree @falcaopetri,
Should we maybe just allow the user to pass `pool`, and only create it ourselves when it's `None`?<|||||>I guess so, @patrickvonplaten. Moreover, I think that users should be warned if only a `spawn` pool is available or if one was passed by the user, since `pyctcdecode` currently can't use such a pool. I've also proposed adding a warning message within `pyctcdecode` (https://github.com/kensho-technologies/pyctcdecode/pull/78).
Assuming that users can pass an active `pool` (#17879), we might warn them if a `spawn` one is passed (we can either count on the proposed `pyctcdecode`'s warning or add one within `Wav2Vec2ProcessorWithLM`).
If `None`, we could warn users when only `spawn` is available and call `pyctcdecode` with `None` (saving the creation of a pool that would otherwise be ignored by current `pyctcdecode` implementation). If `fork` is available we would keep current behavior.
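Concretely, the user-facing call could end up looking roughly like this (just a sketch of the proposal; the checkpoint is only an example and `audio_array` stands in for real 16 kHz audio):
```python
import torch
from multiprocessing import get_context
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "patrickvonplaten/wav2vec2-base-100h-with-lm"  # example checkpoint that ships with a decoding LM
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# a user-managed pool; on platforms where "fork" is unavailable, one would simply pass pool=None
with get_context("fork").Pool(processes=2) as pool:
    transcription = processor.batch_decode(logits.numpy(), pool=pool).text
```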
If in the future `pyctcdecode` supports both `fork` and `spawn`, we might just roll back https://github.com/huggingface/transformers/pull/15247.<|||||>Sorry to have dropped the ball here a bit - @falcaopetri do you feel like opening a PR to allow passing `pool` or should I do it? :-)<|||||>Thanks for pinging me, @patrickvonplaten, and sorry for the delay.
I've added some tests to my initial proposal and a `Tip` under `batch_decode`'s `pool` arg.
I'll submit the PR by tomorrow.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,069 | closed | Added spanish translation of autoclass_tutorial. | # Translation of autoclass_tutorial.mdx into spanish
I made the translation of autoclass_tutorial.mdx into Spanish (fixes #15947). The document is located in the docs/source/es folder.
This PR also updates the translated `_toctree.yml` to include autoclass_tutorial.
FYI @omarespejel
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? | 05-03-2022 20:19:18 | 05-03-2022 20:19:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! Just that little change in the `_toctree.yml`, @Duedme. Thanks!
Is it ok if I merge when the change is made @sgugger? |
transformers | 17,068 | closed | Add the auto_find_batch_size capability from Accelerate into Trainer | # What does this PR do?
This PR introduces the `find_executable_batch_size` decorator into `Trainer`, so the training loop is repeated if a CUDA OOM is reached, lowering the batch size.
The API looks like this:
```python
trainer = Trainer()
trainer.train(auto_find_batch_size=True)
```
By default it is `False`, and it requires `Accelerate` to be installed.
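Under the hood this relies on Accelerate's `find_executable_batch_size` decorator, which retries a function with a smaller batch size whenever a CUDA OOM is raised. A rough sketch of the pattern (the inner function name and starting size are placeholders, and the exact import path may differ between Accelerate versions):
```python
from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=64)
def inner_training_loop(batch_size):
    # build the dataloaders/optimizer for `batch_size` and run the training loop here;
    # on a CUDA OOM the decorator clears memory and retries with a smaller batch size
    ...

inner_training_loop()  # called without arguments, the decorator injects `batch_size`
```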
Fixes # (issue)
Partially solves https://github.com/huggingface/transformers/issues/16987
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-03-2022 19:31:24 | 05-03-2022 19:31:24 | @stas00 I'm getting a test failure on the metrics:
```python
tests/trainer/test_trainer.py:1426: in check_mem_metrics
metrics = trainer.train().metrics
src/transformers/trainer.py:1215: in train
ignore_keys_for_eval=ignore_keys_for_eval,
src/transformers/trainer.py:1571: in _inner_training_loop
self._memory_tracker.stop_and_update_metrics(metrics)
src/transformers/trainer_utils.py:536: in stop_and_update_metrics
stage = self.derive_stage()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <transformers.trainer_utils.TrainerMemoryTracker object at 0x7f929f787e10>
def derive_stage(self):
"""derives the stage/caller name automatically"""
caller = inspect.currentframe().f_back.f_back.f_code.co_name
if caller in self.stages:
return self.stages[caller]
else:
raise ValueError(
> f"was called from {caller}, but only expect to be called from one of {self.stages.keys()}"
)
E ValueError: was called from _inner_training_loop, but only expect to be called from one of dict_keys(['__init__', 'train', 'evaluate', 'predict'])
```
Any advice on how to approach a solution?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This will overcome the problem:
```
diff --git a/src/transformers/trainer_utils.py b/src/transformers/trainer_utils.py
index 22b44a2f0..d4c523249 100644
--- a/src/transformers/trainer_utils.py
+++ b/src/transformers/trainer_utils.py
@@ -356,6 +356,7 @@ class TrainerMemoryTracker:
stages = {
"__init__": "init",
"train": "train",
+ "_inner_training_loop": "train",
"evaluate": "eval",
"predict": "test",
}
```
<|||||>Please make sure all tests pass after resolving conflicts and before merging!<|||||>Any chance similar functionality could be supported for inference? 🙏 |
transformers | 17,067 | closed | MLflowCallback set experiment name | # What does this PR do?
This PR includes the following:
- Resolves #12841: uses the MLFLOW_EXPERIMENT_NAME environment variable and the mlflow.set_experiment() method to ensure the experiment is created if it does not exist already.
- Fixes #17066: Checks properly for an active run using mlflow.active_run() (Bug introduced in #16131).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-03-2022 18:30:38 | 05-03-2022 18:30:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Oh actually, multiple checks in our CI are not running for some reason. Can you try sending empty commits to see if it triggers it?<|||||>@sgugger I tried an empty commit, a comment change on the py file, and a rebase. They all trigger only two checks. They seem to pass, though.<|||||>Yes, but for some reason, the whole battery of tests run by CircleCI is not launching (check any other PR to see there are actually 18 to 20 checks). I have no idea why they don't, and can't merge without being sure nothing is broken by the PR.<|||||>@sgugger no clue what's going on. I even tried a new PR in #17091, which also triggers only two CI jobs.<|||||>I have no idea what the problem is. Wrote to CircleCI support to try to get some help.<|||||>Closing this PR in favor of #17091, which is running all the CI tests.
transformers | 17,066 | closed | Incorrect check for MLFlow active run in MLflowCallback | ### System Info
```shell
- mlflow==1.25.1
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.10.76-linuxkit-aarch64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.10.2 (False)
```
### Who can help?
Should be fixed by #17067
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Follow Training tutorial as per https://huggingface.co/docs/transformers/training
2. Change the training arguments to use `TrainingArguments(output_dir="test_trainer", report_to=['mlflow'], run_name="run0")`
3. On `trainer.train()` the MLFlow UI should report a run with a Run Name of `run0` which is not currently the case.
Cause of the Issue:
```
>>> import mlflow
>>> print(mlflow.active_run is None, mlflow.active_run() is None)
False True
```
In `src/transformers/integrations.py`, the line `if self._ml_flow.active_run is None:` needs to be replaced by `if self._ml_flow.active_run() is None:`.
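For reference, the corrected setup would look roughly like this (a sketch of the intended logic, not the exact patch):
```python
if self._ml_flow.active_run() is None:  # call the function: the attribute itself is never None
    self._ml_flow.start_run(run_name=args.run_name)
```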
### Expected behavior
PR #14894 introduced support for run_name in the MLflowCallback. However, this does not work as expected, since the presence of an active run is checked against the method reference `mlflow.active_run` (which is never `None`) instead of the result of calling it. Bug introduced by #16131.
| 05-03-2022 18:23:46 | 05-03-2022 18:23:46 | |
transformers | 17,065 | closed | symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev' | ### System Info
**OS**: macOS Monterey Version 12.4 Beta
**Model**: MacBook Air (M1, 2020)
**Chip**: Apple M1
**Memory**: 8GB
```shell
Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:24:38)
[Clang 12.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers, flax, tensorflow, torch
/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/jax/_src/lib/__init__.py:33: UserWarning: JAX on Mac ARM machines is experimental and minimally tested. Please see https://github.com/google/jax/issues/5501 in the event of problems.
warnings.warn("JAX on Mac ARM machines is experimental and minimally tested. "
>>> transformers.__version__
'4.18.0'
>>> flax.__version__
'0.4.1'
>>> tensorflow.__version__
'2.8.0'
>>> torch.__version__
'1.10.1'
```
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Importing anything from the library results in a "symbol not found" error. I am sure this issue has something to do with the Apple Silicon architecture.
Here's the stack trace:
```shell
>>> from transformers import *
Traceback (most recent call last):
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 857, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/ryanrudes/miniforge3/envs/.../lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 72, in <module>
from tokenizers import AddedToken
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 857, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/ryanrudes/miniforge3/envs/.../lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py", line 23, in <module>
from transformers.pipelines import Pipeline, pipeline
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 28, in <module>
from ..models.auto.configuration_auto import AutoConfig
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/layoutlm/__init__.py", line 22, in <module>
from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/layoutlm/configuration_layoutlm.py", line 19, in <module>
from transformers import PretrainedConfig, PreTrainedTokenizer, TensorType
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 847, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 859, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.tokenization_utils because of the following error (look up to see its traceback):
dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1037, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 845, in __getattr__
value = self._get_module(name)
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 859, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.convert_graph_to_onnx because of the following error (look up to see its traceback):
Failed to import transformers.tokenization_utils because of the following error (look up to see its traceback):
dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
```
Same issue when importing tokenizers:
```shell
Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:24:38)
[Clang 12.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from tokenizers import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: dlopen(/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/tokenizers/tokenizers.cpython-39-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
```
### Expected behavior
Obviously, the library is supposed to import without any errors.
| 05-03-2022 17:49:22 | 05-03-2022 17:49:22 | Uninstalling a couple dependencies including the transformers package itself, and then installing everything within conda solved my problem. Specifically for anyone with the same problem:
```shell
pip uninstall torch tokenizers transformers
conda install pytorch
conda install -c huggingface transformers
``` |
transformers | 17,064 | closed | type hints for pytorch models | # What does this PR do?
Fixes #16059 :
Added type hints for pytorch models - `canine`, `convbert`, `convnext`, `encoder_decoder`, `gpt2`, `gptj`, `megatron_bert`, `mobilebert`, `perceiver`, `retribert`, `swin`, `transfo_xl` & `van`.
For the code quality, I ran **`make fixup`**, reformatted the code, and also resolved consistency problems across other models [which were `decision_transformer`, `glpn`, `maskformer`, & `segformer`].
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
| 05-03-2022 16:54:17 | 05-03-2022 16:54:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 I have added the changes suggested by you & all checks are passing!<|||||>Looks great, thanks for the PR! |
transformers | 17,063 | closed | Make sure telemetry arguments are not returned as unused kwargs | # What does this PR do?
As pointed out in #17056, the telemetry arguments are sometimes returned as unused kwargs. This is because `AutoConfig.from_pretrained` ends up using the `from_dict` method and not the `from_pretrained` method in most cases, and that `from_dict` method does not handle those kwargs.
Fixes #17056 | 05-03-2022 15:47:58 | 05-03-2022 15:47:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,062 | closed | Deprecate model templates | # What does this PR do?
This PR officially deprecates the model templates and moves their test to a daily scheduled job. | 05-03-2022 15:29:38 | 05-03-2022 15:29:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! |
transformers | 17,061 | closed | End2End RAG training hangs | ### System Info
```shell
Python 3.9.0 (default, Nov 15 2020, 14:28:56)
...
>>> transformers.__version__
'4.17.0'
>>> torch.__version__
'1.7.1'
>>> ray.__version__
'1.11.0'
```
### Who can help?
@shamanez, thanks for a great example! I'm having trouble training a model end2end and wonder if you could help out. I have two GPUs available so I've set the arguments relevant for the end2end training to:
```
--gpus 1 \
--end2end \
--distributed_retriever ray \
--num_retrieval_workers 4 \
--index_gpus 1 \
--gpu_order [3,6]
```
This configuration doesn't seem to work since it results in `len(self.retrieval_workers)=0` which means that the call to `re_load()` in `RagRayDistributedRetriever` just hangs eternally.
I also tried setting `--gpu 2` above, but that breaks with error
```*** ValueError: Failed to look up actor with name 'retrieval_worker_0'. This could because 1. You are trying to look up a named actor you didn't create. 2. The named actor died. 3. You did not use a namespace matching the namespace of the actor.```
I don't quite know how Ray works, so I don't know if changing `re_load()` to this would break it:
```
if len(self.retrieval_workers) > 0:
ray.get([worker.clear_object.remote() for worker in self.retrieval_workers])
# build the index object again
index = self._build_index(self.config)
ray.get(
[
worker.create_rag_retriever.remote(
self.config, self.question_encoder_tokenizer, self.generator_tokenizer, index
)
for worker in self.retrieval_workers
]
)
else:
self.index = self._build_index(self.config)
```
What are your thoughts? Thanks!
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Set end2end arguments as shown above.
### Expected behavior
```shell
Continuous training without hanging at function re_load()
```
| 05-03-2022 14:42:35 | 05-03-2022 14:42:35 | Hi, this bug is caused by the latest Ray version. Actually, I fixed it in the original RAG repository :).
Please change this [line](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L692) similar to [this](line).
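Roughly, the change looks like this (a sketch; any other arguments already passed to `ray.init` in `finetune_rag.py` stay as they are, and the namespace string itself is arbitrary):
```python
import ray

# an explicit namespace lets the named actors ("retrieval_worker_0", ...) created at startup
# be looked up again later from that same namespace
ray.init(namespace="rag")
```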
In other words, just add a namespace into the Ray init. :) <|||||>Thanks a lot!<|||||>Hi @shamanez, change which line similar to which line? The link ([this](https://github.com/huggingface/transformers/issues/line)) is broken.<|||||>I've updated the entire codebase. Please check the latest version.
transformers | 17,060 | closed | Add LayoutLMv3 | # What does this PR do?
This PR implements LayoutLMv3. LayoutLMv3 doesn't require a Detectron2 backbone anymore (yay!).
The PR also includes an example script that can be used to reproduce results of the paper.
Fixes #16914
To do:
- [x] fix remaining tokenizer tests. These are very black-boxy to me. Pinging @SaulLu here.
- [x] add model to doc tests
- [x] remove `is_detection` logic
- [x] Make sure the slow tests involving `PyTesseract` pass
- [x] Merge `add_layoutlmv3_simplify` branch | 05-03-2022 14:15:12 | 05-03-2022 14:15:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just for the purpose to keep track of the current status.
As discussed offline I think the next step to "solve" the tokenization tests is to figure out how `["hello", "world"]` is tokenized in the original code: is it [`0, 42891, 8331, 2]` (`['<s>', 'Ġhello', 'Ġworld', '</s>']`) or `[0, 20760, 232, 2]` (`['<s>', 'hello', "world", '</s>']`) or something else ? :blush: <|||||>As seen [here](https://github.com/microsoft/unilm/blob/925de7a9ea500e992ec5de02ea193a5eb9d5aa26/layoutlmv3/examples/run_funsd_cord.py#L313), text is tokenized using RobertaTokenizer, where one provides `is_split_into_words=True`. Hence, ["hello", "world"] is tokenized as follows:
```
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("microsoft/layoutlmv3-base")
text = ["hello", "world"]
encoding = tokenizer(text, is_split_into_words=True)
```
So this results in [0, 20760, 232, 2].<|||||>Thanks for the clarification! I've opened a PR on your branch (https://github.com/NielsRogge/transformers/pull/38) which proposes several changes including 1) changing the default behaviour so that by default a space prefix is added and including all the changes needed to make it work and 2) some small changes to resolve several of the tests that were failing.
I wonder if we shouldn't just remove the option to set `add_prefix_space` to False because the result will not be satisfactory for decoding and I'm not sure we want to do any fancy tricks to make it "work". (Or at least we should log a message to warn the user that the option is risky).<|||||>Hi @NielsRogge
As in issue #13554 and PR #17092, when `input_ids` is longer than the model's `max_length`, it gets split into multiple inputs, but `pixel_values` still contains only 1 image. Are you going to fix this right now, or in a later PR?
How to reproduce
```python
from transformers import AutoProcessor, AutoModelForTokenClassification
from datasets import load_dataset
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large")
processor.feature_extractor.apply_ocr = False
model = AutoModelForTokenClassification.from_pretrained("microsoft/layoutlmv3-large")
words = ['hello' for i in range(1000)]
boxes = [[0, 1, 2, 3] for i in range(1000)]
image = Image.new("RGB", (224, 224))  # any dummy image is enough to reproduce the error
encoding = processor(
image,
text=words,
boxes=boxes,
truncation=True,
padding='max_length',
return_overflowing_tokens=True,
return_tensors="pt"
)
print(encoding['input_ids'].shape) # torch.Size([2, 512])
print(encoding['pixel_values'].shape) #torch.Size([1, 3, 224, 224])
overflow_to_sample_mapping = encoding.pop('overflow_to_sample_mapping')
model(**encoding)
# ---> RuntimeError: Sizes of tensors must match except in dimension 1.
# Expected size 4 but got size 1 for tensor number 1 in the list.
```<|||||>Thank you so much for your fantastic work. I was wondering if you plan to include the object detection task in LayoutLMv3 as well. I noticed that the [PubLayNet fine-tuned model weights](https://huggingface.co/HYPJUDY/layoutlmv3-base-finetuned-publaynet) have already been uploaded to HuggingFace, but I couldn't find any documentation on this capability in this repository. <|||||>> EDIT: Just realized these are the visual tokens... controlled via `add_visual_labels`
@NielsRogge Thanks for this contribution!
While testing the processor, I'm seeing extra padding on the resultant labels that I did not expect and have not experienced with older versions of layoutlmv2processor.
```
import numpy as np
from transformers.models.auto.processing_auto import AutoProcessor
processor = AutoProcessor.from_pretrained(
pretrained_model_name_or_path="microsoft/layoutlmv3-base",
use_fast=True,
add_prefix_space=True,
apply_ocr=False,
)
# not batched
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]
word_labels = [1, 2]
image = np.zeros((224, 224, 3), dtype=np.uint8)
results = processor(
image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt"
)
for k, v in results.items():
print(k, v.size())
labels = results.labels.squeeze().tolist()
print(labels)
```
output:
```
input_ids torch.Size([1, 8])
attention_mask torch.Size([1, 8])
bbox torch.Size([1, 8, 4])
labels torch.Size([1, 205])
pixel_values torch.Size([1, 3, 224, 224])
[-100, 1, -100, -100, -100, -100, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]
```
This happens beyond maximum seq length as well... where the labels will have a dimension seq_length + ~197
Is this expected? <|||||>Hi @dcyoung,
thanks for taking a look. Actually you make a great point; I implemented it as the original implementation (where the authors label all visual tokens with -100 and just add a classifier on top of the entire `sequence_output`), however it makes a lot of sense to just simplify the code in `LayoutLMv3ForTokenClassification` and not make the processor do this.
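In practice that would mean slicing off the visual tokens inside the head instead of padding the labels, something along these lines (a sketch of the idea, not the final code):
```python
# keep only the text tokens before classifying, so labels stay aligned with input_ids
seq_length = input_ids.shape[1]
text_output = sequence_output[:, :seq_length]
logits = self.classifier(self.dropout(text_output))
```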
Thanks a lot!<|||||>And hi @sina-ehsani,
unfortunately I'm (for now) not planning to add the object detection part, because the framework being used (Mask R-CNN) is a ridiculous amount of code and it's not straightforward - for now - to add this to the Transformers library (as there's a "one model, one file" philosophy). So I'd advise to use the original repository for that.
It may be that in the future we add this framework, but I'm actually much more a fan of simpler frameworks like DETR and YOLOS. It would be great if someone fine-tuned a [YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos) model initialized with the weights of the [Document Image Transformer (DiT)](https://huggingface.co/docs/transformers/model_doc/dit). I feel like you would get the same performance. <|||||>Thank you so much for adding the model, I had a question on segment position embeddings. How do you create segment position embeddings during inference when the labels are unknown and are just bounding boxes from an ocr. In this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb) the test set also contains segment level bounding box. I have trained a model on segment level embeddings on my use case and it doesn't perform well on token level 2D embeddings during inference.<|||||>> YOLOS
Thanks for the idea. I will have a go at this.
My understanding unilm repo uses Detectron2 (Mask-RCNN) for the backbone of Object Detection in LayoutLMv3 for benchmarking compatibility. Would it be possible to swap out the image backbone for a vision transformer in the LayoutLMv3 training. I saw in the paper:
`LayoutLMv3 is the first multimodal model in Document AI
that does not rely on a pre-trained CNN or Faster R-CNN
backbone to extract visual features, which significantly saves
parameters and eliminates region annotations.`
My understanding is that LayoutLMv3 is able to generalise better with the unsupervised pre-training over the MIM+MLM+WPA objectives. It also learns correlations between the text / visual inputs that it benefits with on downstream tasks. YOLOS wouldn't include this key text information in document layout anlaysis.
Please correct me if I am wrong... I am learning here.<|||||>@NielsRogge
>
This thread has lead me to hacking a model that combines the YolosLoss and YolosObjectDetection head with the LayoutLMv3Model to build a LayoutLMv3ObjectDetection prediction head.
Changes to the LayoutLMv3Config and LayoutLMv3FeatureExtractor had to be made to allow for this.
This approach avoids the Mask R-CNN discussed.
Is this something you would be interested in reviewing and integrating if I open a PR?
Or does it deviate too significantly from the research paper? |
transformers | 17,059 | closed | Remove Python and use v2 action | # What does this PR do?
Fix the model templates GitHub job which was broken with the Python 3.6 removal. | 05-03-2022 13:38:32 | 05-03-2022 13:38:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,058 | closed | Chinese parentheses can't be handled by fast tokenizer | ### Problem description
The chinese language has some special characters for parentheses: "(", ")" (= parentheses with an integrated whitespace). Both characters aren't part of the XLMRoBERTa Tokenizer, neither the fast nor the "slow" one. Interestingly enough, it is possible to add these characters to the tokenizer vocabulary but only for the "slow"/non-fast tokenizer variant.
### System Info
```shell
Package Version
------------------ ---------
certifi 2021.10.8
charset-normalizer 2.0.12
click 8.1.3
filelock 3.6.0
huggingface-hub 0.5.1
idna 3.3
joblib 1.1.0
numpy 1.22.3
packaging 21.3
pip 20.0.2
pkg-resources 0.0.0
pyparsing 3.0.8
PyYAML 6.0
regex 2022.4.24
requests 2.27.1
sacremoses 0.0.53
sentencepiece 0.1.96
setuptools 44.0.0
six 1.16.0
tokenizers 0.12.1
torch 1.11.0
tqdm 4.64.0
transformers 4.18.0
typing-extensions 4.2.0
urllib3 1.26.9
```
### Who can help?
@LysandreJik, @Sau
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Code to reproduce
```python
import sentencepiece
import transformers
from transformers import AutoTokenizer
print("Transformers version:", transformers.__version__)
print("----------------")
special_chinese_parantheses = "("
print(special_chinese_parantheses)
# "slow"
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=False)
print(tokenizer.tokenize(special_chinese_parantheses))
tokenizer.add_tokens(special_chinese_parantheses)
print(tokenizer.tokenize(special_chinese_parantheses))
print("----------------")
# fast
tokenizer_fast = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=True)
print(tokenizer_fast.tokenize(special_chinese_parantheses))
tokenizer_fast.add_tokens(special_chinese_parantheses)
print(tokenizer_fast.tokenize(special_chinese_parantheses))
```
### Output
```sh
Transformers version: 4.18.0
----------------
(
['▁(']
['(']
----------------
['▁(']
['(']
``` | 05-03-2022 11:48:29 | 05-03-2022 11:48:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still open.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,057 | closed | Rewrite TensorFlow train_step and test_step | Draft PR for a full rewrite of the TF train/test steps. I swear this will fix like 50% of our TF issues in one PR.
Current status:
- Correctly handles output mapping across most model classes for losses + metrics
- Keras metrics are back, even with the dummy loss. (!!!!)
- Keras metrics work correctly even for multi-output models (like QA)
- In most cases, users can pass tensors in either the input dict or the labels and the model will handle them correctly.
- No more errors when calling `fit()` when the model has nested output structure (e.g. the model outputting a `past` tuple)
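To illustrate the points above, the intended user-facing workflow looks roughly like this (the checkpoint is arbitrary and `tf_train_dataset`/`tf_val_dataset` are placeholder `tf.data.Dataset`s yielding dicts that include a `labels` key):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# no explicit loss: the model's internal loss is used, and Keras metrics still update correctly
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), metrics=["accuracy"])

model.fit(tf_train_dataset, validation_data=tf_val_dataset, epochs=3)
```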
What's left to do:
- [X] Models with multiple unusual outputs that do not match label names may still have issues with metrics. This is relatively uncommon. We support adding a property to those classes to tell Keras what to do with the labels, but we haven't added it to any models yet. (None are failing in tests, so hopefully we won't need to worry too much about this!)
- [x] Testing testing testing! I want to rerun all notebooks/examples and make sure the user experience is good.
- [X] CI testing - We need to make sure we don't regress on any of this
- [ ] Discoverability: After this is merged we should update notebooks/examples to show off the cool new features, and document our TF workflow/philosophy somewhere that new users will find.
| 05-03-2022 11:01:34 | 05-03-2022 11:01:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(Requesting reviews now that @gante is back) |
transformers | 17,056 | closed | AutoConfig.from_pretrained("model", return_unused_kwargs=True) returns `"_from_auto": True` field against specification | ### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
@sgugger
Small bug in which for some cases `{"_from_auto": True}` is returned [against specification](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/auto/configuration_auto.py#L646).
Seems to originate [here](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/auto/configuration_auto.py#L671) and/or [here](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/configuration_utils.py#L659)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Replicable with
```py
>>> from transformers import AutoConfig
>>> config, kwargs = AutoConfig.from_pretrained("bert-base-uncased", return_unused_kwargs=True)
>>> kwargs
{'_from_auto': True}
```
### Expected behavior
There should be no `"_from_auto": True` field in returned dict.
```py
>>> from transformers import AutoConfig
>>> config, kwargs = AutoConfig.from_pretrained("bert-base-uncased", return_unused_kwargs=True)
>>> kwargs
{}
```
| 05-03-2022 10:47:23 | 05-03-2022 10:47:23 | I can reproduce, thanks for flagging and for taking the time to give us a clear example of code that fails!
I will try to dive into this and have a fix ready later today. |
transformers | 17,055 | closed | Fix RNG reload in resume training from epoch checkpoint | # What does this PR do?
This PR fixes the reproducibility in training when checkpoints are saved every epoch. The main reason it was failing (as pointed out in #17032) is that the RNG states were never reloaded. They need to be reloaded exactly before iterating through the new epoch, since starting that iteration changes the global PyTorch RNG (even if the dataloader uses its own generator). The new test added makes sure this reproducibility is fully tested.
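Conceptually, the state that gets saved with the checkpoint and restored right before the new epoch is just each library's RNG state, along the lines of (a sketch of the idea, not the actual Trainer code):
```python
import random, numpy as np, torch

# at checkpoint time
rng_state = {"python": random.getstate(), "numpy": np.random.get_state(), "torch": torch.get_rng_state()}

# on resume, right before iterating over the dataloader for the new epoch
random.setstate(rng_state["python"])
np.random.set_state(rng_state["numpy"])
torch.set_rng_state(rng_state["torch"])
```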
While debugging this, two issues occurred, which this PR also fixes.
1. There are multiple warnings for the computation of flos when the model is not an NLP model. This PR reduces it to one.
2. The test of this reproducibility is flaky on multiple GPUs because it relies on some randomness inside the model, but the PyTorch RNG will be called in random order between the two "copies" of the model executed by `DataParallel` (an issue that wouldn't be the case with `DistributedDataParallel` but we would need to execute the test via a launcher in that case). So in the test, we only do PyTorch randomness on one or zero GPU to fix this flakiness.
Fixes #17032 | 05-02-2022 20:22:15 | 05-02-2022 20:22:15 | _The documentation is not available anymore as the PR was closed or merged._ |