repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 17,657 | closed | update readme CodeParrot | - use CodeParrot scores of v1.1
- change evaluation command to use accelerate
| 06-10-2022 13:47:25 | 06-10-2022 13:47:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,656 | closed | Fix dtype getters | # What does this PR do?
This is a hotfix for #17649, which relies on the old behavior when there are no floating-point parameters. I think some empty parameter generators are responsible, but I can't understand why the old code works. I will dig more into it, but for now we need something to fix all uses of (at least) BERT in the examples, as for now
```
python examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --output_dir ~/tmp/test-glue/ --do_train
```
fails (but not after this PR).
Fixes #17649 | 06-10-2022 11:20:37 | 06-10-2022 11:20:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@stas00 Merging this without waiting for you to wake up to have the examples running again. Once I've investigated more, I'll open a new PR with a (hopefully) better fix and a new test (since we should have caught that issue on the CI).<|||||>Thank you @sgugger ! You are faster before I even realized the scheduled CI is broken ❤️ <|||||>This is very mysterious, almost undefined behavior by python.
So the error is:
> UnboundLocalError: local variable 't' referenced before assignment
so the loop variable is not visible in the `else` clause - that behavior is not defined in the Python documentation:
https://docs.python.org/3/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops
A quick test shows that it is visible:
```
In [1]: for i in [1,2]:
...: print(i)
...: else:
...: print(i)
...:
1
2
2
```
Unless the loop generator returned nothing, and thus `i` was never initialized:
```
In [1]: for i in []:
...: print(i)
...: else:
...: print(i)
NameError: name 'i' is not defined
```
but if `parameter.parameters()` was an empty generator, your fix `next(parameter.parameters())` should have failed too.
Very mysterious.
But in any case, eventually we need to add a test that covers that use case. Most likely none of the existing tests, including the one I added covered the `else` branch and thus we missed it.
and since you're no longer relying on the loop's local variable we probably should drop `else` as this no longer seems right.<|||||>Thank you all for this PR!
@stas00 Actually when the generator is empty, `next()` throws `StopIteration`, which is already handled by the try-except statement; that's why this PR makes it work properly.<|||||>ah, I see, thanks for highlighting what I missed, @KeremZaman! that explains it!
so @sgugger the other 2 cases can be reverted to how they were before this PR, since we already have the last entry.
and really this PR can be just to augment:
```
- except StopIteration:
+ except (StopIteration, UnboundLocalError):
```
but it's totally fine to leave things as they were merged. except someone is bound to try to fix it down the road.
I propose a much cleaner and more readable solution that requires no try/except at all, POC:
```
if len(parameter.parameters()) > 0:
    for loop ...
else:
    nn.DataParallel compatibility in PyTorch 1.5 code
```<|||||>Agreed. Like I said, I will dive deeper this afternoon and make another PR for this today or Monday :-) |
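For readers who want to try the POC above, here is one runnable take on the same idea. It is only an illustration (the `ValueError` at the end is a placeholder standing in for the real nn.DataParallel-compatibility branch), not the code that was merged:

```python
import torch
from torch import nn


def first_floating_dtype(module: nn.Module) -> torch.dtype:
    # Materialize the generator once and branch explicitly, so neither
    # try/except nor a for/else is needed to handle the empty case.
    params = list(module.parameters())
    if params:
        for t in params:
            if t.is_floating_point():
                return t.dtype
        return params[-1].dtype  # no floating dtype found: fall back to the last one
    # placeholder for the nn.DataParallel (PyTorch 1.5) compatibility branch
    raise ValueError("module has no parameters")


print(first_floating_dtype(nn.Linear(2, 2)))         # torch.float32
print(first_floating_dtype(nn.Linear(2, 2).half()))  # torch.float16
```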
transformers | 17,655 | closed | CircleCI branch filtering on push-ci branch | To avoid the branch `push-ci` (the branch to launch the actual Push CI on GitHub actions) to run `CircleCI` tests, as those are already run when the commits are merged into `main`.
The `filters` could not be added directly in the job definitions, like
```
run_tests_torch_and_tf_all:
    filters:
        branches:
            ignore:
                - circleci_branch_filtering
    working_directory: ~/transformers
```
nor
```
run_tests_torch_and_tf_all:
    <<: *job_filters
    working_directory: ~/transformers
```
otherwise I got error
```
... [#/jobs/check_code_quality] extraneous key [filters] is not permitted| | Permitted keys:
```
See
https://stackoverflow.com/a/57366603 | 06-10-2022 10:42:12 | 06-10-2022 10:42:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I will close this PR, once #17692 is merged (as it is preferred). |
transformers | 17,654 | closed | Can we do adaptive pretraining on BERT-related models using transformers? | ### Feature request
Adaptive pretraining methods like domain-adaptive pretraining and task-adaptive pretraining can benefit downstream tasks, as illustrated in [https://aclanthology.org/2020.acl-main.740.pdf](url). On [https://huggingface.co/models](url), there are successful models pretrained with data from the source domain. I would like to do adaptive pretraining (with tasks like MLM) on chinese-roberta-wwm-ext-large: [https://huggingface.co/hfl/chinese-roberta-wwm-ext-large](url) using unlabeled target-domain data, so as to get better results on downstream tasks.
### Motivation
BERT and related models are benefiting some areas. The following are some examples:
1. [http://arxiv.org/abs/2004.02288](url)
2. [http://arxiv.org/abs/1908.10063](url)
3. [http://arxiv.org/abs/2007.15779](url)
4. [http://arxiv.org/abs/1906.02124](url)
5. [http://arxiv.org/abs/1904.05342](url)
I would like to do adaptive pretraining to chinese-roberta-wwm-ext-large, using unlabeled Chinese data in my area. It is good to start with conventional MLM. Afterwards, I might try and follow the pretraining task setting of chinese-roberta-wwm-ext-large, i.e. whole word masking and dynamic masking.
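For concreteness, a rough sketch of what such continued MLM pretraining could look like with the `Trainer` API is shown below. The corpus path, output directory and hyperparameters are placeholders, and `DataCollatorForWholeWordMask` could be swapped in for the whole-word-masking variant; this is only an illustration, not an official recipe.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,  # or DataCollatorForWholeWordMask for WWM
    Trainer,
    TrainingArguments,
)

checkpoint = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# one plain-text file with one document/sentence per line (placeholder path)
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="roberta-wwm-dapt",  # placeholder
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=5e-5,
    save_steps=10_000,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```

The maintained `run_mlm.py` example script in the repository covers the same workflow with more options.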
### Your contribution
Hopefully, a domain-specific pretrained language model. | 06-10-2022 09:31:24 | 06-10-2022 09:31:24 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>I will. Thanks for the instruction.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,653 | closed | Abnormal behavior of OPT except OPT-350m | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.11.0-1020-azure-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
def log_probs_with_ppl(path, prompt):
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch
    import torch.nn.functional as F
    # for half precision (13b models): torch_dtype=torch.float16
    model = AutoModelForCausalLM.from_pretrained(path).cuda()
    model.eval()
    tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    outputs = model(input_ids, labels=input_ids)
    logits = outputs.logits
    arg_probs, _ = F.softmax(logits, dim=-1).max(-1)
    print("argmax probility:", arg_probs[0].cpu().detach().numpy())
    log_probs, tokens = F.log_softmax(logits, dim=-1).max(-1)
    print("argmax log probability:", log_probs[0].cpu().detach().numpy())
    sent = tokenizer.decode(tokens.squeeze().cpu().detach().numpy(), skip_special_tokens=False)
    print("argmax tokens:", sent)
    xentropy_loss = outputs[0]
    print("cross entropy loss:", xentropy_loss.item())
    ppl = torch.exp(xentropy_loss).item()
    print("ppl:", ppl)


if __name__ == "__main__":
    model_path = 'huggingface/opt-1.3b'
    prompts = "There is a book on the desk."
    log_probs_with_ppl(model_path, prompts)
```
### Expected behavior
The scripts above are used to test both the gpt2 and opt models, the results are shown below:
#### Sentence Scoring-GPT2
```
Input: There is a book on the desk.
# gpt2
argmax probility: [0.05637151 0.27859244 0.08230144 0.11579145 0.1521898 0.34015855 0.1640605 0.16998181]
argmax log probability: [-2.8757913 -1.2780054 -2.4973667 -2.1559646 -1.8826268 -1.0783434 -1.8075199 -1.772064 ]
argmax tokens: is no lot called the subject of It
cross entropy loss: 4.143580913543701
ppl: 63.02811813354492
# gpt2-Medium
argmax probility: [0.39332193 0.26675868 0.08419792 0.1576896 0.2581378 0.1720277 0.1351828 0.13347614]
argmax log probability: [-0.93312687 -1.3214109 -2.474585 -1.8471267 -1.3542618 -1.7600996 -2.0011272 -2.0138326 ]
argmax tokens: are no lot called the subject in
cross entropy loss: 3.641242504119873
ppl: 38.13919448852539
# gpt2-Large
argmax probility: [0.33576348 0.27546927 0.09161323 0.16216931 0.29808053 0.09624117 0.16370784 0.15139417]
argmax log probability: [-1.0913483 -1.2892792 -2.3901796 -1.8191143 -1.2103915 -2.340898 -1.809672 -1.8878684]
argmax tokens: are no lot called the subject, It
cross entropy loss: 3.1841206550598145
ppl: 24.146047592163086
```
#### Sentence Scoring-OPT
```
Input: There is a book on the desk.
# opt-125m
argmax probility: [0.00063085 0.00046801 0.00079859 0.00062031 0.00056935 0.00048211 0.00078747 0.00045703 0.00154377]
argmax log probability: [-7.368442 -7.6670203 -7.1326685 -7.3852873 -7.471012 -7.6373434 -7.146683 -7.690766 -6.4735274]
argmax tokens: I aren nothing difference called youtube subjectawaru It
cross entropy loss: 9.004321098327637
ppl: 8138.173828125
# opt-350m
argmax probility: [0.09890626 0.29053134 0.30989376 0.05687688 0.31782693 0.24106818 0.15811151 0.12125468 0.2616786 ]
argmax log probability: [-2.313583 -1.2360439 -1.1715258 -2.8668664 -1.1462483 -1.4226755 -1.8444548 -2.109862 -1.3406382]
argmax tokens: 's a lot called the subject in
cross entropy loss: 3.714618682861328
ppl: 41.042930603027344
# opt-1.3b
argmax probility: [0.18575612 0.51934767 0.6326897 0.51414996 0.97731984 0.9402624 0.63661176 0.20046458 0.6865138 ]
argmax log probability: [-1.6833206 -0.65518177 -0.4577752 -0.6652403 -0.0229413 -0.06159634 -0.45159534 -1.6071177 -0.37612897]
argmax tokens: I are no difference called Amazon subject beside It
cross entropy loss: 7.282720565795898
ppl: 1454.94091796875
# opt-6.7b
argmax probility: [0.9414 0.391 0.766 0.627 0.998 0.7646 0.978 0.4473 0.8735]
argmax log probability: [-0.0602 -0.939 -0.2664 -0.4668 -0.001997 -0.268 -0.02211 -0.8047 -0.135 ]
argmax tokens: I's no lot called this subject.
cross entropy loss: 7.17578125
ppl: 1307.0
```
When the model size increases, gpt2 tends to predict more accurate results with smaller ppl. However, opt models (except opt-350m) produce much larger ppl than the ppl of opt-350m.
Besides, it is abnormal that, as the model size increases, the OPT models seem to have larger confidence scores for the argmax-decoded tokens (check **argmax probility** above).
I wonder what is causing such an issue. Looking forward to your reply. Thx!
| 06-10-2022 08:58:13 | 06-10-2022 08:58:13 | Related issue https://github.com/huggingface/transformers/issues/17545<|||||>Hmm, that's a very good test - it indeed seems like there is a bug with the model weights conversion... @stephenroller can you by any chance run those models with metaseq to verify that `ppl` is reasonable? <|||||>Also @hemingkx, I can confirm your results - just sent a mail to the authors since we think the problem lies in the model conversion<|||||>Thanks for your early reply! Hope this can be resolved soon 😊.<|||||>This is in my queue, just have a long queue.<|||||>@stephenroller Big Science was hoping to compare the BLOOM model to OPT. Do you have any idea if it’s reasonable to expect that this is resolved by the end of the month?
I would be very appreciative if you could keep me apprised of the progress.<|||||>@stephenroller do you use Tensor Parallelism for OPT models >= 1.3b? (I can't find the information on the paper)
I faced a strange behaviour when trying to convert a BLOOM model trained with Megatron-LM. If so, it might be related, but I'm not sure.
Related issue: https://github.com/pytorch/pytorch/issues/76232<|||||>We used tensor parallelism for all models EXCEPT 350 :P<|||||>@patrickvonplaten then it might explain these numbers on all models except 350m ?
I remember trying a quick test on BLOOM-176b with [this hack](https://github.com/huggingface/transformers/blob/f44e2c2b6f319514d8a80eccc2dd4c480697cc13/src/transformers/models/bloom/modeling_bloom.py#L453) and without the hack and it made quite a difference quantitatively (logits exactness) but when I qualitatively compared the generation results it didn't made any difference.
@stephenroller what was the TP rank used for these models just out of curiosity?
<|||||>Also do you remember which layer did you TP-ranked?<|||||>EleutherAI has found that merging TP parallel models can be extremely non-intuitive, and it took some substantial work from @zphang to figure out how to do it while maintaining performance. You can find code for merging the TP parallel shards for our 20B model [here](https://github.com/EleutherAI/gpt-neox/blob/main/tools/merge20b.py) and I think we have some more general code as well but I would have to hunt it down to release it.<|||||>> @patrickvonplaten then it might explain these numbers on all models except 350m ? I remember trying a quick test on BLOOM-176b with [this hack](https://github.com/huggingface/transformers/blob/f44e2c2b6f319514d8a80eccc2dd4c480697cc13/src/transformers/models/bloom/modeling_bloom.py#L453) and without the hack and it made quite a difference quantitatively (logits exactness) but when I qualitatively compared the generation results it didn't made any difference. @stephenroller what was the TP rank used for these models just out of curiosity?
https://github.com/facebookresearch/metaseq/blob/cf24413b2c78ad2f293fb9ac53a74be20f087863/metaseq/launcher/opt_job_constants.py#L32-L44
The final field in each of these (except 350M, where i trained it in a different one-off pipeline using MP1)<|||||>Cool thanks! <|||||>That's very interesting! Thanks for all the pointers here @StellaAthena @stephenroller @younesbelkada - let's try out whether this fixes it :-)<|||||>Okay I think we've fixed the issue, I think it was caused because of poor conversion on our side where we missed those lines from `metaseq`: https://github.com/facebookresearch/metaseq/blob/e0c4f6b0e4c523906ad8d561f727e3f2ac3a8e73/metaseq/models/transformer.py#L466-L477
Fix is in #17785 , preliminary tests on 125m and 1B3 showed that the fix significantly reduces the ppl.
```python
>>> model_path="fixed_opt_125m"
>>> prompt="Hello my name is"
>>> log_probs_with_ppl(model_path, prompt)
Input torch.Size([1, 5])
Logits torch.Size([1, 5, 50272])
torch.return_types.max(
values=tensor([[0.2398, 0.2326, 0.3332, 0.9363, 0.0097]], grad_fn=<MaxBackward0>),
indices=tensor([[ 100, 6, 766, 16, 1236]]))
argmax probility: [[0.23982257 0.23258895 0.33315504 0.9362957 0.00967377]]
argmax log probability: [[-1.4278558 -1.4584825 -1.0991473 -0.06582398 -4.6383367 ]]
argmax tokens: I, name is j
cross entropy loss: 4.051314830780029
ppl: 57.47297286987305
```<|||||>Hi @stephenroller ! We would like to have a final check about something while merging #17785 , does all OPT models have `share_input_output_embed` set to `True` ? We know this is set to True for opt-350 but we are not sure about other models https://github.com/facebookresearch/metaseq/blob/e0c4f6b0e4c523906ad8d561f727e3f2ac3a8e73/metaseq/models/transformer.py#L486 <|||||>Yes, all models have that True.
Yay, glad you were able to fix it!<|||||>Hi,
Thanks everyone for working on this issue!
I'm not sure the issue is fully resolved. I found that the weight of the final layer norm is one-initialized, which is the default initialization in `nn.LayerNorm`. The bias seems fine, however, meaning it is not zero-initialized.
```python
>>> from transformers import OPTModel
>>> model = OPTModel.from_pretrained("facebook/opt-13b")
>>> all(model.decoder.final_layer_norm.weight == 1)
True
>>> all(model.decoder.final_layer_norm.bias == 0)
False
```
Same observation with the variants 2.7B and 6.7B. This seems unexpected. Is it possible the final layer norm weight was lost at some point during the conversion?<|||||>We observed that models 13B and larger ended up learning layer norm weights of 1; we chalked it up to (1 + epsilon) precision issues. Can you check other layer norms for the same set of models?<|||||>I checked with 2.7B which is faster to load and all layer norm weights are indeed 1:
```python
>>> from transformers import OPTModel
>>> model = OPTModel.from_pretrained("facebook/opt-2.7b")
>>> all(all(layer.final_layer_norm.weight == 1) for layer in model.decoder.layers)
True
>>> all(all(layer.self_attn_layer_norm.weight == 1) for layer in model.decoder.layers)
True
```
So this seems to be the expected value based on your comment. Thanks @stephenroller!<|||||>@patrickvonplaten IMO we should double check that, this seems like a highly unlikely thing for weights ...<|||||>This doesn't happen for 125m:
```python
>>> from transformers import OPTModel
>>> model = OPTModel.from_pretrained("facebook/opt-125m")
>>> all(all(layer.final_layer_norm.weight == 1) for layer in model.decoder.layers)
False
>>> all(all(layer.self_attn_layer_norm.weight == 1) for layer in model.decoder.layers)
False
>>> all(model.decoder.final_layer_norm.weight == 1)
False
```
So I'd expect this not to be an issue with conversion scripts of whatever .... Were you able to generate using 13b? Are the generations not as good as you expected?<|||||>I'm working on a project where we apply 8-bit quantization to the OPT linear weights. It works well for 125m where we get the same output with and without quantization, but there are unexpected repetitions with the quantized 350m and larger variants. (https://github.com/OpenNMT/CTranslate2/issues/818).
I found it odd that the models we have some issues with are precisely the models where all layer norm weights are 1 (so far I verified this to be true for 350m, 1.3b, 2.7b, 6.7b, and 13b). As you said this is generally unlikely for weights so I thought this is worth mentioning.<|||||>350m didn't have issues normally. I agree it's very unlikely, we're going to run a perplexity test to check that it is fairly close to gpt2. @stephenroller did you make it untrainable and just set it at 1?<|||||>We didn't make it untrainable. We actually spent some time digging into this and had a quite vigorous internal debate on whether this was a "problem." Again, since we trained with memory efficient fp16 with often low LRs, my believe was that the gradients were experiencing (1 + epsilon) = 1 underflow type issues; epsilon is much higher than you might expect:
```
torch>>> torch.finfo(torch.float16)
finfo(resolution=0.001, min=-65504, max=65504, eps=0.000976562, tiny=6.10352e-05, dtype=float16)
>>> torch.finfo(torch.bfloat16)
finfo(resolution=0.01, min=-3.38953e+38, max=3.38953e+38, eps=0.0078125, tiny=1.17549e-38, dtype=bfloat16)
```
However, to my knowledge, this weight=1 issue didn't crop up until 13B scale... Let me see if I have some human readable checkpoints.<|||||>To add to this, I just evaluated the perplexity on the corrected OPT checkpoints (the ones that are online on the Hub now) with the following script (slighly adapted from the original one):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os
import torch.nn.functional as F


def log_probs_with_ppl(path, prompt):
    model = AutoModelForCausalLM.from_pretrained(path)
    model.eval()
    tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        outputs = model(input_ids, labels=input_ids)
    logits = outputs.logits
    arg_probs, _ = F.softmax(logits, dim=-1).max(-1)
    print("argmax probility:", arg_probs[0].cpu().detach().numpy())
    log_probs, tokens = F.log_softmax(logits, dim=-1).max(-1)
    print("argmax log probability:", log_probs[0].cpu().detach().numpy())
    sent = tokenizer.decode(tokens.squeeze().cpu().detach().numpy(), skip_special_tokens=False)
    print("argmax tokens:", sent)
    xentropy_loss = outputs[0]
    print("cross entropy loss:", xentropy_loss.item())
    ppl = torch.exp(xentropy_loss).item()
    print("ppl:", ppl)


if __name__ == "__main__":
    prompts = "There is a book on the desk."
    for model_id in ["opt-125m", "opt-350m", "opt-1.3b", "opt-2.7b", "opt-6.7b", "opt-13b", "opt-30b"]:
        print(20 * "=" + model_id + 20 * "=")
        model_path = os.path.join("facebook", model_id)
        log_probs_with_ppl(model_path, prompts)
```
and the results look as follows (a bit weird that the 30B ppl is not lower)
```
====================opt-125m====================
argmax probility: [0.23981844 0.31434286 0.32221574 0.05977888 0.34688717 0.14583494
0.21301866 0.16133878 0.17206082]
argmax log probability: [-1.4278731 -1.157271 -1.132534 -2.817103 -1.0587558 -1.9252799
-1.5463755 -1.8242489 -1.7599072]
argmax tokens: I's a lot called the subject that
cross entropy loss: 3.867582321166992
ppl: 47.82661819458008
====================opt-350m====================
argmax probility: [0.09871461 0.2903144 0.3098579 0.05694651 0.31773096 0.24076065
0.15807256 0.12118834 0.26144707]
argmax log probability: [-2.3155224 -1.2367908 -1.1716415 -2.8656428 -1.1465503 -1.423952
-1.844701 -2.1104095 -1.3415234]
argmax tokens:
's a lot called the subject in
cross entropy loss: 3.7145543098449707
ppl: 41.04029083251953
====================opt-1.3b====================
argmax probility: [0.10844591 0.2721116 0.30265862 0.03994901 0.42767116 0.21947733
0.21639532 0.16303496 0.27209064]
argmax log probability: [-2.2215037 -1.301543 -1.1951498 -3.2201512 -0.8494007 -1.5165063
-1.5306484 -1.8137906 -1.3016201]
argmax tokens: I's a lot called the subject.
cross entropy loss: 3.5381228923797607
ppl: 34.40228271484375
====================opt-2.7b====================
argmax probility: [0.10350165 0.29980636 0.3279065 0.04002884 0.3831317 0.17393681
0.06864104 0.12390347 0.3078983 ]
argmax log probability: [-2.2681677 -1.2046186 -1.1150268 -3.2181551 -0.9593765 -1.7490631
-2.6788647 -2.0882525 -1.1779857]
argmax tokens: I's a lot called the subject in
cross entropy loss: 3.407679557800293
ppl: 30.195096969604492
====================opt-6.7b====================
argmax probility: [0.10619629 0.29815957 0.3240549 0.04175518 0.38586977 0.20265782
0.14770415 0.18028003 0.15865195]
argmax log probability: [-2.2424662 -1.2101265 -1.1268424 -3.1759317 -0.95225537 -1.5962363
-1.912544 -1.713244 -1.8410425 ]
argmax tokens: I's a lot called the subject.
cross entropy loss: 3.3324668407440186
ppl: 28.007347106933594
====================opt-13b====================
argmax probility: [0.11410075 0.24206492 0.32771447 0.04382524 0.42932945 0.19955206
0.09738682 0.18719622 0.23314796]
argmax log probability: [-2.1706734 -1.4185493 -1.1156125 -3.1275454 -0.8455307 -1.6116802
-2.3290644 -1.6755979 -1.456082 ]
argmax tokens: I is a lot called the shelf.
cross entropy loss: 3.196335792541504
ppl: 24.44280242919922
====================opt-30b====================
argmax probility: [0.05695914 0.21338376 0.29346094 0.05644348 0.14937086 0.306213
0.14233655 0.19602104 0.15788034]
argmax log probability: [-2.865421 -1.5446631 -1.2260108 -2.8745155 -1.9013231 -1.1834744
-1.949561 -1.6295333 -1.8459179]
argmax tokens: _ is a bug called the table in
cross entropy loss: 3.548802375793457
ppl: 34.77164840698242
```<|||||>30B looks kinda weird with its higher ppl than the rest...<|||||>Yeah - tried out another prompt `prompts = "In its most general sense, the term 'world' refers to the totality of entities, to the whole of reality or to everything that is."`:
```
====================opt-125m====================
argmax probility: [0.23982129 0.15632838 0.13140368 0.75963086 0.27387255 0.878215
0.3593415 0.02516824 0.29971686 0.01008662 0.16188805 0.38820833
0.94960755 0.3430672 0.10371881 0.9371656 0.35841522 0.19422102
0.10693377 0.26196504 0.09036761 0.27197453 0.3381063 0.48046958
0.20347297 0.47123003 0.20022035 0.25318944 0.08799087 0.22637637]
argmax log probability: [-1.4278613 -1.8557965 -2.0294812 -0.2749227 -1.2950925 -0.12986381
-1.0234821 -3.6821725 -1.2049171 -4.5965457 -1.8208503 -0.9462132
-0.05170649 -1.0698289 -2.2660718 -0.06489524 -1.0260631 -1.6387584
-2.2355456 -1.3395443 -2.4038694 -1.3020469 -1.0843949 -0.73299134
-1.5922221 -0.7524089 -1.6083368 -1.3736173 -2.4305222 -1.4855564 ]
argmax tokens: I the first recent form, the new "re- is to the world of the, not the extent of the. the the that is not
cross entropy loss: 3.4385933876037598
ppl: 31.14312171936035
====================opt-350m====================
argmax probility: [0.09871422 0.16178179 0.08027261 0.35097533 0.519601 0.88294435
0.1850073 0.03688832 0.36755642 0.01141822 0.14522597 0.33600938
0.9539717 0.33490822 0.17595953 0.95351464 0.3859613 0.17998883
0.12469297 0.28728586 0.07168846 0.28822082 0.35224074 0.44462964
0.22922419 0.33133236 0.25537238 0.7224368 0.147327 0.15977049]
argmax log probability: [-2.3155262 -1.8215069 -2.522327 -1.0470394 -0.65469414 -0.12449309
-1.6873599 -3.2998602 -1.0008785 -4.4725447 -1.9294643 -1.0906162
-0.04712127 -1.0938988 -1.7375013 -0.0476005 -0.95201814 -1.7148604
-2.0819008 -1.2472775 -2.6354256 -1.2440283 -1.0434405 -0.8105136
-1.4730548 -1.1046333 -1.3650324 -0.32512537 -1.9151007 -1.8340169 ]
argmax tokens:
the first recent sense, the term "s' is to the world of the, including the totality of the. to the in exists.
cross entropy loss: 2.962890148162842
ppl: 19.35382652282715
====================opt-1.3b====================
argmax probility: [0.10844579 0.15118013 0.06978295 0.6313941 0.4198188 0.8240803
0.17271882 0.16285989 0.27022356 0.01286044 0.24925965 0.28614777
0.9725371 0.33411652 0.12802885 0.9795981 0.24178138 0.15681867
0.05040615 0.4303817 0.07268634 0.3975281 0.31267446 0.45626673
0.3057922 0.5389601 0.4383189 0.4491574 0.09853235 0.14064014]
argmax log probability: [-2.221505 -1.8892832 -2.6623657 -0.45982504 -0.86793214 -0.19348735
-1.7560903 -1.814865 -1.3085057 -4.3535995 -1.3892602 -1.2512469
-0.02784706 -1.0962654 -2.0554996 -0.0206129 -1.4197214 -1.8526651
-2.987642 -0.84308285 -2.6216018 -0.92248964 -1.1625926 -0.7846777
-1.1848495 -0.61811376 -0.8248085 -0.8003819 -2.3173704 -1.9615508 ]
argmax tokens: I the latest recent sense, the term "social' is to the universe of all that or which totality of the. to the that exists.
cross entropy loss: 2.8560848236083984
ppl: 17.393295288085938
====================opt-2.7b====================
argmax probility: [0.10350163 0.15577659 0.08376333 0.5673112 0.50874865 0.8624449
0.21431658 0.15363932 0.4019405 0.01176295 0.31341943 0.22355694
0.9744133 0.4234775 0.14364612 0.9677136 0.25195396 0.16395077
0.05868356 0.2550737 0.1175658 0.33331287 0.24672066 0.46241838
0.15379134 0.5719755 0.31928268 0.480221 0.26081583 0.18780598]
argmax log probability: [-2.268168 -1.8593324 -2.47976 -0.5668472 -0.6758013 -0.14798404
-1.5403011 -1.8731475 -0.9114513 -4.442801 -1.1602129 -1.4980892
-0.02591975 -0.8592549 -1.9404025 -0.03281909 -1.3785089 -1.808189
-2.8355956 -1.3662028 -2.140757 -1.0986737 -1.3994985 -0.7712852
-1.8721585 -0.5586591 -1.1416783 -0.7335089 -1.3439407 -1.6723459 ]
argmax tokens: I the first recent sense, a term �social' is to the universe of all that including which universe of the. existence the that exists. In
cross entropy loss: 2.7184252738952637
ppl: 15.1564359664917
====================opt-6.7b====================
argmax probility: [0.1061962 0.15520018 0.08349195 0.41061476 0.4894317 0.85579187
0.19358145 0.16583215 0.3086383 0.0087592 0.25194162 0.22671384
0.9799279 0.39524195 0.22661391 0.958905 0.28948593 0.23172891
0.08076624 0.20174423 0.24156563 0.29045665 0.19864736 0.5106398
0.2697726 0.5609671 0.46792567 0.72609276 0.34715182 0.18653922]
argmax log probability: [-2.242467 -1.8630395 -2.483005 -0.8900999 -0.7145103 -0.15572807
-1.642057 -1.7967792 -1.1755853 -4.737651 -1.3785579 -1.4840667
-0.0202763 -0.92825717 -1.4845076 -0.04196331 -1.2396486 -1.462187
-2.5161963 -1.6007546 -1.4206141 -1.236301 -1.616224 -0.6720909
-1.3101759 -0.578093 -0.75944585 -0.32007748 -1.057993 -1.6791137 ]
argmax tokens: I the first recent sense, the term �f' refers to the totality of all that including the totality of the. existence the that exists. In
cross entropy loss: 2.5715460777282715
ppl: 13.086040496826172
====================opt-13b====================
argmax probility: [0.11410033 0.18445235 0.07394046 0.3377592 0.4820613 0.89032376
0.26655433 0.22948444 0.3482319 0.00974461 0.36039528 0.21514857
0.9776878 0.32965162 0.12019254 0.9643266 0.2866262 0.18254285
0.09594422 0.30989775 0.1353004 0.32887647 0.18270463 0.42996788
0.2804617 0.59530467 0.37093756 0.7375983 0.372651 0.17102638]
argmax log probability: [-2.1706772 -1.6903641 -2.604495 -1.085422 -0.729684 -0.11617013
-1.3221772 -1.4719201 -1.0548866 -4.631041 -1.0205538 -1.5364265
-0.02256491 -1.1097189 -2.1186602 -0.03632521 -1.2495763 -1.7007704
-2.3439884 -1.1715128 -2.000258 -1.1120731 -1.6998845 -0.84404474
-1.2713182 -0.518682 -0.9917216 -0.30435592 -0.98711294 -1.7659374 ]
argmax tokens: I the current basic sense, a term �b' refers to the entire of all that both which totality of the. existence the that exists. In
cross entropy loss: 2.5906801223754883
ppl: 13.33884048461914
====================opt-30b====================
argmax probility: [0.05695887 0.12192544 0.118644 0.28989854 0.61544293 0.8780987
0.21866173 0.10945266 0.18478885 0.01414752 0.61560327 0.3003071
0.9654389 0.39148605 0.2840576 0.96917915 0.23066148 0.18086651
0.13592616 0.27291402 0.18672818 0.301525 0.23815584 0.5115922
0.41115332 0.5572077 0.40895483 0.7513009 0.50103414 0.13948028]
argmax log probability: [-2.865426 -2.1043456 -2.1316278 -1.2382243 -0.48541304 -0.12999624
-1.5202293 -2.212263 -1.6885414 -4.258216 -0.48515254 -1.2029496
-0.03517244 -0.9378054 -1.2585783 -0.0313058 -1.4668041 -1.7099961
-1.9956435 -1.2985984 -1.6781013 -1.1989024 -1.4348301 -0.67022747
-0.8887891 -0.58481723 -0.8941506 -0.28594905 -0.691081 -1.9698321 ]
argmax tokens: _format constructor basic form, the term "in' refers to the totality of all that events which totality of the. existence the that exists. In
cross entropy loss: 2.6525726318359375
ppl: 14.190498352050781
```
30B still a bit higher. But note that all those are also run on CPU (currently don't have access to a big GPU)<|||||>(sorry - there was one final problem with opt-30b, there was a typo with the begin token for 30B see: https://huggingface.co/facebook/opt-30b/commit/a7fad6ce41655b751249f8801cc3a24ede359d31).
Updated results make more sense IMO:
For `"There is a book on the desk."`
```
argmax probility: [0.10173301 0.2815346 0.34502357 0.04142236 0.4594141 0.24795197
0.13434087 0.19007264 0.17950189]
argmax log probability: [-2.2854035 -1.2674999 -1.0641425 -3.1839345 -0.77780336 -1.3945202
-2.0073748 -1.660349 -1.7175696 ]
argmax tokens: I's a lot called this subject in
cross entropy loss: 3.267496109008789
ppl: 26.245540618896484
```
For `"In its most general sense, the term 'world' refers to the totality of entities, to the whole of reality or to everything that is."`
```
====================opt-30b====================
argmax probility: [0.10173287 0.19473663 0.07857301 0.41664603 0.40225816 0.9184978
0.24668908 0.32442567 0.34429762 0.01023421 0.26163322 0.21400526
0.96574026 0.44348246 0.1901864 0.9594519 0.1832267 0.18785793
0.11864074 0.35639173 0.21181443 0.32442087 0.2604604 0.51606554
0.4302902 0.56918555 0.3640891 0.7693961 0.49747008 0.15007249]
argmax log probability: [-2.2854047 -1.6361073 -2.543727 -0.87551826 -0.9106612 -0.0850158
-1.3996265 -1.1256989 -1.0662489 -4.5820193 -1.3408117 -1.5417547
-0.03486039 -0.81309706 -1.6597506 -0.04139308 -1.6970311 -1.6720693
-2.1316555 -1.0317248 -1.5520447 -1.1257137 -1.3453044 -0.66152155
-0.84329545 -0.56354874 -1.0103567 -0.26214936 -0.6982199 -1.8966368 ]
argmax tokens: I the current basic sense, a present �b' refers to the totality of all that events which totality of the. existence the that exists. In
cross entropy loss: 2.567533493041992
ppl: 13.033637046813965
```<|||||>Awesome @patrickvonplaten ! So we assume the conversion was correct in that sense I guess.<|||||>OPT-66B is up as well. Ran the above prompts only on GPU with fp16 so slightly less precision.
For `"There is a book on the desk."`:
```
====================opt-66b====================
argmax probility: [0.118 0.287 0.3286 0.04562 0.39 0.2258 0.0776 0.1586 0.177 ]
argmax log probability: [-2.137 -1.248 -1.113 -3.088 -0.942 -1.488 -2.557 -1.842 -1.731]
argmax tokens: I's a lot called this subject in
cross entropy loss: 3.240234375
ppl: 25.546875
```
For `"In its most general sense, the term 'world' refers to the totality of entities, to the whole of reality or to everything that is."`
```
====================opt-66b====================
argmax probility: [0.11804 0.1587 0.09045 0.379 0.3762 0.9214 0.2576 0.421 0.3442
0.012 0.2072 0.2379 0.982 0.4968 0.155 0.9736 0.1715 0.1682
0.0908 0.3071 0.299 0.538 0.4558 0.5166 0.8755 0.526 0.736
0.919 0.408 0.2372 ]
argmax log probability: [-2.137 -1.841 -2.402 -0.9707 -0.9775 -0.0817 -1.356 -0.865
-1.066 -4.42 -1.574 -1.436 -0.01802 -0.699 -1.864 -0.02663
-1.763 -1.782 -2.398 -1.181 -1.207 -0.62 -0.7856 -0.6606
-0.1332 -0.6426 -0.307 -0.0847 -0.8965 -1.438 ]
argmax tokens: I the first recent sense, a term �f' is to the totality of all, events which totality of reality. existence the that exists. In
cross entropy loss: 2.630859375
ppl: 13.8828125
```<|||||>That surprises me less. The 66B was actually harder to train than the 175B, weirdly. |
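As a side note on the tensor-parallel merging discussed earlier in this thread: with the usual Megatron-style layout, merging mostly amounts to concatenating each sharded weight along the dimension it was split on, while replicated tensors (e.g. layer norms) should be identical on every rank. The sketch below only illustrates that convention; the actual per-parameter layout of metaseq/OPT checkpoints may differ, so treat it as a rough illustration rather than a working converter.

```python
import torch


def merge_tp_shards(shards, kind):
    """Merge one parameter's tensor-parallel shards (one tensor per TP rank)."""
    if kind == "column":
        # output dimension was sharded (e.g. q/k/v projections, first MLP matmul)
        return torch.cat(shards, dim=0)
    if kind == "row":
        # input dimension was sharded (e.g. attention output, second MLP matmul)
        return torch.cat(shards, dim=1)
    # replicated parameters (layer norms, some biases) should match on every rank
    for s in shards[1:]:
        assert torch.equal(shards[0], s), "replicated shards differ across ranks"
    return shards[0]


# toy example with TP=2 and a 4x4 "full" weight
full = torch.arange(16.0).reshape(4, 4)
col_shards = list(full.chunk(2, dim=0))
row_shards = list(full.chunk(2, dim=1))
assert torch.equal(merge_tp_shards(col_shards, "column"), full)
assert torch.equal(merge_tp_shards(row_shards, "row"), full)
```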
transformers | 17,652 | closed | Use the new hub ci | Uses the new CI environment for tests, which uses Gitaly and is automatically updated to the latest version of the Hub. | 06-10-2022 08:51:28 | 06-10-2022 08:51:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17652). All of your documentation changes will be reflected on that endpoint.<|||||>Yay!<|||||>> Yay!
Blocked by https://github.com/huggingface/huggingface_hub/pull/898 :)<|||||>Done in #17716, @XciD lost a contribution to the codebase =) =) cc @sgugger @LysandreJik <|||||>If you don't ping us for reviews, your PR doesn't exist :-p <|||||>> If you don't ping us for reviews, your PR doesn't exist :-p
Yes, my bad, I was waiting for another one before pinging <|||||>No worries, was just joking around why neither Lysandre nor I had seen it :) |
transformers | 17,651 | closed | 🐛 Properly raise `RepoNotFoundError` when not authenticated | # What does this PR do?
Reverts #17646
If the user is not authenticated, the Hub now returns a 401 error if a repo is not found (either because it does not exist or it is private).
This PR catches that and properly raises `RepoNotFoundError` in `get_from_cache` to be consistent with the previous behaviour.
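As an illustration of the general pattern (a simplified sketch, not the actual `get_from_cache` code; `RepoNotFoundError` and the helper below are stand-ins): the idea is to surface both a 404 and, for unauthenticated calls, a 401 from the Hub as the same repo-not-found error.

```python
from typing import Optional

import requests


class RepoNotFoundError(Exception):
    """Stand-in for the library's repo-not-found error."""


def check_repo_exists(repo_id: str, token: Optional[str] = None) -> None:
    # hypothetical helper hitting a Hub-style endpoint
    url = f"https://huggingface.co/api/models/{repo_id}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.head(url, headers=headers, timeout=10)
    if response.status_code in (401, 404):
        # 404: the repo does not exist; 401: unauthenticated request against a
        # private or missing repo - both are reported as "not found"
        raise RepoNotFoundError(f"Repository '{repo_id}' not found (or gated/private).")
    response.raise_for_status()
```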
| 06-10-2022 08:47:40 | 06-10-2022 08:47:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> (Style changes in research_project should go in another PR normally, but why not).
Sorry, these changes were applied automatically by `make style`.
I can remove them if that's better 🙂
<|||||>Merging since the CI failure seems unrelated |
transformers | 17,650 | closed | Rag end2end new | # What does this PR do?
I revamped the **[rag-end2end-retriever](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag-end2end-retriever)** to be compatible with the latest versions of PL and RAY. This was requested by @aaronmueller.
@patrickvonplaten | 06-10-2022 08:29:39 | 06-10-2022 08:29:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Ok for me! Could we maybe though delete the commented out code?
Thanks a lot for the update. I did remove unwanted comments. Please let me know if I need to do anything more. |
transformers | 17,649 | closed | UnboundLocalError when running run_glue.py | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
### Who can help?
@LysandreJik @sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Run `run_glue.py` for `snli` dataset:
```bash
python run_glue.py \
--model_name_or_path bert-base-uncased \
--dataset_name snli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 128 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir ./bert_base_uncased_snli_53 \
--save_steps 1500 \
--seed 53
```
2. The following error is raised:
```
0%| | 0/6450 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/myuser/transformers/examples/pytorch/text-classification/run_glue.py", line 617, in <module>
main()
File "/home/creativegan/kerem/download/download/finetuning/transformers/examples/pytorch/text-classification/run_glue.py", line 529, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/trainer.py", line 1367, in train
return inner_training_loop(
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/trainer.py", line 1609, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/trainer.py", line 2300, in training_step
loss = self.compute_loss(model, inputs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/trainer.py", line 2332, in compute_loss
outputs = model(**inputs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
raise exception
UnboundLocalError: Caught UnboundLocalError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1556, in forward
outputs = self.bert(
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 991, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/modeling_utils.py", line 836, in get_extended_attention_mask
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/modeling_utils.py", line 729, in dtype
return get_parameter_dtype(self)
File "/home/myuser/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/modeling_utils.py", line 163, in get_parameter_dtype
return t.dtype
UnboundLocalError: local variable 't' referenced before assignment
```
### Expected behavior
```shell
Normally, it should continue training, but in `modeling_utils.py` it raises the error at the line after the `else` statement.
def get_parameter_dtype(parameter: Union[nn.Module, GenerationMixin, "ModuleUtilsMixin"]):
    """
    Returns the first found floating dtype in parameters if there is one, otherwise returns the last dtype it found.
    """
    try:
        for t in parameter.parameters():
            if t.is_floating_point():
                return t.dtype
        # if no floating dtype was found return whatever the first dtype is
        else:
            return t.dtype
```
```
| 06-10-2022 08:22:11 | 06-10-2022 08:22:11 | I can reproduce, will try to fix this morning! |
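For readers of this issue, the error above can be reproduced outside of `transformers` with a few lines that mimic the quoted `for`/`else` shape: the `else` branch runs after the loop finishes, so `t` is unbound whenever the parameter iterable is empty. This is just an illustration of the Python behaviour, not the library code.

```python
import torch


def get_parameter_dtype_like(params):
    # same for/else shape as the function quoted above
    for t in params:
        if t.is_floating_point():
            return t.dtype
    else:
        # runs when the loop finishes without break; `t` is unbound here if the
        # iterable yielded nothing, which is exactly the reported error
        return t.dtype


print(get_parameter_dtype_like([torch.ones(1)]))  # torch.float32

try:
    get_parameter_dtype_like([])  # empty, like an empty parameters() generator
except UnboundLocalError as err:
    print(err)  # local variable 't' referenced before assignment
```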
transformers | 17,648 | closed | Rm hardcoded style fromdoc expert acc imgs | Having a hardcoded style of `width:600px` is overriding hf.co doc styles for images, which is causing an overflowing problem on mobile
| before | after |
|--------|-------|
| <img width="400" alt="Screenshot 2022-06-10 at 09 53 20" src="https://user-images.githubusercontent.com/11827707/173018283-a1d5e103-4bd6-4814-af25-f0d0fc9f3888.png"> | <img width="400" alt="Screenshot 2022-06-10 at 09 53 12" src="https://user-images.githubusercontent.com/11827707/173018274-2227571a-702a-4c82-b103-3a5a1a4ae029.png"> | | 06-10-2022 07:52:48 | 06-10-2022 07:52:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17648). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,647 | closed | Revert "Skip tests relying on RepoNotFound errors until bug is fixed" | Reverts huggingface/transformers#17646
Tests will fail for now, but one day, a hero will fix them and we can click the "Relaunch workflow from failed" button to see them passing, and we will be able to merge this PR. | 06-10-2022 01:33:43 | 06-10-2022 01:33:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17647). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,646 | closed | Skip tests relying on RepoNotFound errors until bug is fixed | # What does this PR do?
There is a (hopefully temporary) bug on the Hub right now where the `RepoNotFoundError` is not returned when someone tries to hit a wrong repo. This PR skips all the tests relying on it until the bug is fixed, so the main branch stays green and community contributors don't start getting strange errors.
Will merge as soon as it's green. | 06-10-2022 01:20:13 | 06-10-2022 01:20:13 | _The documentation is not available anymore as the PR was closed or merged._ |
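For context, "skipping" the affected tests mechanically just means decorating them, e.g. with `unittest.skip`; the sketch below is generic (the class, test names and reason are made up, not the actual tests touched here):

```python
import unittest


class WrongRepoTests(unittest.TestCase):
    @unittest.skip("Hub temporarily does not return RepoNotFoundError for missing repos")
    def test_missing_repo_raises(self):
        ...  # body is not executed while the skip decorator is in place

    def test_something_unrelated(self):
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()
```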
transformers | 17,645 | closed | Bump cookiecutter from 1.7.2 to 2.1.1 in /examples/research_projects/decision_transformer | Bumps [cookiecutter](https://github.com/cookiecutter/cookiecutter) from 1.7.2 to 2.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/cookiecutter/cookiecutter/releases">cookiecutter's releases</a>.</em></p>
<blockquote>
<h2>2.1.1</h2>
<h2>Documentation updates</h2>
<ul>
<li>Fix local extensions documentation (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1686">#1686</a>) <a href="https://github.com/alkatar21"><code>@alkatar21</code></a></li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>Sanitize Mercurial branch information before checkout. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1689">#1689</a>) <a href="https://github.com/ericof"><code>@ericof</code></a></li>
</ul>
<h2>This release is made by wonderful contributors:</h2>
<p><a href="https://github.com/alkatar21"><code>@alkatar21</code></a>, <a href="https://github.com/ericof"><code>@ericof</code></a> and <a href="https://github.com/jensens"><code>@jensens</code></a></p>
<h2>2.1.0</h2>
<h2>Preamble</h2>
<p>This release log lists all changes from 1.7.3 to this release.
It includes the log of the 2.0.x releases, which were never published on PyPI.
Because of that it might look a bit blurry.</p>
<p>We release the current stable state of the project, knowing there are a bunch of open pull requests.
Those will be reviewed by the core-committers and merged or dropped.</p>
<p>Future releases will happen more frequently. Stay tuned.</p>
<p>Fetch fresh from PyPI <a href="https://pypi.org/project/cookiecutter/2.1.0/">https://pypi.org/project/cookiecutter/2.1.0/</a></p>
<h2>Changes</h2>
<ul>
<li>Move contributors and backers to credits section (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1599">#1599</a>) <a href="https://github.com/doobrie"><code>@doobrie</code></a></li>
<li>test_generate_file_verbose_template_syntax_error fixed (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1671">#1671</a>) <a href="https://github.com/MaciejPatro"><code>@MaciejPatro</code></a></li>
<li>Removed changes related to setuptools_scm (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1629">#1629</a>) <a href="https://github.com/ozer550"><code>@ozer550</code></a></li>
<li>Release 2.0.1 (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1620">#1620</a>) <a href="https://github.com/audreyfeldroy"><code>@audreyfeldroy</code></a></li>
</ul>
<h2>Breaking Changes</h2>
<ul>
<li>Release preparation for 2.0.1rc1 (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1608">#1608</a>) <a href="https://github.com/audreyfeldroy"><code>@audreyfeldroy</code></a></li>
<li>Replace poyo with pyyaml. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1489">#1489</a>) <a href="https://github.com/dHannasch"><code>@dHannasch</code></a></li>
<li>Added: Path templates will be rendered when copy_without_render used (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/839">#839</a>) <a href="https://github.com/noirbizarre"><code>@noirbizarre</code></a></li>
<li>Added: End of line detection and configuration. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1407">#1407</a>) <a href="https://github.com/insspb"><code>@insspb</code></a></li>
<li>Remove support for python2.7 (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1386">#1386</a>) <a href="https://github.com/ssbarnea"><code>@ssbarnea</code></a></li>
</ul>
<h2>Minor Changes</h2>
<ul>
<li>Documentation overhaul (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1677">#1677</a>) <a href="https://github.com/jensens"><code>@jensens</code></a></li>
<li>Feature/local extensions (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1240">#1240</a>) <a href="https://github.com/mwesterhof"><code>@mwesterhof</code></a></li>
<li>Adopt setuptools-scm packaging (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1577">#1577</a>) <a href="https://github.com/ssbarnea"><code>@ssbarnea</code></a></li>
<li>Log the error message when git clone fails, not just the return code (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1505">#1505</a>) <a href="https://github.com/logworthy"><code>@logworthy</code></a></li>
<li>allow jinja 3.0.0 (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1548">#1548</a>) <a href="https://github.com/wouterdb"><code>@wouterdb</code></a></li>
<li>Added uuid extension to be able to generate uuids (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1493">#1493</a>) <a href="https://github.com/jonaswre"><code>@jonaswre</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/cookiecutter/cookiecutter/blob/master/HISTORY.md">cookiecutter's changelog</a>.</em></p>
<blockquote>
<h2>2.1.1 (2022-06-01)</h2>
<h3>Documentation updates</h3>
<ul>
<li>Fix local extensions documentation (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1686">#1686</a>) <a href="https://github.com/alkatar21"><code>@alkatar21</code></a></li>
</ul>
<h3>Bugfixes</h3>
<ul>
<li>Sanitize Mercurial branch information before checkout. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1689">#1689</a>) <a href="https://github.com/ericof"><code>@ericof</code></a></li>
</ul>
<h3>This release is made by wonderfull contributors:</h3>
<p><a href="https://github.com/alkatar21"><code>@alkatar21</code></a>, <a href="https://github.com/ericof"><code>@ericof</code></a> and <a href="https://github.com/jensens"><code>@jensens</code></a></p>
<h2>2.1.0 (2022-05-30)</h2>
<h3>Changes</h3>
<ul>
<li>Move contributors and backers to credits section (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1599">#1599</a>) <a href="https://github.com/doobrie"><code>@doobrie</code></a></li>
<li>test_generate_file_verbose_template_syntax_error fixed (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1671">#1671</a>) <a href="https://github.com/MaciejPatro"><code>@MaciejPatro</code></a></li>
<li>Removed changes related to setuptools_scm (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1629">#1629</a>) <a href="https://github.com/ozer550"><code>@ozer550</code></a></li>
<li>Feature/local extensions (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1240">#1240</a>) <a href="https://github.com/mwesterhof"><code>@mwesterhof</code></a></li>
</ul>
<h3>CI/CD and QA changes</h3>
<ul>
<li>Check manifest: pre-commit, fixes, cleaning (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1683">#1683</a>) <a href="https://github.com/jensens"><code>@jensens</code></a></li>
<li>Follow PyPA guide to release package using GitHub Actions. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1682">#1682</a>) <a href="https://github.com/ericof"><code>@ericof</code></a></li>
</ul>
<h3>Documentation updates</h3>
<ul>
<li>Fix typo in dict_variables.rst (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1680">#1680</a>) <a href="https://github.com/ericof"><code>@ericof</code></a></li>
<li>Documentation overhaul (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1677">#1677</a>) <a href="https://github.com/jensens"><code>@jensens</code></a></li>
<li>Fixed incorrect link on docs. (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1649">#1649</a>) <a href="https://github.com/luzfcb"><code>@luzfcb</code></a></li>
</ul>
<h3>Bugfixes</h3>
<ul>
<li>Restore accidentally deleted support for click 8.x (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1643">#1643</a>) <a href="https://github.com/jaklan"><code>@jaklan</code></a></li>
</ul>
<h3>This release was made possible by our wonderful contributors:</h3>
<p><a href="https://github.com/doobrie"><code>@doobrie</code></a>, <a href="https://github.com/jensens"><code>@jensens</code></a>, <a href="https://github.com/ericof"><code>@ericof</code></a>, <a href="https://github.com/luzfcb"><code>@luzfcb</code></a></p>
<h2>2.0.2 (2021-12-27)</h2>
<p><em>Remark: This release never made it to official PyPI</em></p>
<ul>
<li>Fix Python version number in cookiecutter --version and test on Python 3.10 (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1621">#1621</a>) <a href="https://github.com/ozer550"><code>@ozer550</code></a></li>
<li>Removed changes related to setuptools_scm (<a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1629">#1629</a>) <a href="https://github.com/audreyfeldroy"><code>@audreyfeldroy</code></a> <a href="https://github.com/ozer550"><code>@ozer550</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/f9376a96097086476ce9eb0b93297a471ae520e0"><code>f9376a9</code></a> Prepare release 2.1.1</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/fdffddb31fd2b46344dfa317531ff155e7999f77"><code>fdffddb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1689">#1689</a> from cookiecutter/sanitize-mercurial-checkout</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/85a7884f11a5200535706a6c5d31a9acbdadae1a"><code>85a7884</code></a> Lint fixes</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/e26c46582cd9033dcea318f1c29a1f06fb74f456"><code>e26c465</code></a> Sanitize Mercurial branch information before checkout.</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/94036d0324d09cd6a4eb5e2a5707062c1e409cd1"><code>94036d0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1687">#1687</a> from cookiecutter/bump-version-back-to-dev</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/70b2ee2a3521ea71634269e72f3d3f701c51cb7d"><code>70b2ee2</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1686">#1686</a> from alkatar21/patch-1</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/8b33e96c94ac75277e8f67cc1a71d90f488b5edb"><code>8b33e96</code></a> Bump version to 2.1.1.dev0</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/58d716f51fda78ec793975eea5876691aa576b2c"><code>58d716f</code></a> [Docs] Fix local extensions documentation</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/f601b710324fd9d0255e790121dba8f74cb6e423"><code>f601b71</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/cookiecutter/cookiecutter/issues/1684">#1684</a> from cookiecutter/bump-release-2.1.0</li>
<li><a href="https://github.com/cookiecutter/cookiecutter/commit/96c68260eac572505f33381e627ad42b61aef357"><code>96c6826</code></a> bump version and edit historie</li>
<li>Additional commits viewable in <a href="https://github.com/cookiecutter/cookiecutter/compare/1.7.2...2.1.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
| 06-10-2022 00:34:53 | 06-10-2022 00:34:53 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 17,644 | closed | [utils] torch.cpu didn't exist in pt-1.9 | PR https://github.com/huggingface/transformers/pull/17138 introduced another issue: `torch.cpu` didn't exist till pt-1.10 | 06-09-2022 22:27:28 | 06-09-2022 22:27:28 | Thank you @stas00!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>the CI errors seem to be related to network failure, so ignoring these and merging. |
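For context, a minimal sketch of the kind of version guard such a fix implies (illustrative only, not the actual patch; the `1.10` cutoff comes from the issue text and the guarded call is an assumption):
```python
import contextlib

import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("1.10"):
    # torch.cpu (and CPU autocast) only exists from PyTorch 1.10 onwards.
    autocast_ctx = torch.cpu.amp.autocast(dtype=torch.bfloat16)
else:
    # On older PyTorch, fall back to a no-op context manager.
    autocast_ctx = contextlib.nullcontext()

with autocast_ctx:
    pass  # run the CPU forward pass here
```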
transformers | 17,643 | closed | fix typo from emtpy to empty on decoding error messages | # What does this PR do?
I am working with constrained generation a lot and I keep seeing this error typo.
This PR fixes it in order for the error message to be a little friendlier.
cc @patrickvonplaten
| 06-09-2022 21:56:45 | 06-09-2022 21:56:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The test failures don't appear to be related to my PR BTW - I ran all the lints in the CONTRIBUTING guide...<|||||>test failures are unrelated |
transformers | 17,642 | closed | Italian translation of run_scripts.mdx gh-17459 | # What does this PR do?
Italian translation of run_scripts.mdx
See issue: gh-17459
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-09-2022 21:43:57 | 06-09-2022 21:43:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, @nickprock 😉.
I fixed all the wrong parts.<|||||>Hi @lorenzobalzani!! Your translation looks perfect to me! I'm just asking if you could change "assicurati di esserti autenticato su Hugging Face" to be more gender-neutral, e.g. to: "assicurati di aver effettuato l'accesso su Hugging Face". Thank you very much for your contribution 🚀🚀<|||||>@mfumanelli thanks for the tips ;) Next time I'll be more careful!<|||||>Thanks @lorenzobalzani 🌈🌈 Look perfect to me @omarespejel 🚀<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@mfumanelli why has the issue been closed without merging? |
transformers | 17,641 | closed | Documentation: RemBERT fixes | Hi,
this PR introduces some fixes for the RemBERT documentation:
Python Codeblock fix for the configuration class. It is currently broken:

And now fixed with this PR:

See [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17641/en/model_doc/rembert#transformers.RemBertConfig.example) for live demo.
Additionally, the model identifier `rembert` is not working, because the model is stored under the `google` namespace. This fix was done by passing `google/rembert` to the `checkpoint` argument of the `add_code_sample_docstrings` decorator. This was reported in #17147.
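As a rough illustration of that change (a sketch only; the real decorator call also passes a processor class, an output type and a config class, which are omitted here):
```python
from transformers.utils import add_code_sample_docstrings

# "rembert" alone does not resolve on the Hub; the full id does.
doc_decorator = add_code_sample_docstrings(checkpoint="google/rembert")
```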
Old [model identifier example](https://huggingface.co/docs/transformers/v4.19.3/en/model_doc/rembert#transformers.RemBertModel.forward.example) vs. [new working model identifier example](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17641/en/model_doc/rembert#transformers.RemBertModel.forward.example). | 06-09-2022 19:29:43 | 06-09-2022 19:29:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good catch!<|||||>Thanks for fixing @stefan-it |
transformers | 17,640 | closed | Add Italian translation of create_model.mdx and serialization.mdx | # What does this PR do?
Italian translation of create_model.mdx and serialization.mdx
Issues : #17459
<!--
Italian translation of create_model.mdx
Italian translation of serialization.mdx
Issues : #17459
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
This is my first issue, so feel free to make suggestions to help me improve next time, or to point out any problem with my translation and commits.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli
| 06-09-2022 19:13:37 | 06-09-2022 19:13:37 | Hi @F02934 ❤️ Thanks for your contribution! The reason why the code does not pass the tests is related to the fact that the file you create with the translation must be called exactly by the name in the toctree under "local:", in this case in local it says create_a_model while your file is create_model.
While I'm at it, I'd like to ask you if you can also translate the part in the _toctree called "title" for example with: "Crea un'architettura personalizzata". I will now take a moment to look at the rest of the PR, thank you 🌈🌈🌈<|||||>Alright I will correct it!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@mfumanelli Done!:)<|||||>Hi @F02934, I have just had a look at your file, I ask you if you can edit these things:
- "Crea un architettura personalizzata" --> "Crea un'architettura personalizzata"
- "chiunque sia interessato nel studiare" --> "qualunque persona sia interessata nel studiare" (to keep the text more gender-neutral)
- "Creare un architettura modello" --> "Creare un'architettura per un modello"
- "Creare un tokenizer per testo lento e veloce" --> "Creare un tokenizer lento e veloce per il testo"
- "Una configuration si riferisce agli" --> "Una configurazione si riferisce agli"
- "Dai un'occhiata piu vicino a" --> "Dai un'occhiata più da vicino a"
- "creando uno spazion per sperimentare" --> "creando uno spazio per sperimentare"
- "Utilizzare tasso di drop out più elevato per le probalità di attenzione con il parametro attention_dropout" --> "Utilizzare un tasso di dropout più elevato per le probabilità di attention con il parametro attention_dropout"
- "la configurazione del modello ti soddisfa, lo puoi salvare con" --> "la configurazione del modello ti soddisfa, la puoi salvare con"
- "Il file della tua configurazione e memorizzato" --> "Il file della tua configurazione è memorizzato"
- "Il prossimo passo e di creare model." --> "Il prossimo passo è creare un modello."
- "sono usati per definire l'archittetura" --> "sono usati per definire l'architettura"
- "soppressione dei self-attention heads" --> "soppressione delle self-attention heads"
- "Questo significa che i modelli sono compatibili con ciascuno dei rispettivi utilizzi del framework" --> "Ciò significa che i modelli sono compatibili con l'uso di ciascun framework."
- "la configurazione del modello predefinito è automaticamente caricato" --> "la configurazione del modello predefinito è automaticamente caricata"
- "puoi ancora sostituire - alcuni o tutti - gli attributi di configurazion del modello predefinito con i tuoi se lo desideri:" --> "puoi ancora sostituire gli attributi - alcuni o tutti - di configurazione del modello predefinito con i tuoi se lo desideri:"
(these last corrections above for both pytorch and tf)
- edit this everywhere "modello head" with "model head"
- "hai un modello DistilBERT base in cui output sono gli hidden states" --> "hai un modello DistilBERT base i cui output sono gli hidden states"
- "DistilBERT base con la sequenza di classificazione head" --> "DistilBERT base con una testa di classificazione per sequenze"
- edit this everywhere "sequenza di classificazione head" with "testa di classificazione per sequenze"
- "per un altra attività" --> "per un'altra attività"
- "ad un modello head diverso" --> "ad una model head differente"
- "Il head di risposta alle domande" --> "La head per compiti di question answering"
- "un occhiata a questo table" --> "un'occhiata a questa tabella"
- "tuo file vocabulario:" --> "tuo file vocabolario:"
- "Per impostazione predefinita" --> "Per l'impostazione predefinita"
thank you very much! 🚀🚀<|||||>@mfumanelli of course I will look into it<|||||>hello @mfumanelli I edited the things you asked, you can check if everything is alright. I also made a mistake and commited here another translation I was doing but it's small (I added the serialization on _toctree.yml and mistakenly commited here). What should I do?<|||||>Hi @F02934! You can simply remove the file from your repo locally and re-commit and push. Or if you prefer, add "WIP" to the PR title and also add to the PR description that you are translating the serialization file 😊<|||||>Alright I will remove the file from my local repo and re-commit push<|||||>@mfumanelli everything good know! You can check if there is any problems<|||||>Hello @mfumanelli I finished another translation (serialization.mdx) But I don't know why it keep pushing to this pull request.
<|||||>I changed the PR describe that I also translated serialization.mdx here<|||||>Hi @F02934!! I was just about to reply to you! Your PR keeps updating because you are working on the main of the forked repo. If you create a branch for each file you translate, this doesn't happen. I'll also look at the new file later, sorry I didn't reply sooner 😊<|||||>@mfumanelli ok sorry I'm still new thank you for answering. So I need to create another branch each time I'm working on a new translation rigth?<|||||>Don't worry @F02934 ❤️ Yes, that's right, then you delete the branch every time they merge your PR (there's actually an option to delete it once they merge it)<|||||>@mfumanelli hi , just wondering if you forgot about this pr 🧑💻<|||||>Hi @F02934, no! It will be approved, there are several PRs in the stack, don't worry ❤️<|||||>cc @omarespejel <|||||>> cc @omarespejel
Hi what does it mean?<|||||>Hi @F02934 could you solve conflicts, please?
I think they are in the _toctree.yml because some PRs have been merged and the file has updated.
> > cc @omarespejel
>
> Hi what does it mean?
@omarespejel and @sgugger are the reviewers with write access to the repo.
|
transformers | 17,639 | closed | Add `BloomForSequenceClassification` and `BloomForTokenClassification` classes | # What does this PR do?
This PR adds 2 new classes for the BLOOM model with sequence classification and token classification heads. Mentioned briefly by me in PR #17474 .
We are planning to use the smaller BLOOM models for these tasks downstream in the Bigscience Multilingual Modeling WG, so we need these classes implemented to do so.
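A rough usage sketch for the new heads (illustrative; `bigscience/bloom-350m` and the label count are placeholders, and the classification head weights will be freshly initialized):
```python
from transformers import AutoTokenizer, BloomForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-350m")
model = BloomForTokenClassification.from_pretrained("bigscience/bloom-350m", num_labels=9)

inputs = tokenizer("Hello from the BigScience workshop", return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
```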
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). -> NO
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -> TODO
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
_Will tag patrickvonplaten and sgugger when ready--still need to write tests first!_
Let me know if anything is wrong with this PR! | 06-09-2022 18:51:09 | 06-09-2022 18:51:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten @sgugger
This PR should be ready for review.
There are tests failing, but I am not sure why--I think I've looked at all the failing ones, and their failure seems to be unrelated to anything I changed (ie. image segmentation tests and tokenization tests fail, and text generation pipeline `test_small_model_pt` seems to fail because "sshleifer/tiny-ctrl" cannot be downloaded, none of which relate to files I touched in this PR afaik.) Hopefully I'm not mistaken on this end!<|||||>Hi @haileyschoelkopf , I have managed to fix some tests that fails on this PR, can I directly push to this PR in case the tests still fail on your side?<|||||>Ah thank you @younesbelkada , you got to fixing the tests before I had time to do it!
I think that you can push to this PR already (you're marked as a maintainer right?) so feel free to do so :)<|||||>Perfect thanks! Just pushed my commit <|||||>After a minor change to return type of `BloomForTokenClassification` , all tests pass!
This should be ready to merge now :) <|||||>Comments should be resolved now!<|||||>Waiting for the last lights to be green 🟢 and we'll merge ! <|||||>Hi @haileyschoelkopf ,
thanks for adding this! Do you have any recommendations for a good hyper-param configuration (batch size, learning rate) when doing NER? I tried CoNLL-2003 with the `bigscience/bloom-350m` checkpoint, but results are around 67% on test set, which is very bad. (I tried epoch = 10, learning rate = 5e-06 and batch size = 4, that was working well with XLM-R Large and it took 114 minutes on a V100 using fp16).<|||||>I don't know if this is related but, bloom-350m's pre-training has not finished yet! You may want to try it with bloom-1b3 which is a model where the pre-training has been completed! https://huggingface.co/bigscience/bloom-1b3<|||||>Hi @stefan-it, I agree with what @younesbelkada said, bloom-1b3 is worth trying since it's finished pretraining! We haven't actually gotten to NER experiments but I'll let you know if we do end up finding good hyperparams.<|||||>Hi @younesbelkada and @haileyschoelkopf thanks for that hint! I used the `bigscience/bloom-1b3` model with DeepSpeed and the result after one epoch of fine-tuning (same hyper-params as mentioned above) are also very bad. So I'm going to tune the hyper-params and please let me know if you found a working setup for your NER task(s) :hugs: |
transformers | 17,638 | closed | Avoid GPU OOM for a TF Rag test | # What does this PR do?
After changing torch from `cu102` to `cu113`, the test `test_rag_token_generate_batch` on TF side causes GPU OOM.
This PR splits the batch into 2 batches to avoid this issue.
I can just run only 1 batch if you think it's better. | 06-09-2022 17:15:58 | 06-09-2022 17:15:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merged as the change is quite obvious and small + being approved by 1 maintainer. |
transformers | 17,637 | closed | Running a pipeline of `float16`. | # What does this PR do?
When we're preparing the tensors for CPU for postprocessing, we need
to upgrade the `float16` to `float32` since CPUs don't have instructions
for `[b]float16`.
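Roughly, the change amounts to something like the following upcast before tensors leave the accelerator (a sketch, not the exact pipeline code):
```python
import torch

def prepare_for_postprocessing(tensor: torch.Tensor) -> torch.Tensor:
    # CPUs have no (b)float16 kernels, so upcast half-precision outputs first.
    if tensor.dtype in (torch.float16, torch.bfloat16):
        tensor = tensor.to(torch.float32)
    return tensor.cpu()
```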
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17616
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 06-09-2022 16:54:05 | 06-09-2022 16:54:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,636 | closed | fix use_amp rename after pr 17138 | PR https://github.com/huggingface/transformers/pull/17138 renamed `self.use_amp` to `self.use_cuda_amp` but didn't rename it everywhere so deepspeed tests were failing. This PR fixes a few missed bits.
@sgugger
cc: @ydshieh who reported this | 06-09-2022 16:18:21 | 06-09-2022 16:18:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just add info: There are also 3 files in `examples/research_projects/` also use `self.amp`, but as @sgugger told me before, we should/could leave it.<|||||>I'm going to merge this before Sylvain gets a chance to look since it impacts Deepspeed's CI, which currently is failing on our tests.
Deepspeed CI is running our deepspeed tests so that they could catch any breakages caused by their changes quickly, but this requires us that we keep our deepspeed tests working.<|||||>Thanks for fixing! |
transformers | 17,635 | closed | Change no trainer image_classification test | # What does this PR do?
I noticed that this test has been failing for the last few months, and it's due to the fact that on a single GPU or CPU the tests pass (we hit 50% accuracy), but on multi GPU it's a hit or miss whether or not it does (each epoch/batch is either 50% or 0%, and we happen to miss that coin flip each time).
This PR modifies the no_trainer test to mimic the equivalent pytorch example tests, and I can confirm it passes repeatedly on a single GPU, CPU, and multi GPU: https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L390-L417
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @NielsRogge | 06-09-2022 16:17:22 | 06-09-2022 16:17:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I've addressed the speed concern in the test, but you won't actually see it show up due to a bug in the scheduler step recalculation. I'll open up another PR for that fix and then merge this one after that's done |
transformers | 17,634 | closed | fix tolerance for a bloom slow test | # What does this PR do?
Fixes a slow test that was ignored in the previous PR.
- The numbers were not exactly equal due to a torch version mismatch when the test was designed on the DGX -> note that the results of some operations are not identical across torch versions compiled differently.
- Just changed the tolerance and it seems to work fine (see the small illustration below).
cc @ydshieh
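As a small illustration of the tolerance-based comparison (generic example, not the actual Bloom test values):
```python
import torch

output = torch.tensor([1.0000001, 2.0])
expected = torch.tensor([1.0, 2.0])

# `==` on floats is brittle across differently compiled torch builds;
# compare with an explicit tolerance instead.
assert torch.allclose(output, expected, atol=1e-4)
```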
| 06-09-2022 16:06:40 | 06-09-2022 16:06:40 | 👍 Never assert equal float -> the result will be float too 😄 <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,633 | closed | Add skip logic for attentions test - Levit | # What does this PR do?
Adds logic to skip tests checking attentions as LeViT doesn't take `output_attentions` in its prediction call.
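One common way such a skip is expressed in a test class (a sketch; the actual LeViT tester mixes in the shared model test suite, and the shared test name is assumed here):
```python
import unittest


class LevitModelTest(unittest.TestCase):
    # LeViT's forward pass does not accept `output_attentions`,
    # so the shared attention checks are skipped instead of run.
    @unittest.skip(reason="LeViT does not output attentions")
    def test_attention_outputs(self):
        pass
```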
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 06-09-2022 15:39:36 | 06-09-2022 15:39:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,632 | closed | GPTNeoXForCausalLM examples fail to run | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj
I have been trying to use Hugging Face GPTNeoX models to generate text. However, even the basic example use case fails on both `CPU` and `GPU`.
```
from transformers import GPTNeoXTokenizerFast, GPTNeoXForCausalLM, GPTNeoXConfig
if __name__ == "__main__":
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
config = GPTNeoXConfig.from_pretrained("EleutherAI/gpt-neox-20b")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", config=config)
outputs = model.generate(
inputs.input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(outputs)[0]
print(gen_text)
```
Fails with
```
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Traceback (most recent call last):
File "/home/joy/.venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1320, in generate
return self.sample(
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1938, in sample
outputs = self(
File "/home/joy/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 590, in forward
outputs = self.gpt_neox(
File "/home/joy/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 482, in forward
outputs = layer(
File "/home/joy/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 290, in forward
attention_layer_outputs = self.attention(
File "/home/joy/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 148, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/home/joy/.venv/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 211, in _attn
attn_output = torch.matmul(attn_weights, value)
RuntimeError: Expected batch2_sizes[0] == bs && batch2_sizes[1] == contraction_size to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
A similar failure happens when running on CUDA as well.
Based on -- https://huggingface.co/docs/transformers/main/en/model_doc/gpt_neox
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run the example script in https://huggingface.co/docs/transformers/main/en/model_doc/gpt_neox
2. See failure both on CPU and GPU
### Expected behavior
```shell
The model should generate text.
On CUDA I have tried with `remove_invalid_values=True` but then the model produces garbage.
```
| 06-09-2022 15:26:05 | 06-09-2022 15:26:05 | @zphang Would you be able to help us get this working? Thanks 🙏<|||||>I am using my own custom generate function and didn't have the invalid values problem, but I was also having issues with nonsensical generations that might possibly be related. I found that this was due to a mistake with caching the previous attention states on `line 146` of `modeling_gpt_neox.py`
I think `present = None if use_cache else (key, value)` should be `present = (key,value) if use_cache else None`
Changing that fixed my issue. You may also be able to set `use_cache=False` as an argument to `model.generate`, although that will give slightly slower generation.<|||||>Hi, sorry I've been busy. I should be able to take a look at this this weekend.<|||||>Thank you @benkrause and @zphang ! <|||||>Hi @benkrause @zphang I have same issue. Can you please help me to resolve it ?
```
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Traceback (most recent call last):
File "app.py", line 15, in <module>
use_cache=False
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1330, in generate
**model_kwargs,
File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1975, in sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi @benkrause @zphang I have same issue. Can you please help me to resolve it ?
>
> ```
> The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
> Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
> Traceback (most recent call last):
> File "app.py", line 15, in <module>
> use_cache=False
> File "/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
> return func(*args, **kwargs)
> File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1330, in generate
> **model_kwargs,
> File "/opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1975, in sample
> next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
> RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
> ```
I'm also experiencing this issue. Have you already solved it?
|
transformers | 17,631 | closed | Add Italian translation of sharing_custom_models.mdx | # What does this PR do?
Italian translation of sharing_custom_models.mdx
See issue: https://github.com/huggingface/transformers/issues/17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/17459
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli
| 06-09-2022 15:24:25 | 06-09-2022 15:24:25 | Hi @Xpiri! I think you are missing the title at the end of the section in the _toctree file:
**title: How-to guides**<|||||>@Xpiri ! I also ask if you can edit a couple of things:
- "così che tutti possano usarlo" I know it's usual to speak in the masculine when referring to the masses whose gender you don't know, however, I'm asking if you can change it to "così che tutte le persone possano usarlo", so that it's more neutral. Thank you
- "la tua config" sometimes you write config in masculine and sometimes in feminine, I would always write "il tuo config" what do you think?
- here "e chiamiamo l'inizializzazione della superclasse con il metodoconfig(un po' come quando scrivi un normale" from the doc preview I notice that there are random highlights. I think there are some wrong quotes
- "Se il to modello" -> "Se il tuo modello"
- "Se stai copiando i file un modello dalla libreria" -> "Se stai copiando i file relativi alla modellizzazione dalla libreria"
- "e registrare in modo corretto" -> "e registrarli in modo corretto"
- "assicurati che di essere loggato. Lancia dal tuo terminal" -> "assicurati di aver effettuato l'accesso. Lancia dal tuo terminale"
- "che l'autore del modello non abbiano aggiornato il codice" -> "che le autrici o gli autori del modello non abbiano aggiornato il codice"
- "non ti fidi completamente degli autori del modello" -> "non ti fidi completamente della fonte"
Thank you!! 🌈🌈<|||||>Hi @mfumanelli, thank you very much for catching several typos. I also tried while I was writing to keep everything as gender neutral as possible, but seems like I still managed to make a couple of mistakes here and there!
Lastly, I ended up using "la config" instead of "il config" as the feminine version is more coherent with the expression "la configurazione". I noticed that I was using the masculine version whenever I was refering to "il file config" and using the feminine version whenever I was refering to "la configurazione". Now it should always be written in the feminine version.
I hope I did everything right with the toctree file so I can get started with the next translation 😄 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Xpiri!! Looks perfect to me @omarespejel 🚀🚀<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Xpiri Sorry for the wait. The table of content has a little problem with the merge I had to do, could you just move the `-sections` on line 31 just below the ` title: How-to-guides` line?<|||||>@sgugger I should have fixed the table of contents. Let me know if everything is correct! 😄 |
transformers | 17,630 | closed | Fix very long job failure text | # What does this PR do?
The Slack SDK has a limit of `3000` characters for the "text" field. For some rare cases, we have a very long test error trace.
See this [dummy job run](https://github.com/huggingface/transformers/runs/6753872641?check_suite_focus=true)
This PR tries to limit the length, so the report could be sent to Slack.
(The previous version had a check, but it didn't take into account that a single test failure could have a very long trace on its own.)
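The gist of the change is a clip like the one below before building the Slack payload (illustrative; the constant name and the truncation marker are assumptions):
```python
MAX_SLACK_TEXT_LENGTH = 3000  # Slack rejects longer "text" fields

def clip_failure_text(text: str, limit: int = MAX_SLACK_TEXT_LENGTH) -> str:
    marker = "\n[Truncated]"
    if len(text) <= limit:
        return text
    return text[: limit - len(marker)] + marker
```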
For the record, the test failure with very long trace is like
```
E ValueError: torch.float32 is enabled but the following parameters have dtype that is not torch.float32: [('shared.weight', torch.float16), ('encoder.block.0.layer.0.SelfAttention.q.weight', torch.float16), ('encoder.block.0.layer.0.SelfAttention.k.weight', torch.float16), ('encoder.block.0.layer.0.SelfAttention.v.weight', torch.float16), ('encoder.block.0.layer.0.SelfAttention.o.weight', torch.float16), ('encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', torch.float16), ('encoder.block.0.layer.0.layer_norm.weight', torch.float16), ('encoder.block.0.layer.1.DenseReluDense.wi.weight', torch.float16), ('encoder.block.0.layer.1.DenseReluDense.wo.weight', torch.float16), ('encoder.block.0.layer.1.layer_norm.weight', torch.float16), ('encoder.block.1.layer.0.SelfAttention.q.weight', torch.float16), ('encoder.block.1.layer.0.SelfAttention.k.weight', torch.float16), ('encoder.block.1.layer.0.SelfAttention.v.weight', torch.float16), ('encoder.block.1.layer.0.SelfAttention.o.weight', torch.float16), ('encoder.block.1.layer.0.layer_norm.weight', torch.float16), ('encoder.block.1.layer.1.DenseReluDense.wi.weight', torch.float16), ('encoder.block.1.layer.1.DenseReluDense.wo.weight', torch.float16), ('encoder.block.1.layer.1.layer_norm.weight', torch.float16), ('encoder.final_layer_norm.weight', torch.float16), ('decoder.block.0.layer.0.SelfAttention.q.weight', torch.float16), ('decoder.block.0.layer.0.SelfAttention.k.weight', torch.float16), ('decoder.block.0.layer.0.SelfAttention.v.weight', torch.float16), ('decoder.block.0.layer.0.SelfAttention.o.weight', torch.float16), ('decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', torch.float16), ('decoder.block.0.layer.0.layer_norm.weight', torch.float16), ('decoder.block.0.layer.1.EncDecAttention.q.weight', torch.float16), ('decoder.block.0.layer.1.EncDecAttention.k.weight', torch.float16), ('decoder.block.0.layer.1.EncDecAttention.v.weight', torch.float16), ('decoder.block.0.layer.1.EncDecAttention.o.weight', torch.float16), ('decoder.block.0.layer.1.layer_norm.weight', torch.float16), ('decoder.block.0.layer.2.DenseReluDense.wi.weight', torch.float16), ('decoder.block.0.layer.2.DenseReluDense.wo.weight', torch.float16), ('decoder.block.0.layer.2.layer_norm.weight', torch.float16), ('decoder.block.1.layer.0.SelfAttention.q.weight', torch.float16), ('decoder.block.1.layer.0.SelfAttention.k.weight', torch.float16), ('decoder.block.1.layer.0.SelfAttention.v.weight', torch.float16), ('decoder.block.1.layer.0.SelfAttention.o.weight', torch.float16), ('decoder.block.1.layer.0.layer_norm.weight', torch.float16), ('decoder.block.1.layer.1.EncDecAttention.q.weight', torch.float16), ('decoder.block.1.layer.1.EncDecAttention.k.weight', torch.float16), ('decoder.block.1.layer.1.EncDecAttention.v.weight', torch.float16), ('decoder.block.1.layer.1.EncDecAttention.o.weight', torch.float16), ('decoder.block.1.layer.1.layer_norm.weight', torch.float16), ('decoder.block.1.layer.2.DenseReluDense.wi.weight', torch.float16), ('decoder.block.1.layer.2.DenseReluDense.wo.weight', torch.float16), ('decoder.block.1.layer.2.layer_norm.weight', torch.float16), ('decoder.final_layer_norm.weight', torch.float16)]
```
| 06-09-2022 13:43:44 | 06-09-2022 13:43:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>FMI:
Not urgent, but we also have to deal with other blocks, considering the failure below
https://github.com/huggingface/transformers/actions/runs/2472383354
(The summary might be too long when a commit introduces too many test failures)
I will work on it at some point |
transformers | 17,629 | closed | Add Ray's scope to training arguments | # What does this PR do?
As discussed in https://github.com/huggingface/transformers/issues/16683, it often happens that you want more control over gridsearch. By default, Ray will run different trials with different hyperparameters, and then select the best one based on the loss (or other metric) of each trial's _final_ checkpoint. That's not always what you want however (when some trials converge faster than others and therefore overfit in the same number of steps). Luckily Ray allows [other options](https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial). With this PR, the user gets a little bit more control over how to use Ray's search.
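A minimal sketch of how the new argument is meant to be used (illustrative; the checkpoint and `train_dataset` are placeholders assumed to be prepared elsewhere):
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # Re-instantiate the model for every Ray trial (placeholder checkpoint).
    return AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

args = TrainingArguments(
    output_dir="hp_search",
    ray_scope="all",  # compare trials on their best reported result rather than their last one
)
trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=train_dataset,  # assumed: a tokenized dataset prepared elsewhere
)
best_run = trainer.hyperparameter_search(backend="ray", n_trials=10, direction="maximize")
```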
<!-- Remove if not applicable -->
Fixes #16683
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. **Discussed but never received an official reply. Got some community responses of others who would like to see this change, however.**
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@richardliaw, @amogkam
| 06-09-2022 13:33:52 | 06-09-2022 13:33:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Awesome stuff :) @Yard1 could you help shepherd this PR?<|||||>@BramVanroy Hey this looks great! Looks like you only need to run black to reformat the files.<|||||>@richardliaw @Yard1 I can't manage to satisfy the quality checks (`doc-builder`). The issue being the very long Ray URL. Any thoughts how I should deal with this? <|||||>Hey @BramVanroy, running `make fixup` at the root of your clone should fix it. It should apply a fix such as the following:
```diff
diff --git a/src/transformers/training_args.py b/src/transformers/training_args.py
index 23f8cd752..e54227b46 100644
--- a/src/transformers/training_args.py
+++ b/src/transformers/training_args.py
@@ -459,10 +459,9 @@ class TrainingArguments:
ray_scope (`str`, *optional*, defaults to `"last"`):
The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will
then use the last checkpoint of all trials, compare those, and select the best one. However, other options
- are also available. See the
- [Ray documentation](
- https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial)
- for more options.
+ are also available. See the [Ray documentation](
+ https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) for
+ more options.
"""
output_dir: str = field(
```<|||||>In general, you can break up very long links with `\`, eg.
```
ray_scope (`str`, *optional*, defaults to `"last"`):
The scope to use when doing hyperparameter search with Ray. By default, `"last"` will be used. Ray will
then use the last checkpoint of all trials, compare those, and select the best one. However, other options
are also available. See the
[Ray documentation](
https://docs.ray.io/en/latest/tune/api_docs/analysis.html\
#ray.tune.ExperimentAnalysis.get_best_trial)
for more options.
```<|||||>cc @sgugger for the additional argument to the trainings arguments |
transformers | 17,628 | closed | Move Clip image utils to image_utils.py | - Use image_utils functions instead of duplicate functions in clip/feature_extraction_clip.py
- Move convert_rgb to image_utils.py
| 06-09-2022 13:29:25 | 06-09-2022 13:29:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Isn't this a breaking change? The CLIP method uses resize with `default_to_square=False`, right?
Yes, my mistake. Will fix it shortly! |
transformers | 17,627 | closed | Add ONNX support for ConvNeXT | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds ONNX support for ConvNeXT. Linked to #16308.
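A sketch of how the export is expected to be driven once this lands (the checkpoint, feature name and output path are placeholders, and the exact CLI flags may differ):
```python
import subprocess

# Shell out to the transformers.onnx exporter for a ConvNeXT checkpoint.
subprocess.run(
    [
        "python", "-m", "transformers.onnx",
        "--model=facebook/convnext-tiny-224",
        "--feature=image-classification",
        "convnext_onnx/",
    ],
    check=True,
)
```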
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-09-2022 13:06:54 | 06-09-2022 13:06:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,626 | closed | Enable crop_center method to handle (W, H, C) images | - Enables crop_center image utility method to handle (W, H, C) and (C, W, H) images
- Added support to expand 2D / (W, H) image arrays to 3D / (C, W, H) shape
| 06-09-2022 13:05:05 | 06-09-2022 13:05:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,625 | closed | FX function refactor | # What does this PR do?
This PR just refactors some pieces of code that are useful in `symbolic_trace` so that they can be used as standalone functions, and in other libraries, for instance [here](https://github.com/huggingface/optimum/pull/216/files#diff-7b6db6e19d4646fd3712298b1abef6ab3ba778d9a9c411aed1d12599ec25725fR164). | 06-09-2022 12:56:15 | 06-09-2022 12:56:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17625). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,624 | closed | Remove shape_list and use actual TF functions | Oh god don't merge this one, I'm testing a codebase cleanup and seeing what breaks! | 06-09-2022 11:43:32 | 06-09-2022 11:43:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,623 | closed | Migrate HFDeepSpeedConfig from trfrs to accelerate | ### What does this PR do?
1. Migrates HFDeepSpeedConfig from transformers repo to accelerate repo as it is generic enough and specific bits remain in transformers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-09-2022 10:28:35 | 06-09-2022 10:28:35 | We discussed offline with Sourab on adding a few wrappers as static methods of `AcceleratorState` to make this code easier to read on the Transformers side :-)<|||||>My apologies @stas00 I was (wrongly) thinking the DeepSpeed API was still marked as experimental and therefore was okay with some small breaking changes, but it's not so we shouldn't break things.
So let's limit the changes to the move of the config inside Accelerate, with the rest of the code the exact same indeed (in particular leave the weakref as it was).<|||||>That sounds great, Sylvain. Thank you!
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for all the work on this, good to merge on my side!<|||||>Good to merge, @pacman100 - thank you for bearing with me! |
transformers | 17,622 | closed | Added it documentation for fast_tokenizers #17459 | # What does this PR do?
Addresses #17459 with italian translation of fast_tokenizers.mdx
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 06-09-2022 08:42:41 | 06-09-2022 08:42:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17622). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @andreafailla, I'm just taking a look at your PR. Can I ask you to change a little of thing? In the toctree please remove the quotation marks here "Usare i tokenizzatori di 🤗 Tokenizers". Everything else looks perfect to me! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,621 | closed | AttributeError: 'TFBertForQuestionAnswering' object has no attribute 'save_pretrained | ### System Info
```shell
I'm coding in Colab
```
### Who can help?
AttributeError Traceback (most recent call last)
<ipython-input-64-1a5b1a678c55> in <module>()
----> 4 model.save_pretrained('model_bert_20220609.pt')
AttributeError: 'TFBertForQuestionAnswering' object has no attribute 'save_pretrained
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import BertTokenizerFast
model = TFBertModel.from_pretrained("klue/bert-base", from_pt = True)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer=optimizer, loss=loss)
history = model.fit(
x_train,
y_train,
epochs = 5,
verbose = 1,
batch_size = 16,
)
model.save_pretrained('model_bert_20220609.pt')
### Expected behavior
```shell
I want to save my trained model step by step, but it is not saved.
How can I load and save?
Thank you.
```
| 06-09-2022 08:38:03 | 06-09-2022 08:38:03 | I am sorry, I am new for git bug issue menu.
but I have no idea about [save_pretrained]<|||||>I saw https://huggingface.co/blog/tf-serving
this guide used the save_pretrained function.
`from transformers import TFBertForSequenceClassification
model = TFBertForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-imdb", from_pt=True)
model.save_pretrained("my_model", saved_model=True)`
and I checked the code of the class
`https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/bert/modeling_tf_bert.py`
I thought about it really hard; there is no problem with save_pretrained itself.
Why is save_pretrained reported as a missing attribute? I think it is a bug.<|||||>After 2 days of analysis, I found that there is a difference between the original file on GitHub and the library I received.
The Hugging Face library I received was the wrong file, and the class was a completely different model with the same name.<|||||>
I deleted the file and got a clone of github again to solve it. |
transformers | 17,620 | closed | Ensure image feature extractors outputs pixel values of shape (batch_size, n_channels, height, width) | Fixes #15055 #17526
- center_crop in image_utils.py now can process numpy and torch images of shape (n_channels, height, width) and (height, width, n_channels), and returns cropped image of shape (n_channels, height, width).
- Fixed feature extractors that inherit from ImageFeatureExtractionMixin to always return numpy arrays or tensors of shape (batch_size, n_channels, height, width) | 06-09-2022 07:15:24 | 06-09-2022 07:15:24 | Hi, @alaradirik. Thank you for the (first) PR! I have a few suggestions here :-)
1. It would be great if we could have a more descriptive PR title.
2. Each PR should focus on a single issue.
- **Unless** the issues are tightly coupled, and the changed really have to be done simultaneously otherwise things break.
- (I didn't go through the changes in this PR though)
3. If 2.) could be achieved, it's more likely easier to find a descriptive titles for each PR.
I don't mean that we should close this PR and open new ones, I will let the reviews to decide.
But if you are able to think of a good PR title, it would be very nice. Thank you.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17620). All of your documentation changes will be reflected on that endpoint.<|||||>> Hi, @alaradirik. Thank you for the (first) PR! I have a few suggestions here :-)
>
> 1. It would be great if we could have a more descriptive PR title.
> 2. Each PR should focus on a single issue.
>
> * **Unless** the issues are tightly coupled, and the changed really have to be done simultaneously otherwise things break.
> * (I didn't go through the changes in this PR though)
> 3. If 2.) could be achieved, it's more likely easier to find a descriptive titles for each PR.
>
> I don't mean that we should close this PR and open new ones, I will let the reviews to decide. But if you are able to think of a good PR title, it would be very nice. Thank you.
Thank you! I will keep these in mind in my next PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,619 | closed | convert assertion to raised exception in debertav2 | # What does this PR do?
Replaces assert with ValueError as per https://github.com/huggingface/transformers/issues/12789.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 06-09-2022 04:50:01 | 06-09-2022 04:50:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,618 | closed | Add Checkpoint Loading from MLflow Model Registry | ### Feature request
I would like the ability to pass a model URI that points to a model in an MLflow model registry then load a HuggingFace transformer directly from the registry.
### Motivation
Model versioning and lifecycle management is a common practice in MLOps so I think it makes sense for this to be a first-class feature in HuggingFace.
### Your contribution
I already have this functional but would like to contribute it back to the project so others can leverage the MLflow model registry to maintain their model lifecycles from development to production! | 06-09-2022 04:38:38 | 06-09-2022 04:38:38 | @sgugger this is linked to https://github.com/huggingface/transformers/pull/17686
The code I have working currently looks like
```python
import os
import mlflow
import glob
def download_from_registry(src_path, dst_path):
if not os.path.isdir(dst_path):
os.mkdir(dst_path)
mlflow.pyfunc.load_model(src_path, dst_path=dst_path)
return glob.glob(os.path.join(dst_path, "artifacts", "checkpoint-*"))[0]
model_path = download_from_registry("models:/my-model/1", "./my-model/")
model = AutoModelForSequenceClassification.from_pretrained(model_path)
```
and I'm wondering what you think this would look like contributed to the open-source. The code above only works once you have logged the checkpoint as an artifact and registered that model in the MLflow registry. However, it could be factored to also load the model from an MLflow run I think. wdyt?<|||||>Hi @sam-h-bean !
We do not plan on supporting other model repositories than our model Hub for the `from_pretrained` method in Transformers. You should build a bridge to upload those checkpoints from MLFlow to the Hub and benefit from all the goodies we have such as the inference widget, model cards, community PRs etc. 😃
Your solution also works since we support local checkpoints, and only takes three lines of code as you demonstrated 😉 <|||||>Hey @sgugger does the Hub have support for private models? These are proprietary models and thus can not be made public. What is the suggest method for cases such as this? It seems like this is a case that will become more prevalent as more companies adopt large language models.<|||||>Yes, you can have private models/datasets/spaces on the Hub. See the [doc](https://huggingface.co/docs/hub/repositories-settings#private-repositories)!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,617 | closed | add onnx support for deberta and debertav2 | # What does this PR do?
Details: Add ONNX Support for DeBERTa and DeBERTaV2.
Issue: https://github.com/huggingface/optimum/issues/207
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@JingyaHuang @ChainYo
| 06-09-2022 04:32:51 | 06-09-2022 04:32:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @sam-h-bean, thanks for the PR; it looks nice! <|||||>> Hi @sam-h-bean, thanks for contributing! The `OnnxConfig` of DeBERTa-V2 looks good to me. Can you also add it to the test [tests/onnx/test_onnx_v2.py](https://github.com/huggingface/transformers/blob/main/tests/onnx/test_onnx_v2.py#L176) and run
>
> ```
> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "deberta_v2"
> ```
>
> to confirm that the export goes well?
@JingyaHuang I am seeing a segmentation fault locally but tests pass in CI/CD?
```
10461 ± RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "deberta_v2"
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
=========================================================================================================== test session starts ============================================================================================================
platform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/marklar/workspace/transformers, configfile: setup.cfg
plugins: xdist-2.5.0, forked-1.4.0, timeout-2.1.0, hypothesis-6.47.0, dash-2.5.0
collected 341 items / 329 deselected / 12 selected
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 633/633 [00:00<00:00, 130kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52.0/52.0 [00:00<00:00, 53.1kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.33M/2.33M [00:00<00:00, 9.58MB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
FSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
FSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Fatal Python error: Segmentation fault
Thread 0x000070000d4e3000 (most recent call first):
```
UPDATE:
I realized if you follow the documentation to the letter it only has you install the dev deps which does not include onnxruntime which feels a bit weird. I am now getting a non-segfault error
```
======================================================================================================================================= short test summary info ========================================================================================================================================
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_041_deberta_v2_default - AttributeError: 'torch._C.Value' object has no attribute 'dtype'
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_041_deberta_v2_default - AttributeError: 'torch._C.Value' object has no attribute 'dtype'
====================================================================================================================== 2 failed, 340 deselected, 35 warnings in 89.38s (0:01:29) =======================================================================================================================
```
SECOND UPDATE:
Still getting some seg faults and some of this dtype error. I was able to run the distilbert tests OK. I see now that CI/CD does not run the slow tests so this just seems to be a serialization issue. I was able to attach a debugger and get into the core torch.export function but the failure is in there so I'm not sure if it makes sense to go deeper in the debugger or if this is just something obvious I'm missing.<|||||>I am seeing that [deberta v2 is leveraging some symbolic library using onnx already](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L124-L137). Line 135 is where the crash is happening now
UPDATE:
I tried deleting the symbolic API for the XSoftmax class and rerunning the test, however I think the code is coming from the hub which unfortunately still has that method in it and is crashing the onnx export with
```
E AssertionError: deberta-v2, default -> Unsupported: ONNX export of Slice with dynamic inputs. DynamicSlice is a deprecated experimental op. Please use statically allocated variables or export to a higher opset version.
```
This is coming directly from the opset9 library. I also tried setting the default opset to 9 in my onnx config object and got the same error. Does it make sense to delete that method from the code in the model hub since the method is unused?<|||||>Hi @sam-h-bean, the symbolic function was added intentionally(check #14013) to support ONNX export as XSoftmax is not a natively supported onnx op. Can you try to export Deberta-V2 with an upper opset, eg. with `opset=15(onnx>=1.10.0)`
```
onnx_inputs, onnx_outputs = export(preprocessor, model, onnx_config, 15, Path(output.name), device=device)
```
And maybe find the minimal opset if possible(currently we set [default it to 11](https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/config.py#L40)).
Keep me posted if it works, then we can make the necessary change on the export.<|||||>@sam-h-bean To confirm, can you try this snippet:
```python
from collections import OrderedDict
from typing import Mapping
from pathlib import Path
from transformers.onnx import export
from transformers.onnx import OnnxConfig
from transformers import AutoTokenizer, AutoModel, AutoConfig
class DebertaV2OnnxConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
if self.task == "multiple-choice":
dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
else:
dynamic_axis = {0: "batch", 1: "sequence"}
return OrderedDict(
[
("input_ids", dynamic_axis),
("attention_mask", dynamic_axis),
]
)
@property
def default_onnx_opset(self) -> int:
return 15
config = AutoConfig.from_pretrained("microsoft/deberta-v3-large")
base_model = AutoModel.from_pretrained("microsoft/deberta-v3-large")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
onnx_config = DebertaV2OnnxConfig(config)
onnx_path = Path("deberta.onnx")
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
print(onnx_config.default_onnx_opset)
```<|||||>@JingyaHuang I tried the export in a try/except as follows
```python
try:
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
except (RuntimeError, ValueError) as e:
print(onnx_config.default_onnx_opset)
```
And it prints 15 with no onnx model exported 😢
The exception is
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/marklar/workspace/transformers/src/transformers/onnx/convert.py", line 335, in export
return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
File "/Users/marklar/workspace/transformers/src/transformers/onnx/convert.py", line 190, in export_pytorch
onnx_export(
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/__init__.py", line 305, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 118, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 738, in _export
proto, export_map, val_use_external_data_format = graph._export_onnx(
RuntimeError: ONNX export failed: Couldn't export Python operator XSoftmax
```
UPDATE: I had made some modification of the XSoftmax, once I reverted them I got this exception running your snippet
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/marklar/workspace/transformers/src/transformers/onnx/convert.py", line 335, in export
return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
File "/Users/marklar/workspace/transformers/src/transformers/onnx/convert.py", line 190, in export_pytorch
onnx_export(
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/__init__.py", line 305, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 118, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 719, in _export
_model_to_graph(model, args, verbose, input_names,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 503, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 232, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/__init__.py", line 359, in _run_symbolic_method
return utils._run_symbolic_method(*args, **kwargs)
File "/Users/marklar/workspace/transformers/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 846, in _run_symbolic_method
return symbolic_fn(g, *args)
File "/Users/marklar/workspace/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py", line 135, in symbolic
output = masked_fill(g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(self.dtype).min)))
AttributeError: 'torch._C.Value' object has no attribute 'dtype'
```<|||||>Hi @sam-h-bean,
I merged a PR a few days ago changing deberta symbolic function #17539 (to follow what is planned to be done [here](https://github.com/huggingface/transformers/pull/17306#issuecomment-1144846653)), that seems to break the ONNX export.
I think that the culprit here is the `torch.finfo(some_tensor.dtype)` part. Let me try to find a workaround.
Pinging @ydshieh to make him aware of the issue. <|||||>> Hi @sam-h-bean,
>
> I merged a PR a few days ago changing deberta symbolic function #17539 (to follow what is planned to be done [here](https://github.com/huggingface/transformers/pull/17306#issuecomment-1144846653)), that seems to break the ONNX export.
>
> I think that the culprit here is the `torch.finfo(some_tensor.dtype)` part. Let me try to find a workaround.
>
> Pinging @ydshieh to make him aware of the issue.
@michaelbenayoun Yeah it definitely does break. I have changed that line to
```python
output = masked_fill(g, self, r_mask, g.op("Constant", value_t=torch.tensor(torch.finfo(torch.float32).min)))
```
and it seems to work. I am now facing
```bash
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids
```<|||||>@michaelbenayoun @ydshieh this PR now contains a fix for the symbolic function<|||||>I have very little knowledge on this onnx thing. But what I can add here is: we need to understand what `self` actually is in
```python
def symbolic(g, self, mask, dim):
```
My quick search (in a previous PR) shows it should be some input to the model. And if it is a `torch.Tensor`, it should have `dtype`.
- Maybe it is not input to model?
- Maybe it is, but for some reason, it is not `torch.Tensor` (for example some symbolic tensor etc - I have no idea)
I suggest to set some break points, launch a code that will call this method, and investigate what `self` is.
Please don't merge this PR with `float32.dtype` without any further investigate first, thank you 🙏
<|||||>> I have very little knowledge on this onnx thing. But what I can add here is: we need to understand what `self` actually is in
>
> ```python
> def symbolic(g, self, mask, dim):
> ```
>
> My quick search (in a previous PR) shows it should be some input to the model. And if it is a `torch.Tensor`, it should have `dtype`.
>
> * Maybe it is not input to model?
>
> * Maybe it is, but for some reason, it is not `torch.Tensor` (for example some symbolic tensor etc - I have no idea)
>
>
> I suggest to set some break points, launch a code that will call this method, and investigate what `self` is. Please don't merge this PR with `float32.dtype` without any further investigate first, thank you 🙏
This code previously was
```python
output = masked_fill(g, self, r_mask, g.op("Constant", value_t=torch.tensor(float("-inf"))))
```
@ydshieh Then it was changed to this self reference which broke the code. This is now using the built-in torch -inf so the type will be the same. I can also just change it back to `torch.tensor(float("-inf"))` if you would prefer but I imagine this won't make a great deal of difference in practice
See screenshot of the inspection:

It is some object that is clearly not a tensor.
Here is the value of the value as it is now:

And the value it was before the broken code was merged:

<|||||>We would like to have the same logic applies to the usual model as well as in ONNX here. For the usual PyTorch models, we use the `dtype` of the inputs (which are torch tensors) to determine the large negative values.
I understand we also need to make ONNX work. Let's dive deeper next week.
In the meantime, could you provide the code snippet you used to investigate these values in your above comment?
(I didn't check all your previous comments , so probably it is there already)
If you have some more time for now, would you mind to check what type `self` is, say `type(self)`. You can also check `dir(self)` to see what kind of attributes it has.
It looks like some kind of symbolic tensor (in some sense).
<|||||>
It seems like some compiled C tensor.
You can see it has a type but no dtype below

<|||||>@ydshieh Sorry I did not see your request to get the debugging steps. I hacked the onnx test [here](https://github.com/huggingface/transformers/blob/main/tests/onnx/test_onnx_v2.py#L304) to look like the following
```python
# @parameterized.expand(_get_models_to_test(PYTORCH_EXPORT_MODELS))
# @slow
@require_torch
@require_vision
@require_rjieba
def test_pytorch_export(self):
self._onnx_export("test_name", "name", "microsoft/deberta-v2-xlarge", "default", DebertaV2OnnxConfig)
```
Then I imported the requisite `DebertaV2OnnxConfig` and set a breakpoint in the PyCharm debugger [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L135). Then I inspect in the debugger. <|||||>@sam-h-bean
Thank you for the code snippet. After some inspection, could you try
```python
self.type().dtype()
```
<img width="247" alt="Screenshot 2022-06-11 120746" src="https://user-images.githubusercontent.com/2521628/173183487-feff0617-aa06-4645-b275-93e78e4dfb38.png">
<|||||>> @sam-h-bean
>
> Thank you for the code snippet. After some inspection, could you try
>
> ```python
> self.type().dtype()
> ```
>
> <img alt="Screenshot 2022-06-11 120746" width="247" src="https://user-images.githubusercontent.com/2521628/173183487-feff0617-aa06-4645-b275-93e78e4dfb38.png">
That works! Thank you for the help, I agree deriving the type directly from the output is much more clean and robust<|||||>Pinging @michaelbenayoun for a review, especially for the modifications on the symbolic function.<|||||>> @sam-h-bean, thanks for the modifications! Can you do the last check by running the slow tests?
@JingyaHuang The test will not pass as-is. We either need to modify the tokenizer to return only `input_ids` and `attention_mask` when `type_vocab_size` = 0, or we need to run the test to modify the config we initialize the ONNX config to have a non-zero `type_vocab_size`. This is what I was trying to get across in my comment. By default the ONNX graph will be made expecting 2 inputs since the behavior is to ignore `type_token_ids` in DeBERTa. This is contrary to how Electra works which by default has `type_token_ids`<|||||>> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "deberta_v2"
Hi @sam-h-bean, my mistake, for sure it won't pass. To follow the default setting of the tokenizer, and to have the test passed without adding lines specific to DeBERTa, I would prefer to tailor the inputs according to the `type_vocab_size` to satisfy both scenarios. <|||||>Here is the full test suite passing. @lewtun @sgugger if I could get a review on this it would unblock getting the support into optimum which I sorely need for my production DeBERTa microservice at you.com
```bash
10571 ± RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "deberta" -v ⏎
===================================================================================== test session starts ======================================================================================
platform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0 -- /Users/marklar/workspace/transformers/venv/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/marklar/workspace/transformers/.hypothesis/examples')
rootdir: /Users/marklar/workspace/transformers, configfile: setup.cfg
plugins: xdist-2.5.0, forked-1.4.0, timeout-2.1.0, hypothesis-6.47.0, dash-2.5.0
collected 367 items / 345 deselected / 22 selected
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_043_deberta_v2_default PASSED [ 4%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_044_deberta_v2_masked_lm PASSED [ 9%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_045_deberta_v2_multiple_choice PASSED [ 13%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_046_deberta_v2_question_answering PASSED [ 18%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_047_deberta_v2_sequence_classification PASSED [ 22%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_048_deberta_v2_token_classification PASSED [ 27%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_049_deberta_default PASSED [ 31%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_050_deberta_masked_lm PASSED [ 36%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_051_deberta_question_answering PASSED [ 40%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_052_deberta_sequence_classification PASSED [ 45%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_053_deberta_token_classification PASSED [ 50%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_043_deberta_v2_default PASSED [ 54%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_044_deberta_v2_masked_lm PASSED [ 59%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_045_deberta_v2_multiple_choice PASSED [ 63%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_046_deberta_v2_question_answering PASSED [ 68%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_047_deberta_v2_sequence_classification PASSED [ 72%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_048_deberta_v2_token_classification PASSED [ 77%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_049_deberta_default PASSED [ 81%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_050_deberta_masked_lm PASSED [ 86%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_051_deberta_question_answering PASSED [ 90%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_052_deberta_sequence_classification PASSED [ 95%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_053_deberta_token_classification PASSED [100%]
```<|||||>> Looks great, thanks for iterating it!
>
> It probably makes sense to filter out the invalid inputs and I opened a PR on Optimum. But for the moment please remove `token_type_ids` from the inputs of InferenceSession if it doesn't exist in the exported ONNX.
@JingyaHuang I'm not sure what you mean by this. Do you want something beyond removing the inputs in generate dummy inputs? Or is this comment strictly about my personal use of this functionality?<|||||>> > Looks great, thanks for iterating it!
> > It probably makes sense to filter out the invalid inputs and I opened a PR on Optimum. But for the moment please remove `token_type_ids` from the inputs of InferenceSession if it doesn't exist in the exported ONNX.
>
> @JingyaHuang I'm not sure what you mean by this. Do you want something beyond removing the inputs in generate dummy inputs? Or is this comment strictly about my personal use of this functionality?
Hi @sam-h-bean, nothing that you should worry about. Here I mention it for another API([`ORTModelForXXX`](https://github.com/huggingface/optimum/blob/main/optimum/onnxruntime/modeling_ort.py)) in Optimum. I will merge this PR, and then by building transformers from source, you shall be able to leverage Quantization and Graph Optimization features in Optimum. Thank you again for the contribution.<|||||>Congratz @sam-h-bean, excellent work. Thanks for adding these configs!!<|||||>Can I get a t-shirt 😏 ? |
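As a closing note on the `token_type_ids` / `type_vocab_size` discussion above, a minimal sketch of an `inputs` property that branches on the config (illustrative only, not necessarily the exact code that was merged):

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DebertaV2OnnxConfigSketch(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        dynamic_axis = {0: "batch", 1: "sequence"}
        if self._config.type_vocab_size > 0:
            # token_type_ids only become a real model input when the config enables them
            return OrderedDict(
                [("input_ids", dynamic_axis), ("attention_mask", dynamic_axis), ("token_type_ids", dynamic_axis)]
            )
        return OrderedDict([("input_ids", dynamic_axis), ("attention_mask", dynamic_axis)])
```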
transformers | 17,616 | closed | Unable to run models bert/roberta/others w. FP16 | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@sgugger, @Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For several reasons (some performance related) we'd like to be able to run inference on a GPU w. bert-style models in fp16. Unfortunately, I don't believe this mode is currently supported? Unless I am not aware of the right parameter to pass during `pipeline` creation. Below is a code snippet to reproduce the behavior we are seeing.
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='bert-base-uncased', device=0, framework='pt')
# convert model to fp16
pipe.model.half()
response = pipe('Paris is the [MASK] of France.')
print(response)
```
When running this we see the following stack trace:
```
Traceback (most recent call last):
File "test.py", line 4, in <module>
response = pipe('Paris is the [MASK] of France.')
File "/home/jerasley/ds-env/lib/python3.8/site-packages/transformers/pipelines/fill_mask.py", line 227, in __call__
outputs = super().__call__(inputs, **kwargs)
File "/home/jerasley/ds-env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1026, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/jerasley/ds-env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1034, in run_single
outputs = self.postprocess(model_outputs, **postprocess_params)
File "/home/jerasley/ds-env/lib/python3.8/site-packages/transformers/pipelines/fill_mask.py", line 118, in postprocess
probs = logits.softmax(dim=-1)
RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'
```
The core issue in the stacktrace is from the fact that the logits are on the CPU and torch doesn't have a CPU implementation of softmax that works with fp16. I tried moving the model outputs to GPU but then saw several errors related to numpy calls that are not supported on GPU. One workaround (that's maybe not ideal) is if the model outputs are fp16 then we can upcast them to fp32. Would that be an acceptable workaround? If so, I am happy to make a PR that does this.
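A minimal sketch of what such an upcast could look like (a hypothetical helper, not the actual pipeline internals):

```python
import torch


def to_cpu_float(tensor: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: move a tensor to CPU, upcasting fp16 to fp32 so CPU ops like softmax work.

    Integer and fp32 tensors pass through unchanged.
    """
    if tensor.dtype == torch.float16:
        tensor = tensor.float()
    return tensor.cpu()
```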
/cc @RezaYazdaniAminabadi, @mrwyattii, @cli99, @stas00
### Expected behavior
The pipeline should run successfully if the model itself is in fp16 or fp32.
| 06-09-2022 00:34:28 | 06-09-2022 00:34:28 | We could upcast the outputs to FP32 while transferring them back to CPU since most operations are not implemented in half on CPU. Wdyt @Narsil ?<|||||>Seems reasonable, is it no-op for already fp32 ?
We will probably check they are half though since some tensors might contain `int{8,32,64}` which we shouldn't change to float I think, right ?<|||||>I agree, I think we'd only want to upcast if a tensor's dtype is fp16. Which would be a no-op if the tensor(s) are already fp32. |
transformers | 17,615 | closed | Translation/autoclass | # What does this PR do?
Italian translation of autoclass_tutorial.mdx
See issue: https://github.com/huggingface/transformers/issues/17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@omarespejel
| 06-08-2022 19:02:16 | 06-08-2022 19:02:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @mfumanelli this period is a little strange:
Producendo questo codice agnostico ai checkpoint significa che se il tuo codice funziona per un checkpoint, funzionerà anche per un altro checkpoint, purché sia stato allenato per un compito simile, anche se l'architettura è differente.
I think it's clearer this way:
Produrre questo codice agnostico ai checkpoint significa che se il tuo codice funziona per un checkpoint, funzionerà anche per un altro checkpoint, purché sia stato allenato per un compito simile, anche se l'architettura è differente.<|||||>@nickprock I absolutely agree with you, I'll edit it right away! 🌈<|||||>@mfumanelli thank you for the PR! @nickprock thanks for the review 🚀. Amazing team.
@sgugger LGTM :) |
transformers | 17,614 | closed | [modeling_utils] torch_dtype/auto floating dtype fixes | As reported in https://github.com/huggingface/transformers/issues/17583 not all models have their first param of floating dtype, which leads to failures like:
```
$ python -c 'from transformers import AutoModel; AutoModel.from_pretrained("hf-internal-testing/tiny-bert-for-token-classification", torch_dtype="auto")'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 2004, in from_pretrained
dtype_orig = cls._set_default_torch_dtype(torch_dtype)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 980, in _set_default_torch_dtype
raise ValueError(
ValueError: Can't instantiate BertModel model under dtype=torch.int64 since it is not a floating point dtype
```
1. This PR fixes that by searching for the first floating dtype instead.
2. Adds a test that failed before this PR
Fixes: https://github.com/huggingface/transformers/issues/17583
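A minimal sketch of the lookup described in 1. (illustrative only — the real helper I mention below is `get_parameter_first_float_dtype`, this is just the idea):

```python
import torch
from torch import nn


def first_float_dtype(module: nn.Module) -> torch.dtype:
    """Sketch: return the dtype of the first floating-point parameter, falling back to the last dtype seen."""
    last_dtype = None
    for param in module.parameters():
        last_dtype = param.dtype
        if param.dtype.is_floating_point:
            return param.dtype
    # no floating-point parameter found (e.g. only integer params); fall back
    return last_dtype if last_dtype is not None else torch.get_default_dtype()
```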
------------------------------------
**Possible additional TODO that wasn't part of the original report**
@sgugger, we can sort out the saving side of things here as well - I already added an alternative `get_parameter_dtype` => `get_parameter_first_float_dtype` - but I wanted to check in with you if we replace all instances of `get_parameter_dtype` or only some.
I didn't go ahead with doing that since we have a method called `dtype` which probably should call `get_parameter_dtype` and add `float_dtype`? Not sure - let's see what you think is the best way to proceed. | 06-08-2022 17:45:20 | 06-08-2022 17:45:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Probably good to merge now, right? |
transformers | 17,613 | closed | FX tracing of HubertForSequenceClassification fails with TypeError: 'HFProxy' object cannot be interpreted as an integer | ### System Info
```shell
current main branch, 4.20.0.dev0
```
### Who can help?
@mich
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import inspect
import transformers.utils.fx as fx
from transformers import *
model = HubertForSequenceClassification(HubertConfig())
input_names = {'input_values'} # model.dummy_inputs.keys() - doesn't work properly for HubertForSequenceClassification
sig = inspect.signature(model.forward)
concrete_args = {p.name: p.default for p in sig.parameters.values() if p.name not in input_names}
hf_tracer = fx.HFTracer()
hf_tracer.trace(model, concrete_args=concrete_args)
```
```
Traceback (most recent call last):
File "/Users/pbelevich/PycharmProjects/PiPPy/test/hf_test6.py", line 14, in <module>
hf_tracer.trace(model, concrete_args=concrete_args)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/utils/fx.py", line 924, in trace
self.graph = super().trace(root, concrete_args=concrete_args)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/models/hubert/modeling_hubert.py", line 1296, in forward
outputs = self.hubert(
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 577, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/utils/fx.py", line 881, in call_module
return super().call_module(m, forward, args, kwargs)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 372, in call_module
return forward(*args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 573, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/models/hubert/modeling_hubert.py", line 1064, in forward
hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/models/hubert/modeling_hubert.py", line 988, in _mask_hidden_states
mask_time_indices = _compute_mask_indices(
File "/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/transformers/models/hubert/modeling_hubert.py", line 132, in _compute_mask_indices
else [sequence_length for _ in range(batch_size)]
TypeError: 'HFProxy' object cannot be interpreted as an integer
```
### Expected behavior
```shell
No error
```
| 06-08-2022 17:15:26 | 06-08-2022 17:15:26 | Hi, this seems related to the model being in training mode, hence calling `transformers.models.hubert.modeling_hubert._compute_mask_indices`. The reason this is not caught by the tests is because we are testing models in eval mode...
Anyways, if you just need it for inference purposes, doing `model.eval()` should solve your issue.
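For example, a minimal sketch of that workaround on top of the reproduction above:

```python
import inspect

import transformers.utils.fx as fx
from transformers import HubertConfig, HubertForSequenceClassification

model = HubertForSequenceClassification(HubertConfig())
model.eval()  # stay on the inference path so the numpy-based masking helper is never called

sig = inspect.signature(model.forward)
concrete_args = {p.name: p.default for p in sig.parameters.values() if p.name != "input_values"}
graph = fx.HFTracer().trace(model, concrete_args=concrete_args)
```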
A fix for this might not be very easy because the function in question uses numpy... I will try to rewrite it in PyTorch and see how it goes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,612 | closed | CLI: Print all different tensors on exception | # What does this PR do?
`pt-to-tf` CLI: instead of printing the maximum error and the corresponding tensor when the maximum error was above the defined threshold, prints ALL errors and their corresponding tensors.
Here's an example:

| 06-08-2022 16:17:26 | 06-08-2022 16:17:26 | > It might make more sense if it were called something like find_pt_tf_differences?
It makes total sense, thanks for the suggestion :D <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,611 | closed | SSLError: HTTPSConnectionPool(host='huggingface.co', port=443) | I'm trying in python:
from sentence_transformers import SentenceTransformer
sbert_model = SentenceTransformer('all-MiniLM-L6-v2')
and I get this error:
SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/all-MiniLM-L6-v2 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1091)')))
I have no proxy, I am connecting directly to the internet!
| 06-08-2022 15:46:00 | 06-08-2022 15:46:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>same error
import requests
requests.get('https://www.huggingface.co')
SSLError: HTTPSConnectionPool(host='www.huggingface.co', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm also getting the same issue now :(
<|||||># Define Sentence Transformer
embedder = SentenceTransformer('paraphrase-MiniLM-L6-v2')
SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/paraphrase-MiniLM-L6-v2 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
<|||||>downgrading requests to 2.27.1
+
import os
os.environ['CURL_CA_BUNDLE'] = ''
will solve the problem<|||||>Hello, I am still getting the same error :(<|||||>Hello, Yes still getting the same error. Downgrading requests did not work<|||||>I'm also struggling with this error.<|||||>> I'm also struggling with this error.
Use this inside your python code !!!!
import os
os.environ['CURL_CA_BUNDLE'] = ''
and you get away from the problem !!!!!<|||||>@alexsomoza Thanks for the suggestion. It's not working. Initially it did work while downloading packages, but after a few minutes a ReadTimeout error is thrown, as below. It looks like corporate security (Zscaler) is blocking the download. I have raised an issue with the tech support team within the organization as well


<|||||>>
It works, thank you!
<|||||>Thanks it works if you do not forget to downgrade requests to 2.27.1 !!!<|||||>>
Has your problem been solved? How did you solve it?
<|||||>> >
>
> Has your problem been solved? How did you solve it?
Just add these two lines to your code:
import os
os.environ['CURL_CA_BUNDLE'] = ''
<|||||>I have the same issue.. Even after adding the 2-line code above, it still seems to block when downloading the model<|||||>Same issue here
```
requests.exceptions.ProxyError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /facebook/vit-mae-base/resolve/main/preprocessor_config.json (Caused by ProxyError('Cannot connect to proxy.', timeout('timed out')))
```<|||||>It seems a failing proxy caused this. Directly exporting the proxy in `.zshrc` works for me.
```
export https_proxy=http://127.0.0.1:7890
export http_proxy=http://127.0.0.1:7890
export all_proxy=socks5://127.0.0.1:7890
```
`source ~/.zshrc`<|||||>same issue here, + 2line and downgrade requests not work<|||||>Downgrading requests to 2.27.1 and using the import os code mentioned above worked for me. **But** only for python 3.9 and higher. It didn't work for 3.8<|||||>> Downgrading requests to 2.27.1 and using the import os code mentioned above worked for me. **But** only for python 3.9 and higher. It didn't work for 3.8
I tried with Python 3.11.3 and it didn't work.
requests==2.27.1
transformers==4.29.2<|||||>>
Could you try with 3.9? And just to confirm you pasted the below code on top of your main file? Single quotes, not double.
```python3
import os
os.environ['CURL_CA_BUNDLE'] = ''
```
I tried with 3.11, but don't remember the specific version and it worked fine. Maybe there is something else that could be causing the error<|||||>> >
>
> Could you try with 3.9? And just to confirm you pasted the below code on top of your main file? Single quotes, not double.
>
> ```python
> import os
> os.environ['CURL_CA_BUNDLE'] = ''
> ```
>
> I tried with 3.11, but don't remember the specific version and it worked fine. Maybe there is something else that could be causing the error
same error,why does this happen<|||||>same issue<|||||>same issue in python 3.7<|||||>Same issue in python 3.9 and python 3.10
Setting export CURL_CA_BUNDLE="" is not working.
requests downgraded to 2.27.1 as well.<|||||>Had the same error and was trying the above fixes (it didn't help) for the last hour. It started to work again, with no changes from my side, so it seems to have been external/repo issue.<|||||>same issue in python 3.9.0<|||||>works for me with 3.9.16<|||||>Having this same issue<|||||>Same issue...<|||||>same issue ...<|||||>SAME!!! I wasted the whole day to solve this problem, and it really got me frustrated.
```
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
```
`requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /junnyu/roformer_chinese_base/resolve/main/vocab.txt (Caused by SSLError(SSLCertVerificationError(1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'huggingface.co'. (_ssl.c:1007)")))`
<|||||>same issue..
<|||||>Using proxy may help:
import os
os.environ['HTTP_PROXY'] = 'http://proxy.company.com:port number>'
os.environ['HTTPS_PROXY'] = 'http://proxy.company.com:port number'
<|||||>> downgrading requests to 2.27.1 + import os os.environ['CURL_CA_BUNDLE'] = '' will solve the problem
thank you,it solved my problem.<|||||>> >
>
> 你的问题解决了吗?怎么解决的?
pip install urllib3==1.25.11<|||||>maybe it is just a network error,my VPN always fluctuate,you could try to add:
# import os
# os.environ['CURL_CA_BUNDLE'] = ''
probably, these code didn't work.
<|||||>same question!
For me reconnecting to my linux machine worked.<|||||>bump !!!
I have the same issue irrespecive of proxy. my question is I setting is as follows:
python 3.9.17
requests 2.31.0
chromadb == 0.3.21
code :
```
settings = chromadb.config.Settings(chroma_server_host="127.0.0.1", chroma_server_http_port=8000)
chroma_client = chromadb.Client(settings=settings)
del os.environ['http_proxy']
del os.environ['https_proxy']
collection = chroma_client.create_collection(
name="teest",
metadata={"hnsw:space": "cosine"} # l2 is the default
)
collection.add(
documents=["lorem ipsum...", "doc2", "doc3"],
metadatas=[{"chapter": "3", "verse": "16"}, {"chapter": "3", "verse": "5"}, {"chapter": "29", "verse": "11"}],
ids=["id1", "id2", "id3",]
)
```
ERROR:
gaierror Traceback (most recent call last)
File [~/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py:200](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/rangrejja/kumarapp/~/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py:200), in HTTPConnection._new_conn(self)
[199](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=198) try:
--> [200](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=199) sock = connection.create_connection(
[201](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=200) (self._dns_host, self.port),
[202](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=201) self.timeout,
[203](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=202) source_address=self.source_address,
[204](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=203) socket_options=self.socket_options,
[205](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=204) )
[206](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/connection.py?line=205) except socket.gaierror as e:
File [~/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/util/connection.py:60](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/rangrejja/kumarapp/~/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/util/connection.py:60), in create_connection(address, timeout, source_address, socket_options)
[58](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/util/connection.py?line=57) raise LocationParseError(f"'{host}', label empty or too long") from None
---> [60](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/util/connection.py?line=59) for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[61](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/urllib3/util/connection.py?line=60) af, socktype, proto, canonname, sa = res
File [~/miniconda3/envs/agent/lib/python3.9/socket.py:954](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/rangrejja/kumarapp/~/miniconda3/envs/agent/lib/python3.9/socket.py:954), in getaddrinfo(host, port, family, type, proto, flags)
[953](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/socket.py?line=952) addrlist = []
--> [954](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/socket.py?line=953) for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[955](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/socket.py?line=954) af, socktype, proto, canonname, sa = res
gaierror: [Errno -3] Temporary failure in name resolution
The above exception was the direct cause of the following exception:
...
--> [519](file:///home/rangrejja/miniconda3/envs/agent/lib/python3.9/site-packages/requests/adapters.py?line=518) raise ConnectionError(e, request=request)
<|||||>Same issue here:
Tried this:
(proj01) dcorwell@computer:~/privateGPT$ export https_proxy=http://127.0.0.1:7890
export http_proxy=http://127.0.0.1:7890
export all_proxy=socks5://127.0.0.1:7890
(proj01) dcorwell@computer:~/privateGPT$ python3 ingest.py
Traceback (most recent call last):
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/connection.py", line 169, in _new_conn
conn = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/connection.py", line 353, in connect
conn = self._new_conn()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/all-MiniLM-L6-v2 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fb0fbe45a90>: Failed to establish a new connection: [Errno 111] Connection refused')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dcorwell/privateGPT/ingest.py", line 166, in <module>
main()
File "/home/dcorwell/privateGPT/ingest.py", line 143, in main
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/langchain/embeddings/huggingface.py", line 54, in __init__
self.client = sentence_transformers.SentenceTransformer(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
snapshot_download(model_name_or_path,
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/sentence_transformers/util.py", line 442, in snapshot_download
model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 1677, in model_info
r = get_session().get(path, headers=headers, timeout=timeout, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 63, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/requests/adapters.py", line 513, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/all-MiniLM-L6-v2 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fb0fbe45a90>: Failed to establish a new connection: [Errno 111] Connection refused')))"), '(Request ID: e28d7bc5-c225-425d-a08a-c6f6ac103f1a)')
Tried this too:
(proj01) dcorwell@computer:~/privateGPT$ pip install urllib3==1.25.11
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fae13826890>: Failed to establish a new connection: [Errno 111] Connection refused'))': /simple/urllib3/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fae120f5850>: Failed to establish a new connection: [Errno 111] Connection refused'))': /simple/urllib3/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fae12114250>: Failed to establish a new connection: [Errno 111] Connection refused'))': /simple/urllib3/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fae12127fd0>: Failed to establish a new connection: [Errno 111] Connection refused'))': /simple/urllib3/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fae12130790>: Failed to establish a new connection: [Errno 111] Connection refused'))': /simple/urllib3/
ERROR: Could not find a version that satisfies the requirement urllib3==1.25.11 (from versions: none)
ERROR: No matching distribution found for urllib3==1.25.11
moved example.env to .env ....
I am at a loss - I'm just trying to ingest the SOTA example file as input.
Any help would be appreciated.
(proj01) dcorwell@computer:~/privateGPT$ python3 --version
Python 3.11.2
(proj01) dcorwell@computer:~/privateGPT$ pip --version
pip 23.2 from /home/dcorwell/privateGPT/proj01/lib/python3.11/site-packages/pip (python 3.11)
(proj01) dcorwell@computer:~/privateGPT$
BTW this is on a KVM
<|||||>Try adding the following in your main Python file (launch.py):
import os
os.environ['CURL_CA_BUNDLE'] = ''<|||||>I get the same error when I try to download the CLIP model, and I can't solve it with the following code:
```python
import os
os.environ['CURL_CA_BUNDLE'] = ''
```
However, I solved this error with the following code:
```
from transformers import AutoTokenizer, CLIPTextModel
clip_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32",proxies = {'https':"xx:xx"})
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32",proxies = {'https':"xx:xx"})
```
Just set the proxy inside the function call instead of using `export`.
<|||||>>
The reason I got this error is because I used the proxy for my linux system because I'm in China and I can't connect with hugging face without proxy.<|||||>> downgrading requests to 2.27.1 + import os os.environ['CURL_CA_BUNDLE'] = '' will solve the problem
listen to this guy<|||||>The issue still exists.
Yesterday I did some more testing.
import os
os.environ['CURL_CA_BUNDLE'] = ''
does not make much difference.
Using it, quite often, I still need to try multiple times.
Without using it, I can still get it done after multiple tries.<|||||>Yep, I already said it may not make a difference; that is just an environment variable, after all (some metaphysical effects 0.o).
<|||||>> >
>
> The reason I got this error is because I used the proxy for my linux system because I'm in China and I can't connect with hugging face without proxy.
Without using:
import os
os.environ['CURL_CA_BUNDLE'] = ''
I can download models and data from Hugging Face using the transformers API on an Aliyun ECS located in North China zone 1 (Beijing), but usually I need to try multiple times.<|||||>Setting this environment variable avoids SSL certificate verification, which is what runs into this issue (it is CA-related), and Hugging Face now gives a warning not to use it this way.
> yep, I already said it maybe did not make a difference; that is just an environment variable, after all (some metaphysical effects 0.o)
<|||||>There is no such issue in colab.
How does Colab manage it?<|||||>Finally I got it fixed; I think the proxy server environment variable "http_proxy" needed to be set.
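A minimal sketch of that, assuming a local proxy on port 7890 like the one mentioned earlier in this thread (adjust the address to your own proxy):
```python
import os

# Hypothetical proxy address; point this at your actual proxy before importing transformers.
os.environ["http_proxy"] = "http://127.0.0.1:7890"
os.environ["https_proxy"] = "http://127.0.0.1:7890"
```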
Thanks for all who answered.
Jag<|||||>Amazon SageMaker Studio Lab does not have this issue. |
transformers | 17,610 | closed | Mention in the doc we drop support for fairscale | # What does this PR do?
As pointed out in #17599, it's not clear that we're not actively maintaining the FairScale integration anymore. This PR addresses that.
Fixes #17599 | 06-08-2022 15:33:47 | 06-08-2022 15:33:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,609 | closed | trocr model performs worse than unilm version | ### System Info
```shell
transformers 4.19.2, python 3.7
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

Use this image for recognition.
You can use the following code to run Microsoft's UniLM TrOCR on it:
```
import task
import deit
import deit_models
import torch
import fairseq
from fairseq import utils
from fairseq_cli import generate
from PIL import Image
import torchvision.transforms as transforms
def init(model_path, beam=5):
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
[model_path],
arg_overrides={"beam": beam,
"task": "text_recognition",
"data": "", "fp16": False})
device = "cuda" if torch.cuda.is_available() else "cpu"
model[0].to(device)
model[0].eval()
img_transform = transforms.Compose([
transforms.Resize((384, 384), interpolation=3),
transforms.ToTensor(),
transforms.Normalize(0.5, 0.5)
])
generator = task.build_generator(
model, cfg.generation, extra_gen_cls_kwargs={'lm_model': None, 'lm_weight': None}
)
generator.eval()
bpe = task.build_bpe(cfg.bpe)
return model, cfg, task, generator, bpe, img_transform, device
def preprocess(img_path, img_transform):
im = Image.open(img_path).convert('RGB')
im = img_transform(im).unsqueeze(0).to(device).float()
sample = {
'net_input': {"imgs": im},
}
return sample
@torch.no_grad()
def get_text(cfg, generator, model, sample, bpe):
decoder_output = task.inference_step(generator, model, sample, prefix_tokens=None, constraints=None)
decoder_output = decoder_output[0][0] #top1
hypo_tokens, hypo_str, alignment = utils.post_process_prediction(
hypo_tokens=decoder_output["tokens"].int().cpu(),
src_str="",
alignment=decoder_output["alignment"],
align_dict=None,
tgt_dict=model[0].decoder.dictionary,
remove_bpe=cfg.common_eval.post_process,
extra_symbols_to_ignore=generate.get_symbols_to_strip_from_output(generator),
)
detok_hypo_str = bpe.decode(hypo_str)
return detok_hypo_str
if __name__ == '__main__':
model_path = 'path/to/models/microsoft-large-handwritten.pt'
jpg_path = "Long_sentence.png"
beam = 5
model, cfg, task, generator, bpe, img_transform, device = init(model_path, beam)
sample = preprocess(jpg_path, img_transform)
text = get_text(cfg, generator, model, sample, bpe)
print('Raw text')
print(text)
print('done')
```
This results in the recognition:
"Zu Dyonis den Tyrannen schlich Damon den Dolchen im Gewande . Ihn schlugen die Verfolger in Bande. 3.1415926 ."
Trying the same with huggingface:
"Zu Dyonis den Tyrannen schlich Damon den Dolchen im Gewande . "
As you can see, the huggingface implementation is missing half the line.
Huggingface code:
```
import torch.cuda
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
if __name__ == "__main__":
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
processor = TrOCRProcessor.from_pretrained('models/microsoft-trocr-large-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('models/microsoft-trocr-large-handwritten', pad_token_id=processor.tokenizer.eos_token_id).to(device)
path = "path/to/Long_sentence.png"
image = Image.open(path).convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device)
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
for txt in generated_text:
print(txt)
```
### Expected behavior
```shell
When executing the script, we expect the whole sequence to be recognized. It is recognized with the Microsoft code in https://github.com/microsoft/unilm/tree/master/trocr
With this repo, however, we only recognize part of the sequence, which seems like an implementation error.
```
| 06-08-2022 15:30:05 | 06-08-2022 15:30:05 | TrOCR uses an autoregressive generation method, so the text is generated progressively from left to right. This process is time-consuming, so there is a default cutoff length.
To increase the length, you can add a max_length=N argument in `generated_ids = model.generate(pixel_values)`.<|||||>Hi, indeed, as mentioned above, the `generate` method has a `max_length` argument which you can set appropriately.
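For example, a minimal sketch on top of the script above (128 is an assumed value; pick one large enough for your longest line):
```python
# Allow up to 128 generated tokens instead of the default cutoff.
generated_ids = model.generate(pixel_values, max_length=128)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
```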
Details here: https://huggingface.co/docs/transformers/v4.19.4/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate
Closing this issue. |
transformers | 17,608 | closed | Fix telemetry URL | # What does this PR do?
I forgot the `api/` in the URL for the telemetry call. :grimacing: | 06-08-2022 15:19:42 | 06-08-2022 15:19:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>lolz
Did you still get a `200 OK` response?<|||||>I don't look at the response since the function never errors out (it's not supposed to screw up the script ;-) )<|||||>got it! |
transformers | 17,607 | closed | Pre-build DeepSpeed | # What does this PR do?
Pre-build `DeepSpeed`, so the first DeepSpeed test won't time out.
In the recent change, I installed it as `pip install deepspeed`, and we got a CI failure
```
# This one timed out
tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16
```
See [this comment](https://github.com/huggingface/transformers/pull/17417#issuecomment-1148907094) for more details. | 06-08-2022 14:27:52 | 06-08-2022 14:27:52 | @stas00
I changed what you provided
```
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e . \
--global-option="build_ext" --global-option="-j8"
```
to
```
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install deepspeed \
--global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check
```
as we want to use the latest release, instead of the nightly `DeepSpeed`.
(other flags are just copied from the previous version of our docker file)
Verified this command inside docker, and it compiles successfully.
Ran the tests, and it looks very speedy!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>(FMI)
Documentation about DeepSpeed Pre-Install
https://www.deepspeed.ai/tutorials/advanced-install/#pre-install-deepspeed-ops<|||||>`python3 -m ` is probably redundant, but otherwise looks good.
oh, don't you need `-U` there as well? as if it's already installed it'll get stuck at that version.<|||||>> `python3 -m ` is probably redundant, but otherwise looks good.
>
> oh, don't you need `-U` there as well? as if it's already installed it'll get stuck at that version.
I always forgot `-U` and used `python3 -m pip uninstall -y deepspeed` before installing it again.
I will change it to `-U`<|||||>while uninstall works, `-U` makes things more atomic and easier to not make a mistake of forgetting to uninstall especially when your build file has many entries. But either way works fine.<|||||>I tried with `-U` and it won't build DeepSpeed:
```
Requirement already satisfied: deepspeed in /opt/conda/lib/python3.8/site-packages (0.6.5)
```
So if we want to **pre-build** `DeepSpeed`, it's better to uninstall first.
<|||||>but isn't it because you already had it prebuilt and there is no newer version? so of course it skips the building since it was already built.
Unless you are using my recipe of first installing `pip install deepspeed` to get the dependencies in (it breaks with prebuilding as it tries to build each heavy binary dependency from source). In which case yes, the recipe would be:
```
# arrange for deepspeed deps install
pip install deepspeed
# now build from source, which requires removing it first
pip uninstall deepspeed -y
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install deepspeed ...
```
it's too bad `pip` doesn't have a flag to just install the deps of a given package and not the package.
<|||||>So far (current main), it is not pre-built.
```
# I just kept this as before. It also installs everything in `extras["testing"] + extras["optuna"]`
RUN python3 -m pip install --no-cache-dir -e ./transformers[deepspeed-testing]
RUN python3 -m pip uninstall -y deepspeed
RUN DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install deepspeed --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check
```
So yes, it is using your recipe (at least in a similar sense).<|||||>what's the point of `-e` here? This should be sufficient:
```
RUN python3 -m pip install --no-cache-dir transformers[deepspeed-testing]
```
the rest looks good.<|||||>It's quite strange that pre-build deepspeed during docker image build will have some tests failed.
If I do pre-building when the docker running, but before the tests run, it is working.
I remember in the previous version (before all my work on docker and workflow), Lysandre was doing the same thing.
I will make this change though.
I will also post the error message to keep the info recorded.<|||||>Is it possible that the docker is run with different options? i.e. when it works and when it doesn't<|||||>The docker runs have the same options - even the exact command to launch:
```bash
docker run --gpus all --shm-size "16gb" --ipc host -it huggingface/transformers-pytorch-deepspeed-latest-gpu-test
```
The very long full output of the test below is with the command
```bash
TRANSFORMERS_IS_CI=yes OMP_NUM_THREADS=8 MKL_NUM_THREADS=8 RUN_SLOW=yes TF_FORCE_GPU_ALLOW_GROWTH=true RUN_PT_TF_CROSS_TESTS=1 python3 -m pytest -sv --make-reports=tests_gpu tests/deepspeed -k "TestDeepSpeedWithLauncher and test_basic_distributed_zero2_fp16" --durations=0
```
(I only tested with `test_basic_distributed_zero2_fp16`)
(I also randomly tested another one: `test_init_zero3_fp16`, which is fine)
As you can see in the last commit, I just **pre-build DeepSpeed again in the workflow file**, and [it works fine](https://github.com/huggingface/transformers/actions/runs/2470871622)
**I will merge this PR now**, but keep the output below.
It would be super if you have immediate insights from this output. Otherwise I think it's fine to focus on other priorities for now.
Let me know if I can get more detailed outputs for you to look at. (I haven't tried `py-spy dump --pid PID` yet)
## **Error without pre-building again (uninstall first) inside the running Docker container**
(I saw `stdout: [2022-06-09 20:27:17,594] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 489`)
```
========================================================================================= test session starts ==========================================================================================
platform linux -- Python 3.8.8, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /opt/conda/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/workspace/transformers/.hypothesis/examples')
rootdir: /workspace/transformers, configfile: setup.cfg
plugins: xdist-2.5.0, forked-1.4.0, timeout-2.1.0, cov-2.11.1, pythonpath-0.7.3, hypothesis-4.50.8
collected 140 items / 139 deselected / 1 selected
tests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_basic_distributed_zero2_fp16
Running: deepspeed --num_nodes 1 --num_gpus 1 --master_port 10999 /workspace/transformers/examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json --validation_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmpboxxylsc --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 10 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix "translate English to Romanian: " --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3 --do_eval --max_eval_samples 16 --per_device_eval_batch_size 2 --deepspeed /workspace/transformers/tests/deepspeed/ds_config_zero2.json
stdout: [2022-06-09 20:27:03,709] [WARNING] [runner.py:159:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
stdout: [2022-06-09 20:27:03,717] [INFO] [runner.py:457:main] cmd = /opt/conda/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=10999 /workspace/transformers/examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json --validation_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmpboxxylsc --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 10 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix "translate English to Romanian: " --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3 --do_eval --max_eval_samples 16 --per_device_eval_batch_size 2 --deepspeed /workspace/transformers/tests/deepspeed/ds_config_zero2.json
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:96:main] 0 NCCL_VERSION=2.8.4
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:103:main] WORLD INFO DICT: {'localhost': [0]}
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:109:main] nnodes=1, num_local_procs=1, node_rank=0
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:122:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:123:main] dist_world_size=1
stdout: [2022-06-09 20:27:04,574] [INFO] [launch.py:125:main] Setting CUDA_VISIBLE_DEVICES=0
stdout: [2022-06-09 20:27:06,867] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
stdout: 06/09/2022 20:27:07 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
stdout: 06/09/2022 20:27:07 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
stdout: _n_gpu=1,
stdout: adafactor=False,
stdout: adam_beta1=0.9,
stdout: adam_beta2=0.999,
stdout: adam_epsilon=1e-08,
stdout: auto_find_batch_size=False,
stdout: bf16=False,
stdout: bf16_full_eval=False,
stdout: data_seed=None,
stdout: dataloader_drop_last=False,
stdout: dataloader_num_workers=0,
stdout: dataloader_pin_memory=True,
stdout: ddp_bucket_cap_mb=None,
stdout: ddp_find_unused_parameters=None,
stdout: debug=[],
stdout: deepspeed=/workspace/transformers/tests/deepspeed/ds_config_zero2.json,
stdout: disable_tqdm=False,
stdout: do_eval=True,
stdout: do_predict=False,
stdout: do_train=True,
stdout: eval_accumulation_steps=None,
stdout: eval_delay=0,
stdout: eval_steps=10,
stdout: evaluation_strategy=IntervalStrategy.NO,
stdout: fp16=True,
stdout: fp16_backend=auto,
stdout: fp16_full_eval=False,
stdout: fp16_opt_level=O1,
stdout: fsdp=[],
stdout: fsdp_min_num_params=0,
stdout: full_determinism=False,
stdout: generation_max_length=None,
stdout: generation_num_beams=None,
stdout: gradient_accumulation_steps=1,
stdout: gradient_checkpointing=False,
stdout: greater_is_better=None,
stdout: group_by_length=True,
stdout: half_precision_backend=auto,
stdout: hub_model_id=None,
stdout: hub_private_repo=False,
stdout: hub_strategy=HubStrategy.EVERY_SAVE,
stdout: hub_token=<HUB_TOKEN>,
stdout: ignore_data_skip=False,
stdout: include_inputs_for_metrics=False,
stdout: label_names=None,
stdout: label_smoothing_factor=0.1,
stdout: learning_rate=0.003,
stdout: length_column_name=length,
stdout: load_best_model_at_end=False,
stdout: local_rank=0,
stdout: log_level=-1,
stdout: log_level_replica=-1,
stdout: log_on_each_node=True,
stdout: logging_dir=/tmp/tmpboxxylsc/runs/Jun09_20-27-06_1e3412729191,
stdout: logging_first_step=False,
stdout: logging_nan_inf_filter=True,
stdout: logging_steps=500,
stdout: logging_strategy=IntervalStrategy.STEPS,
stdout: lr_scheduler_type=SchedulerType.LINEAR,
stdout: max_grad_norm=1.0,
stdout: max_steps=-1,
stdout: metric_for_best_model=None,
stdout: mp_parameters=,
stdout: no_cuda=False,
stdout: num_train_epochs=1.0,
stdout: optim=OptimizerNames.ADAMW_HF,
stdout: output_dir=/tmp/tmpboxxylsc,
stdout: overwrite_output_dir=True,
stdout: past_index=-1,
stdout: per_device_eval_batch_size=2,
stdout: per_device_train_batch_size=2,
stdout: predict_with_generate=True,
stdout: prediction_loss_only=False,
stdout: push_to_hub=False,
stdout: push_to_hub_model_id=None,
stdout: push_to_hub_organization=None,
stdout: push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
stdout: remove_unused_columns=True,
stdout: report_to=[],
stdout: resume_from_checkpoint=None,
stdout: run_name=/tmp/tmpboxxylsc,
stdout: save_on_each_node=False,
stdout: save_steps=0,
stdout: save_strategy=IntervalStrategy.STEPS,
stdout: save_total_limit=None,
stdout: seed=42,
stdout: sharded_ddp=[],
stdout: skip_memory_metrics=True,
stdout: sortish_sampler=False,
stdout: tf32=None,
stdout: torchdynamo=None,
stdout: tpu_metrics_debug=False,
stdout: tpu_num_cores=None,
stdout: use_ipex=False,
stdout: use_legacy_prediction_loop=False,
stdout: warmup_ratio=0.0,
stdout: warmup_steps=8,
stdout: weight_decay=0.0,
stdout: xpu_backend=None,
stdout: )
stdout: 06/09/2022 20:27:07 - WARNING - datasets.builder - Using custom data configuration default-fad90dc19745c6b9
stdout: 06/09/2022 20:27:07 - INFO - datasets.builder - Overwrite dataset info from restored data version.
stdout: 06/09/2022 20:27:07 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/json/default-fad90dc19745c6b9/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b
stdout: 06/09/2022 20:27:07 - WARNING - datasets.builder - Reusing dataset json (/root/.cache/huggingface/datasets/json/default-fad90dc19745c6b9/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b)
stdout: 06/09/2022 20:27:07 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/json/default-fad90dc19745c6b9/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b
100%|██████████| 2/2 [00:00<00:00, 752.81it/s]
stderr: [INFO|configuration_utils.py:659] 2022-06-09 20:27:07,430 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
stderr: [INFO|configuration_utils.py:708] 2022-06-09 20:27:07,444 >> Model config T5Config {
stderr: "_name_or_path": "t5-small",
stderr: "architectures": [
stderr: "T5WithLMHeadModel"
stderr: ],
stderr: "d_ff": 2048,
stderr: "d_kv": 64,
stderr: "d_model": 512,
stderr: "decoder_start_token_id": 0,
stderr: "dense_act_fn": "relu",
stderr: "dropout_rate": 0.1,
stderr: "eos_token_id": 1,
stderr: "feed_forward_proj": "relu",
stderr: "initializer_factor": 1.0,
stderr: "is_encoder_decoder": true,
stderr: "is_gated_act": false,
stderr: "layer_norm_epsilon": 1e-06,
stderr: "model_type": "t5",
stderr: "n_positions": 512,
stderr: "num_decoder_layers": 6,
stderr: "num_heads": 8,
stderr: "num_layers": 6,
stderr: "output_past": true,
stderr: "pad_token_id": 0,
stderr: "relative_attention_max_distance": 128,
stderr: "relative_attention_num_buckets": 32,
stderr: "task_specific_params": {
stderr: "summarization": {
stderr: "early_stopping": true,
stderr: "length_penalty": 2.0,
stderr: "max_length": 200,
stderr: "min_length": 30,
stderr: "no_repeat_ngram_size": 3,
stderr: "num_beams": 4,
stderr: "prefix": "summarize: "
stderr: },
stderr: "translation_en_to_de": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to German: "
stderr: },
stderr: "translation_en_to_fr": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to French: "
stderr: },
stderr: "translation_en_to_ro": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to Romanian: "
stderr: }
stderr: },
stderr: "transformers_version": "4.20.0.dev0",
stderr: "use_cache": true,
stderr: "vocab_size": 32128
stderr: }
stderr:
stderr: [INFO|tokenization_auto.py:384] 2022-06-09 20:27:07,585 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
stderr: [INFO|configuration_utils.py:659] 2022-06-09 20:27:07,723 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
stderr: [INFO|configuration_utils.py:708] 2022-06-09 20:27:07,724 >> Model config T5Config {
stderr: "_name_or_path": "t5-small",
stderr: "architectures": [
stderr: "T5WithLMHeadModel"
stderr: ],
stderr: "d_ff": 2048,
stderr: "d_kv": 64,
stderr: "d_model": 512,
stderr: "decoder_start_token_id": 0,
stderr: "dense_act_fn": "relu",
stderr: "dropout_rate": 0.1,
stderr: "eos_token_id": 1,
stderr: "feed_forward_proj": "relu",
stderr: "initializer_factor": 1.0,
stderr: "is_encoder_decoder": true,
stderr: "is_gated_act": false,
stderr: "layer_norm_epsilon": 1e-06,
stderr: "model_type": "t5",
stderr: "n_positions": 512,
stderr: "num_decoder_layers": 6,
stderr: "num_heads": 8,
stderr: "num_layers": 6,
stderr: "output_past": true,
stderr: "pad_token_id": 0,
stderr: "relative_attention_max_distance": 128,
stderr: "relative_attention_num_buckets": 32,
stderr: "task_specific_params": {
stderr: "summarization": {
stderr: "early_stopping": true,
stderr: "length_penalty": 2.0,
stderr: "max_length": 200,
stderr: "min_length": 30,
stderr: "no_repeat_ngram_size": 3,
stderr: "num_beams": 4,
stderr: "prefix": "summarize: "
stderr: },
stderr: "translation_en_to_de": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to German: "
stderr: },
stderr: "translation_en_to_fr": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to French: "
stderr: },
stderr: "translation_en_to_ro": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to Romanian: "
stderr: }
stderr: },
stderr: "transformers_version": "4.20.0.dev0",
stderr: "use_cache": true,
stderr: "vocab_size": 32128
stderr: }
stderr:
stderr: [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/spiece.model from cache at /root/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
stderr: [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
stderr: [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/added_tokens.json from cache at None
stderr: [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/special_tokens_map.json from cache at None
stderr: [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/tokenizer_config.json from cache at None
stderr: [INFO|configuration_utils.py:659] 2022-06-09 20:27:08,700 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
stderr: [INFO|configuration_utils.py:708] 2022-06-09 20:27:08,701 >> Model config T5Config {
stderr: "_name_or_path": "t5-small",
stderr: "architectures": [
stderr: "T5WithLMHeadModel"
stderr: ],
stderr: "d_ff": 2048,
stderr: "d_kv": 64,
stderr: "d_model": 512,
stderr: "decoder_start_token_id": 0,
stderr: "dense_act_fn": "relu",
stderr: "dropout_rate": 0.1,
stderr: "eos_token_id": 1,
stderr: "feed_forward_proj": "relu",
stderr: "initializer_factor": 1.0,
stderr: "is_encoder_decoder": true,
stderr: "is_gated_act": false,
stderr: "layer_norm_epsilon": 1e-06,
stderr: "model_type": "t5",
stderr: "n_positions": 512,
stderr: "num_decoder_layers": 6,
stderr: "num_heads": 8,
stderr: "num_layers": 6,
stderr: "output_past": true,
stderr: "pad_token_id": 0,
stderr: "relative_attention_max_distance": 128,
stderr: "relative_attention_num_buckets": 32,
stderr: "task_specific_params": {
stderr: "summarization": {
stderr: "early_stopping": true,
stderr: "length_penalty": 2.0,
stderr: "max_length": 200,
stderr: "min_length": 30,
stderr: "no_repeat_ngram_size": 3,
stderr: "num_beams": 4,
stderr: "prefix": "summarize: "
stderr: },
stderr: "translation_en_to_de": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to German: "
stderr: },
stderr: "translation_en_to_fr": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to French: "
stderr: },
stderr: "translation_en_to_ro": {
stderr: "early_stopping": true,
stderr: "max_length": 300,
stderr: "num_beams": 4,
stderr: "prefix": "translate English to Romanian: "
stderr: }
stderr: },
stderr: "transformers_version": "4.20.0.dev0",
stderr: "use_cache": true,
stderr: "vocab_size": 32128
stderr: }
stderr:
stderr: /workspace/transformers/src/transformers/models/t5/tokenization_t5_fast.py:156: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
stderr: For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
stderr: - Be aware that you SHOULD NOT rely on t5-small automatically truncating your input to 512 when padding/encoding.
stderr: - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
stderr: - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
stderr: warnings.warn(
stderr: [INFO|modeling_utils.py:2049] 2022-06-09 20:27:08,937 >> loading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885
stderr: [INFO|modeling_utils.py:2425] 2022-06-09 20:27:09,933 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
stderr:
stderr: [INFO|modeling_utils.py:2433] 2022-06-09 20:27:09,933 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
stderr: If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
stdout: 06/09/2022 20:27:10 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/json/default-fad90dc19745c6b9/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b/cache-94ae3814baee36bf.arrow
stdout: 06/09/2022 20:27:10 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/json/default-fad90dc19745c6b9/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b/cache-f0e9ed0becfce7c3.arrow
Running tokenizer on validation dataset: 100%|██████████| 1/1 [00:00<00:00, 60.72ba/s]
stderr: [INFO|trainer.py:519] 2022-06-09 20:27:10,499 >> Using cuda_amp half precision backend
stdout: [2022-06-09 20:27:10,506] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed info: version=0.6.5, git-hash=unknown, git-branch=unknown
stdout: [2022-06-09 20:27:14,861] [INFO] [engine.py:278:__init__] DeepSpeed Flops Profiler Enabled: False
stdout: Adam Optimizer #0 is created with AVX2 arithmetic capability.
stdout: Config: alpha=0.003000, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1
stdout: [2022-06-09 20:27:15,316] [INFO] [engine.py:1100:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer
stdout: [2022-06-09 20:27:15,321] [INFO] [engine.py:1108:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
stdout: [2022-06-09 20:27:15,321] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
stdout: [2022-06-09 20:27:15,321] [INFO] [logging.py:69:log_dist] [Rank 0] Creating fp16 ZeRO stage 2 optimizer
stdout: [2022-06-09 20:27:15,321] [INFO] [stage_1_and_2.py:133:__init__] Reduce bucket size 200000000.0
stdout: [2022-06-09 20:27:15,321] [INFO] [stage_1_and_2.py:134:__init__] Allgather bucket size 200000000.0
stdout: [2022-06-09 20:27:15,321] [INFO] [stage_1_and_2.py:135:__init__] CPU Offload: True
stdout: [2022-06-09 20:27:15,321] [INFO] [stage_1_and_2.py:136:__init__] Round robin gradient partitioning: False
stdout: Rank: 0 partition count [1] and sizes[(60492288, False)]
stdout: [2022-06-09 20:27:15,709] [INFO] [utils.py:828:see_memory_usage] Before initializing optimizer states
stdout: [2022-06-09 20:27:15,710] [INFO] [utils.py:829:see_memory_usage] MA 0.14 GB Max_MA 0.14 GB CA 0.24 GB Max_CA 0 GB
stdout: [2022-06-09 20:27:15,710] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 6.3 GB, percent = 10.7%
stdout: [2022-06-09 20:27:17,594] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 489
stdout: [2022-06-09 20:27:17,594] [ERROR] [launch.py:184:sigkill_handler] ['/opt/conda/bin/python3', '-u', '/workspace/transformers/examples/pytorch/translation/run_translation.py', '--local_rank=0', '--model_name_or_path', 't5-small', '--train_file', '/workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json', '--validation_file', '/workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json', '--output_dir', '/tmp/tmpboxxylsc', '--overwrite_output_dir', '--max_source_length', '32', '--max_target_length', '32', '--val_max_target_length', '32', '--warmup_steps', '8', '--predict_with_generate', '--save_steps', '0', '--eval_steps', '10', '--group_by_length', '--label_smoothing_factor', '0.1', '--source_lang', 'en', '--target_lang', 'ro', '--report_to', 'none', '--source_prefix', '"translate English to Romanian: "', '--fp16', '--do_train', '--num_train_epochs', '1', '--max_train_samples', '16', '--per_device_train_batch_size', '2', '--learning_rate', '3e-3', '--do_eval', '--max_eval_samples', '16', '--per_device_eval_batch_size', '2', '--deepspeed', '/workspace/transformers/tests/deepspeed/ds_config_zero2.json'] exits with return code = -4
FAILED
=============================================================================================== FAILURES ===============================================================================================
_____________________________________________________________________ TestDeepSpeedWithLauncher.test_basic_distributed_zero2_fp16 ______________________________________________________________________
a = (<test_deepspeed.TestDeepSpeedWithLauncher testMethod=test_basic_distributed_zero2_fp16>,)
@wraps(func)
def standalone_func(*a):
> return func(*(a + p.args), **p.kwargs)
/opt/conda/lib/python3.8/site-packages/parameterized/parameterized.py:533:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/deepspeed/test_deepspeed.py:858: in test_basic_distributed
self.run_and_check(stage=stage, dtype=dtype, distributed=True)
tests/deepspeed/test_deepspeed.py:971: in run_and_check
output_dir = self.run_trainer(
tests/deepspeed/test_deepspeed.py:1070: in run_trainer
execute_subprocess_async(cmd, env=self.get_env())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cmd = ['deepspeed', '--num_nodes', '1', '--num_gpus', '1', '--master_port', ...]
env = {'BASH_ENV': '/etc/bash.bashrc', 'COCOAPI_VERSION': '2.0+nv0.4.0', 'CUBLAS_VERSION': '11.4.1.1026', 'CUDA_CACHE_DISABLE': '1', ...}, stdin = None, timeout = 180, quiet = False, echo = True
def execute_subprocess_async(cmd, env=None, stdin=None, timeout=180, quiet=False, echo=True) -> _RunOutput:
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
_stream_subprocess(cmd, env=env, stdin=stdin, timeout=timeout, quiet=quiet, echo=echo)
)
cmd_str = " ".join(cmd)
if result.returncode > 0:
stderr = "\n".join(result.stderr)
> raise RuntimeError(
f"'{cmd_str}' failed with returncode {result.returncode}\n\n"
f"The combined stderr from workers follows:\n{stderr}"
)
E RuntimeError: 'deepspeed --num_nodes 1 --num_gpus 1 --master_port 10999 /workspace/transformers/examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json --validation_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmpboxxylsc --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 10 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix "translate English to Romanian: " --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3 --do_eval --max_eval_samples 16 --per_device_eval_batch_size 2 --deepspeed /workspace/transformers/tests/deepspeed/ds_config_zero2.json' failed with returncode 252
E
E The combined stderr from workers follows:
100%|██████████| 2/2 [00:00<00:00, 752.81it/s]
E [INFO|configuration_utils.py:659] 2022-06-09 20:27:07,430 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
E [INFO|configuration_utils.py:708] 2022-06-09 20:27:07,444 >> Model config T5Config {
E "_name_or_path": "t5-small",
E "architectures": [
E "T5WithLMHeadModel"
E ],
E "d_ff": 2048,
E "d_kv": 64,
E "d_model": 512,
E "decoder_start_token_id": 0,
E "dense_act_fn": "relu",
E "dropout_rate": 0.1,
E "eos_token_id": 1,
E "feed_forward_proj": "relu",
E "initializer_factor": 1.0,
E "is_encoder_decoder": true,
E "is_gated_act": false,
E "layer_norm_epsilon": 1e-06,
E "model_type": "t5",
E "n_positions": 512,
E "num_decoder_layers": 6,
E "num_heads": 8,
E "num_layers": 6,
E "output_past": true,
E "pad_token_id": 0,
E "relative_attention_max_distance": 128,
E "relative_attention_num_buckets": 32,
E "task_specific_params": {
E "summarization": {
E "early_stopping": true,
E "length_penalty": 2.0,
E "max_length": 200,
E "min_length": 30,
E "no_repeat_ngram_size": 3,
E "num_beams": 4,
E "prefix": "summarize: "
E },
E "translation_en_to_de": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to German: "
E },
E "translation_en_to_fr": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to French: "
E },
E "translation_en_to_ro": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to Romanian: "
E }
E },
E "transformers_version": "4.20.0.dev0",
E "use_cache": true,
E "vocab_size": 32128
E }
E
E [INFO|tokenization_auto.py:384] 2022-06-09 20:27:07,585 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
E [INFO|configuration_utils.py:659] 2022-06-09 20:27:07,723 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
E [INFO|configuration_utils.py:708] 2022-06-09 20:27:07,724 >> Model config T5Config {
E "_name_or_path": "t5-small",
E "architectures": [
E "T5WithLMHeadModel"
E ],
E "d_ff": 2048,
E "d_kv": 64,
E "d_model": 512,
E "decoder_start_token_id": 0,
E "dense_act_fn": "relu",
E "dropout_rate": 0.1,
E "eos_token_id": 1,
E "feed_forward_proj": "relu",
E "initializer_factor": 1.0,
E "is_encoder_decoder": true,
E "is_gated_act": false,
E "layer_norm_epsilon": 1e-06,
E "model_type": "t5",
E "n_positions": 512,
E "num_decoder_layers": 6,
E "num_heads": 8,
E "num_layers": 6,
E "output_past": true,
E "pad_token_id": 0,
E "relative_attention_max_distance": 128,
E "relative_attention_num_buckets": 32,
E "task_specific_params": {
E "summarization": {
E "early_stopping": true,
E "length_penalty": 2.0,
E "max_length": 200,
E "min_length": 30,
E "no_repeat_ngram_size": 3,
E "num_beams": 4,
E "prefix": "summarize: "
E },
E "translation_en_to_de": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to German: "
E },
E "translation_en_to_fr": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to French: "
E },
E "translation_en_to_ro": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to Romanian: "
E }
E },
E "transformers_version": "4.20.0.dev0",
E "use_cache": true,
E "vocab_size": 32128
E }
E
E [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/spiece.model from cache at /root/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
E [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
E [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/added_tokens.json from cache at None
E [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/special_tokens_map.json from cache at None
E [INFO|tokenization_utils_base.py:1781] 2022-06-09 20:27:08,559 >> loading file https://huggingface.co/t5-small/resolve/main/tokenizer_config.json from cache at None
E [INFO|configuration_utils.py:659] 2022-06-09 20:27:08,700 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
E [INFO|configuration_utils.py:708] 2022-06-09 20:27:08,701 >> Model config T5Config {
E "_name_or_path": "t5-small",
E "architectures": [
E "T5WithLMHeadModel"
E ],
E "d_ff": 2048,
E "d_kv": 64,
E "d_model": 512,
E "decoder_start_token_id": 0,
E "dense_act_fn": "relu",
E "dropout_rate": 0.1,
E "eos_token_id": 1,
E "feed_forward_proj": "relu",
E "initializer_factor": 1.0,
E "is_encoder_decoder": true,
E "is_gated_act": false,
E "layer_norm_epsilon": 1e-06,
E "model_type": "t5",
E "n_positions": 512,
E "num_decoder_layers": 6,
E "num_heads": 8,
E "num_layers": 6,
E "output_past": true,
E "pad_token_id": 0,
E "relative_attention_max_distance": 128,
E "relative_attention_num_buckets": 32,
E "task_specific_params": {
E "summarization": {
E "early_stopping": true,
E "length_penalty": 2.0,
E "max_length": 200,
E "min_length": 30,
E "no_repeat_ngram_size": 3,
E "num_beams": 4,
E "prefix": "summarize: "
E },
E "translation_en_to_de": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to German: "
E },
E "translation_en_to_fr": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to French: "
E },
E "translation_en_to_ro": {
E "early_stopping": true,
E "max_length": 300,
E "num_beams": 4,
E "prefix": "translate English to Romanian: "
E }
E },
E "transformers_version": "4.20.0.dev0",
E "use_cache": true,
E "vocab_size": 32128
E }
E
E /workspace/transformers/src/transformers/models/t5/tokenization_t5_fast.py:156: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
E For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
E - Be aware that you SHOULD NOT rely on t5-small automatically truncating your input to 512 when padding/encoding.
E - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
E - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
E warnings.warn(
E [INFO|modeling_utils.py:2049] 2022-06-09 20:27:08,937 >> loading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885
E [INFO|modeling_utils.py:2425] 2022-06-09 20:27:09,933 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
E
E [INFO|modeling_utils.py:2433] 2022-06-09 20:27:09,933 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
E If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
Running tokenizer on validation dataset: 100%|██████████| 1/1 [00:00<00:00, 60.72ba/s]
E [INFO|trainer.py:519] 2022-06-09 20:27:10,499 >> Using cuda_amp half precision backend
src/transformers/testing_utils.py:1447: RuntimeError
```<|||||>The test wrapper gets in the way, you can test directly with the code the test is running (copy-n-pasted from your message):
```
deepspeed --num_nodes 1 --num_gpus 1 --master_port 10999 /workspace/transformers/examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json --validation_file /workspace/transformers/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmpboxxylsc --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 10 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix "translate English to Romanian: " --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3 --do_eval --max_eval_samples 16 --per_device_eval_batch_size 2 --deepspeed /workspace
```
and then it should be easier to debug. In this case I think it's hanging and not returning anything hence there is no traceback from the sub-program.
And yes, `py-spy` is the next step to see where it's hanging.
|
transformers | 17,606 | closed | Adding `top_k` argument to `text-classification` pipeline. | # What does this PR do?
A lot of users are wondering why the API does not return results sorted.
This PR enables `transformers` to do that and exposes the functionality
to users without causing any regression.
The API will simply override the default argument to get sorted results.
- Deprecate `return_all_scores` as `top_k` is more uniform with other
pipelines, and a superset of what `return_all_scores` can do.
BC is maintained though.
`return_all_scores=True` -> `top_k=None`
`return_all_scores=False` -> `top_k=1`
- Using `top_k` will imply sorting the results, but using no argument
will keep the results unsorted for backward compatibility (a short usage sketch follows below).
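A rough usage sketch (the checkpoint name is only an illustrative choice, not part of this PR):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

# No argument: unchanged default, one top label per input.
print(classifier("This movie was great!"))

# top_k=None: all labels, sorted by score (what return_all_scores=True used to return, now sorted).
print(classifier("This movie was great!", top_k=None))

# top_k=2: only the two highest-scoring labels.
print(classifier("This movie was great!", top_k=2))
```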
| 06-08-2022 13:32:28 | 06-08-2022 13:32:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This PR should be cleaned up: it changed the behavior of the text classification pipeline and the documentation is not clear. Can we get more explanation of how we should modify our code to get the same behavior with the top_k arg, or continue supporting return_all_scores with the same output order?
Passing `return_all_scores=True` is not equivalent to `top_k=1`, for example, and setting `top_k=n` sorts the results, which is not the same order as before this change. Please provide better documentation and some way to alleviate the pain of upgrading the transformers library by minimizing the incompatible changes. |
transformers | 17,605 | closed | CLI: Properly detect encoder-decoder models | # What does this PR do?
Micro-PR that does what the title says. Some models have the encoder-decoder structure nested, and we now have access to the config file. | 06-08-2022 12:18:58 | 06-08-2022 12:18:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,604 | closed | OSError: Can't load config for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing a config.json file | ### System Info
```shell
This is my code, simple:
import torch
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-uncased') # Download vocabulary from S3 and cache.
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', './test/bert_saved_model/') # E.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`
And I get this error:
Downloading: "https://github.com/huggingface/pytorch-transformers/archive/main.zip" to C:\Users\20247/.cache\torch\hub\main.zip
Traceback (most recent call last):
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\connectionpool.py", line 700, in urlopen
self._prepare_proxy(conn)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\connectionpool.py", line 994, in _prepare_proxy
conn.connect()
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\connection.py", line 364, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\connection.py", line 501, in _connect_tls_proxy
socket = ssl_wrap_socket(
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "C:\Users\20247\anaconda3\envs\new_1\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\20247\anaconda3\envs\new_1\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Users\20247\anaconda3\envs\new_1\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
socket.timeout: _ssl.c:1112: The handshake operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\adapters.py", line 440, in send
resp = conn.urlopen(
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\connectionpool.py", line 785, in urlopen
retries = retries.increment(
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\urllib3-1.26.8-py3.9.egg\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1112: The handshake operation timed out')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\configuration_utils.py", line 595, in _get_config_dict
resolved_config_file = cached_path(
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\file_utils.py", line 1947, in cached_path
output_path = get_from_cache(
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\file_utils.py", line 2150, in get_from_cache
r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\api.py", line 102, in head
return request('head', url, **kwargs)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\requests-2.27.1-py3.9.egg\requests\adapters.py", line 513, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1112: The handshake operation timed out')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\delete.py", line 2, in <module>
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-uncased') # Download vocabulary from S3 and cache.
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\torch\hub.py", line 399, in load
model = _load_local(repo_or_dir, model, *args, **kwargs)
File "C:\Users\20247\anaconda3\envs\new_1\lib\site-packages\torch\hub.py", line 428, in _load_local
model = entry(*args, **kwargs)
File "C:\Users\20247/.cache\torch\hub\huggingface_pytorch-transformers_main\hubconf.py", line 68, in tokenizer
return AutoTokenizer.from_pretrained(*args, **kwargs)
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\models\auto\tokenization_auto.py", line 485, in from_pretrained
config = AutoConfig.from_pretrained(
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\models\auto\configuration_auto.py", line 647, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\configuration_utils.py", line 547, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\文件备份\temp_can\transformers-77321481247787c97568c3b9f64b19e22351bab8\src\transformers\configuration_utils.py", line 631, in _get_config_dict
raise EnvironmentError(
OSError: Can't load config for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing a config.json file
Process finished with exit code 1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install torch! And also: pip install tqdm boto3 requests regex sentencepiece sacremoses
And use my code.
### Expected behavior
```shell
Could any expert help me fix this bug? It could also help other users. Thanks!
```
| 06-08-2022 10:38:30 | 06-08-2022 10:38:30 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,603 | closed | adding encoder/decoder normalize_before to BART models | # What does this PR do?
This PR adds `encoder_normalize_before` and `decoder_normalize_before` parameters to BART models. These parameters control whether or not the `nn.LayerNorm` is applied before each block in the encoder/decoder (not used by all models -- often helps the model learn).
These parameters are already in use in `fairseq`; see here for reference on how they are used in [`TransformerEncoderLayerBase`](https://github.com/facebookresearch/fairseq/blob/f97cdf76d9cd20b16c3ff51f46382c2ed639bd17/fairseq/modules/transformer_layer.py#L164), [`TransformerDecoderLayerBase`](https://github.com/facebookresearch/fairseq/blob/f97cdf76d9cd20b16c3ff51f46382c2ed639bd17/fairseq/modules/transformer_layer.py#L385), [`TransformerEncoderBase`](https://github.com/facebookresearch/fairseq/blob/f97cdf76d9cd20b16c3ff51f46382c2ed639bd17/fairseq/models/transformer/transformer_encoder.py#L101), and [`TransformerDecoderBase`](https://github.com/facebookresearch/fairseq/blob/f97cdf76d9cd20b16c3ff51f46382c2ed639bd17/fairseq/models/transformer/transformer_decoder.py#L126).
This PR simply implements the same logic, that is: when `*_normalize_before` is set to `False` the model behaves exactly as before; when it is set to `True`, `nn.LayerNorm` is applied before each block.
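For reference, a minimal sketch of the pre-norm vs. post-norm ordering this flag toggles (an illustration only, not the actual modeling code; the layer shown is deliberately simplified):
```python
import torch
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    def __init__(self, d_model: int, normalize_before: bool):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.layer_norm = nn.LayerNorm(d_model)
        self.normalize_before = normalize_before

    def forward(self, x):
        residual = x
        if self.normalize_before:      # pre-norm: LayerNorm before the sub-block
            x = self.layer_norm(x)
        x, _ = self.self_attn(x, x, x)
        x = residual + x
        if not self.normalize_before:  # post-norm (current BART behavior): LayerNorm after the residual
            x = self.layer_norm(x)
        return x

x = torch.randn(2, 5, 16)
print(EncoderLayerSketch(16, normalize_before=True)(x).shape)  # torch.Size([2, 5, 16])
```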
This PR is useful to load models trained with `fairseq` with that option active (e.g., [GENRE](https://github.com/facebookresearch/GENRE) -- see [here](https://github.com/facebookresearch/GENRE/blob/115095c3f526a7f12fc23f074f0da3ff7c63a2ad/scripts_mgenre/train.sh#L47)).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten
@patil-suraj
@ola13
| 06-08-2022 09:20:48 | 06-08-2022 09:20:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hmmm not really in favor of this PR as it will greatly reduce readability IMO and does not correspond to the official BART implementation.
@sgugger @LysandreJik what are your thoughts here? <|||||>@patrickvonplaten I'm open to suggestions. Essentially I want to make it possible to load some models that currently are not supported because they have been trained with `--encoder-normalize-before` and `--decoder-normalize-before` flags in `fairseq`. See [here](https://github.com/facebookresearch/GENRE/blob/1fa162308403808c7f226f792677faf566d5524f/scripts_mgenre/train.sh#L47)<|||||>@patrickvonplaten Additionally I want to release an [MBART model](https://dl.fbaipublicfiles.com/GENRE/mbart.cc100.tar.gz) trained on 125 languages that suffers from the exact same problem, i.e. it has been trained with `--encoder-normalize-before` and `--decoder-normalize-before` flags in `fairseq`.<|||||>You can use the [code in the Hub feature](https://huggingface.co/docs/transformers/custom_models) to share a model with customized code, but I agree with @patrickvonplaten here. Transformers is not a modular toolbox and this has nothing to do with the original BART model (activating those new config arguments would give bad predictions with a pretrained checkpoint).<|||||>@nicola-decao - the MBart modeling code is different from the Bart code in Transformers. I think it should work with the MBart code :-)<|||||>@patrickvonplaten Do you suggest incorporating `encoder/decoder normalize_before` directly into the MBart implementation?
I like that option so I can release one of the official FAIR [MBart model](https://dl.fbaipublicfiles.com/GENRE/mbart.cc100.tar.gz) trained on 100 languages.<|||||>> @nicola-decao - the MBart modeling code is different from the Bart code in Transformers. I think it should work with the MBart code :-)
@patrickvonplaten Ahhhh I see what you mean now! In MBart the layer normalization is already inverted! Let me try to import my models with that one and if it works I'll cancel this PR.<|||||>It worked! :D Sorry for the bother! |
transformers | 17,602 | closed | Fix link for community notebooks | Fix the link due to the reorganization.
@sgugger @LysandreJik @julien-c | 06-08-2022 08:01:23 | 06-08-2022 08:01:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,601 | closed | Error with inferencing with fine-tuned model loaded from keras load_model | ### System Info
```shell
transformers version: 4.16.2
tensorflow version: 2.9.1
```
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)
imdb = load_dataset('imdb')
def preprocess_function(examples):
return tokenizer(examples["text"], truncation=True)
tokenized_imdb = imdb.map(preprocess_function, batched=True)
train_ds = tokenized_imdb['train']
val_test_ds = tokenized_imdb['test'].train_test_split(test_size=0.5)
val_ds, test_ds = val_test_ds['train'], val_test_ds['test']
print(len(train_ds), len(val_ds), len(test_ds))
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = 8
tf_train_ds = train_ds.to_tf_dataset(
columns=['input_ids', 'attention_mask', 'token_type_ids'],
label_cols=['label'],
shuffle=True,
drop_remainder=False,
batch_size=batch_size,
collate_fn=data_collator
)
tf_val_ds = val_ds.to_tf_dataset(
columns=['input_ids', 'attention_mask', 'token_type_ids'],
label_cols=['label'],
shuffle=True,
drop_remainder=False,
batch_size=batch_size,
collate_fn=data_collator
)
tf_test_ds = test_ds.to_tf_dataset(
columns=['input_ids', 'attention_mask', 'token_type_ids'],
label_cols=['label'],
shuffle=False,
drop_remainder=False,
batch_size=batch_size,
collate_fn=data_collator
)
num_epochs = 10
batches_per_epoch = len(tf_train_ds)
num_train_steps = batches_per_epoch * num_epochs
num_warmup_steps = int(0.1 * num_train_steps)
init_lr = 2e-5
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
model.save("./best_model")
re_model = tf.keras.models.load_model('./best_model', custom_objects={'AdamWeightDecay': optimizer})
# it works fine with serving() method
for i, j in tf_test_ds:
a = re_model.serving({
'input_ids': tf.cast(i['input_ids'], tf.int32),
'token_type_ids': tf.cast(i['token_type_ids'], tf.int32),
'attention_mask': tf.cast(i['attention_mask'], tf.int32),
})
print(a['logits'].numpy().argmax(axis=1))
break
# it does not work with __call__ method
results = re_model.evaluate(x=tf_test_ds)
for name, value in zip(re_model.metrics_names, results):
print("%s: %.3f" % (name, value))
```
error message:
```
ValueError: Exception encountered when calling layer "tf_bert_for_sequence_classification" (type TFBertForSequenceClassification).
Could not find matching concrete function to call loaded from the SavedModel. Got:
Positional arguments (11 total):
* {'attention_mask': <tf.Tensor 'input_ids:0' shape=(None, None) dtype=int64>,
'input_ids': <tf.Tensor 'input_ids_1:0' shape=(None, None) dtype=int64>,
'token_type_ids': <tf.Tensor 'input_ids_2:0' shape=(None, None) dtype=int64>}
* None
* None
* None
* None
* None
* None
* None
* None
* None
* False
Keyword arguments: {}
Expected these arguments to match one of the following 2 option(s):
Option 1:
Positional arguments (11 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids/input_ids')}
* None
* None
* None
* None
* None
* None
* None
* None
* None
* False
Keyword arguments: {}
Option 2:
Positional arguments (11 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids/input_ids')}
* None
* None
* None
* None
* None
* None
* None
* None
* None
* True
Keyword arguments: {}
Call arguments received by layer "tf_bert_for_sequence_classification" (type TFBertForSequenceClassification):
• args=({'input_ids': 'tf.Tensor(shape=(None, None), dtype=int64)', 'token_type_ids': 'tf.Tensor(shape=(None, None), dtype=int64)', 'attention_mask': 'tf.Tensor(shape=(None, None), dtype=int64)'},)
• kwargs={'training': 'False'}
```
Basically, from what I could tell, the reloaded TFBertForSequenceClassification model seems to accept `input_ids` as its only input, while before saving the model with Keras it also allowed `attention_mask` and `token_type_ids`. Is it something related to the input signatures of the __call__ function?
### Expected behavior
```shell
The model saved with keras.save() works normally after loaded from tf.keras.models.load_model(...).
i.e. the __call__ method should recognize the correct input format, as the serving() method does.
```
| 06-08-2022 07:38:03 | 06-08-2022 07:38:03 | Hi @jamie0725 👋 Thank you for raising this issue. We are aware of problems like this, related to serialization/deserialization of Keras models, and it is in our plans to fix it.
I'm keeping the issue open for tracking purposes :)<|||||>> Hi @jamie0725 👋 Thank you for raising this issue. We are aware of problems like this, related to serialization/deserialization of Keras models, and it is in our plans to fix it.
>
> I'm keeping the issue open for tracking purposes :)
Thanks for the notice, please let me know when there is a fix.
Meanwhile, is there any workaround I can do except for using the ``serving()`` method? Because ``serving()`` does not support keras methods like `evaluate()` etc.<|||||>There is, working with TF's concrete functions. Have a look at [this comment](https://github.com/huggingface/transformers/issues/11296#issuecomment-825798041) for an example :)
(We will probably build documentation for it soon, as part of our plans)<|||||>> There is, working with TF's concrete functions. Have a look at [this comment](https://github.com/huggingface/transformers/issues/11296#issuecomment-825798041) for an example :)
>
> (We will probably build documentation for it soon, as part of our plans)
I tried, but it gives the same error. My code is as below:
```
@tf.function
def call_model(input_ids, attention_mask):
return model(input_ids=input_ids, attention_mask=attention_mask)
concrete_function = call_model.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name="input_ids"), tf.TensorSpec([None, None], tf.int32, name="attention_mask"))
model.save('./best_model', signatures=concrete_function)
re_model = tf.keras.models.load_model('./best_model', custom_objects={'AdamWeightDecay': optimizer})
results = re_model.evaluate(x=tf_test_ds)
for name, value in zip(re_model.metrics_names, results):
print("%s: %.3f" % (name, value))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,600 | closed | Added preprocessing.mdx italian translation | ## What does this PR do?
Italian translation of doc related to the preprocessing of :hugs: Transformers.
* updated _toctree.yml
* added preprocessing.mdx
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
See issue: [#17459](https://github.com/huggingface/transformers/issues/17459)
@omarespejel
@sgugger | 06-08-2022 06:46:07 | 06-08-2022 06:46:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @nickprock! Thank you for this translation 🚀🚀
Can I ask you if you can edit these things?
- Un modello non comprende il testo piano --> un modello non comprende un testo grezzo
- CLS and SEP (classifier and separator) - --> CLS e SEP (classificatore e separatore)
- just before the "Audio" chapter there's a problem if you see the doc preview, where it says: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': , 'token_type_ids': , 'attention_mask': } ``` I see the same problem in the English doc, you can take a cue from the Spanish doc where the problem has been solved
- E' importante che la frequenza --> È importante che la frequenza
- il modello Wav2Vec2 msu questo dataset --> il modello Wav2Vec2 su questo dataset
- Usa il metodo in 🤗 Datasets' cast_column per alzare della frequenza di campionamento a 16kHz --> Usa il metodo di 🤗 Datasets cast_column per alzare la frequenza di campionamento a 16kHz
- uno 0 - interpretato come silenzaio --> uno 0 - interpretato come silenzio
- Pad and truncate --> Pad e truncate
- il primo campione ga una sequenza --> il primo campione ha una sequenza
- LCrea una funzione che preprocesserà il dataset --> Crea una funzione che preprocesserà il dataset
- per applicare al volo la trasformzazione --> per applicare al volo la trasformazione
- Il Tokenizer per processarfe i testi. --> Il Tokenizer per processare i testi.
- e tokenizza il in to labels --> e tokenizza il testo in labels
- Fantastico, ora dovreste -> Fantastico, ora dovresti
Thank you 🌈🌈<|||||>@mfumanelli thanks for the review 😊 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I fixed and I'm waiting the review<|||||>cc @omarespejel |
transformers | 17,599 | closed | sharded_ddp with auto_wrap is secretly a no-op | ### System Info
```shell
transformers 26e5e129b43760138aed2dfc1cc3c75b481a95e6
torch 1.11.0
fairscale 0.4.6
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The problem occurs in any scenario where you use Trainer with `--sharded_ddp` and use auto_wrap.
Trainer initializes sharded_ddp ([trainer.py:L1183](https://github.com/huggingface/transformers/blob/2f59ad1/src/transformers/trainer.py#L1183-L1184)), here's how trainer calls auto_wrap
```python
if ShardedDDPOption.AUTO_WRAP in self.args.sharded_ddp:
model = auto_wrap(model)
```
Then, auto_wrap checks if you are .in_autowrap_context ([auto_wrap.py:L224](https://github.com/facebookresearch/fairscale/blob/3b8f445/fairscale/nn/wrap/auto_wrap.py#L224-L229)), which is [False by default](https://github.com/facebookresearch/fairscale/blob/3b8f445/fairscale/nn/wrap/auto_wrap.py#L238)
```python
if ConfigAutoWrap.in_autowrap_context: # <-- False
wrapped_module, remainder = ConfigAutoWrap.recursive_wrap(
module, auto_wrap_policy=auto_wrap_policy, module_is_root=True, **kwargs
)
return wrapped_module
return module # <-- so you get this
```
As a result, auto_wrap does nothing and the model ends up not being wrapped. In this setting, ShardedDDP is just an inferior version of regular DDP with no memory savings. The worst part is that __it doesn't give you any clue that something is wrong__. It just silently does nothing.
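For reference, a rough sketch of how `auto_wrap` is normally meant to be used in fairscale, i.e. inside the `enable_wrap` context (this is an illustration based on the fairscale wrap API, not a proposed Trainer patch, and it assumes a distributed process group has already been initialized):
```python
import torch.nn as nn
from fairscale.nn import FullyShardedDataParallel as FSDP
from fairscale.nn.wrap import auto_wrap, enable_wrap

model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))  # stand-in for the real model

# Outside of `enable_wrap`, ConfigAutoWrap.in_autowrap_context stays False and
# auto_wrap(model) returns the model unchanged -- exactly the silent no-op described above.
with enable_wrap(wrapper_cls=FSDP):
    model = auto_wrap(model)  # children get wrapped according to the wrap policy
model = FSDP(model)  # wrap the root module as well (requires torch.distributed to be initialized)
```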
__Example scenario:__
1. install the latest transformers and fairscale (see versions above)
3. go to [language-modelling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) example and follow the tutorial for "GPT-2/GPT and causal language modeling"
4. get to the part where you [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
5. add the following parameters: `--fp16 --sharded_ddp="zero_dp_3 auto_wrap"`
### Expected behavior
```shell
I would expect these options to result in optimizer and/or parameters being sharded and result in reduced GPU memory usage, but that does not happen.
(A) if you have time, it would be nice to make auto_wrap actually wrap the model
(B) if you don't, perhaps it would be simpler to add an assert
from fairscale.whatever import WrappedClassNameHere
assert any(isinstance(module, WrappedClassNameHere) for module in model.modules()), "running sharded_ddp, but there are no modules wrapped for sharding."
```
```
| 06-08-2022 04:09:56 | 06-08-2022 04:09:56 | We don't really support fairscale sharded ddp anymore. You should switch to the newer native FSDP integration which we support (doc is [here](https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel)).<|||||>Thanks for the response.
I only mean to say that it would be great to add some kind of warning, saying that sharded_ddp is not supported - in the code and/or the above tutorial.
Otherwise, the code pretends to work but does nothing, which can confuse users.
p.s. as of right now, FSDP does not support --fp16 and does not work with stable pytorch, so we're trying deepspeed as a workaround.
<|||||>Which tutorial advertises fairscale sharded DDP? We haven't really recommended it (we do recommend DeepSpeed which is more mature at this stage).<|||||>> (doc is [here](https://huggingface.co/docs/transformers/main_classes/trainer#pytorch-fully-sharded-data-parallel)).
In the link you suggested, if you scroll up, it says:
> Trainer Integrations
> The [Trainer](https://huggingface.co/docs/transformers/v4.19.2/en/main_classes/trainer#transformers.Trainer) has been extended to support libraries that may dramatically improve your training time and fit much bigger models.
> Currently it __supports__ third party solutions, [DeepSpeed](https://github.com/microsoft/DeepSpeed) __and [FairScale](https://github.com/facebookresearch/fairscale/)__, which implement parts of the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054).
And then, just before the FSDP section,
> FairScale
> By integrating [FairScale](https://github.com/facebookresearch/fairscale/) the __[Trainer](https://huggingface.co/docs/transformers/v4.19.2/en/main_classes/trainer#transformers.Trainer) provides support__ for the following features from [the ZeRO paper](https://arxiv.org/abs/1910.02054): [list of features]
One may misinterpret this to mean that Trainer supports sharded_ddp in the same way it supports FSDP - or at least, that was how i misinterpreted it.
That said, i fully understand that this issue is low priority.<|||||>Thanks a lot for the pointers! I'll have a look at that doc and update it :-) |
transformers | 17,598 | closed | [WIP] Fix `assert` statements found in modeling files | # What does this PR do?
Fixing unnecessary `assert` statements under modeling code. This was motivated by the fact that contributing to `transformers` often requires a lot of copying from existing code, and while `assert` statements are [undesirable in new PRs](https://github.com/huggingface/transformers/pull/16402#discussion_r881715669), a lot of existing code includes `assert` statements.
I was just going to fix it for `Speech2Text` as referenced above, but found many instances of it throughout the codebase.
It's actually not entirely clear to me where `assert` is not desirable. For utility functions like `convert_*_utils.py` it seems forgivable (used by advanced developers anyway), but correct me if I'm wrong (otherwise there are hundreds of instances to fix).
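For reference, the kind of rewrite being applied here, shown as a small self-contained sketch (the shapes and message are placeholders, not a specific diff hunk from this PR):
```python
import torch

def check_attention_mask(attention_mask: torch.Tensor, expected_size: tuple) -> None:
    # Previously this would have been written as
    # `assert attention_mask.size() == expected_size, "Attention mask has the wrong size"`,
    # which is skipped entirely under `python -O` and raises a bare AssertionError otherwise.
    if attention_mask.size() != expected_size:
        raise ValueError(
            f"Attention mask should be of size {expected_size}, but is {tuple(attention_mask.size())}"
        )

check_attention_mask(torch.ones(2, 1, 4, 4), (2, 1, 4, 4))  # passes silently
```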
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-08-2022 04:07:44 | 06-08-2022 04:07:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17598). All of your documentation changes will be reflected on that endpoint.<|||||>Can you solve the merge conflicts? This should re-trigger the CI and we will see if the errors persist (there were some outage on the Hub this weekend).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,597 | closed | Translation/preprocessing: update | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-08-2022 03:19:55 | 06-08-2022 03:19:55 | |
transformers | 17,596 | closed | Pyramid Vision Transformer | ### Model description
I would like to add the[ Pyramid Vision Transformer model](https://arxiv.org/abs/2102.12122).
#### Paper Abstract
Pyramid Vision Transformer~(PVT), has several merits compared to prior arts. (1) Different from ViT that typically has low-resolution outputs and high computational and memory cost, PVT can be not only trained on dense partitions of the image to achieve high output resolution, which is important for dense predictions but also using a progressive shrinking pyramid to reduce computations of large feature maps. (2) PVT inherits the advantages from both CNN and Transformer, making it a unified backbone in various vision tasks without convolutions by simply replacing CNN backbones. (3) We validate PVT by conducting extensive experiments, showing that it boosts the performance of many downstream tasks
### Open source status
- [X] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Model Implementation: https://github.com/whai362/PVT
Pretrained model weights for semantic segmentation: https://github.com/whai362/PVT/tree/v2/segmentation (based on ADE20K) | 06-08-2022 02:26:58 | 06-08-2022 02:26:58 | Hey @danielhoshizaki, Can I join you?<|||||>@danielhoshizaki , I would love to contribute for this new model.<|||||>Thanks for offering to help and sorry about the late response.
I'm going to have a go at this model and if I need any help I will let you know.<|||||>Sure @danielhoshizaki <|||||>anyone working on this? |
transformers | 17,595 | closed | BigBird tokenizer - special tokens not being masked during MLM | ### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.27
- Python version: 3.9.12
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ydshieh
The BigBird tokenizer doesn't seem to have the special token mask operation working correctly. Only the [CLS] and [SEP] tokens are being treated as special tokens by the get_special_tokens_mask function
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. `tokenizer = AutoTokenizer.from_pretrained('google/bigbird-roberta-base')`
2. Print the special token ids: `tokenizer.all_special_ids` - This prints out: `[1, 2, 100, 66, 0, 65, 67]`
3. Get the special tokens mask: `tokenizer.get_special_tokens_mask([1, 2, 100, 66, 0, 65, 67], already_has_special_tokens=True)` - There seems to be a bug in this step. The returned mask: `[0, 0, 0, 1, 0, 1, 0]`
### Expected behavior
```shell
At step 3 the returned special tokens mask should have the value 1 at every position,
but it seems to have the value 1 only for the ids 65 & 66.
I followed similar steps for the Roberta base tokenizer and it worked as expected.
Is there something I'm doing wrong?
```
| 06-07-2022 21:01:50 | 06-07-2022 21:01:50 | Thank you reporting this, @prajwal967
@SaulLu @Narsil
I tried with the following example (adapted from @prajwal967 ), and it looks like this issue happens for `BigBirdTokenizerFast`, but not `BigBirdTokenizer`.
Could you take a look on this issue, please? Are you aware of this issue (potentially for other tokenizers)?
```python
from transformers import AutoTokenizer, BigBirdTokenizer, BigBirdTokenizerFast
print("tokenizer from auto")
auto_tokenizer = AutoTokenizer.from_pretrained('google/bigbird-roberta-base')
print(f"special token ids: {auto_tokenizer.all_special_ids}")
mask = auto_tokenizer.get_special_tokens_mask([1, 2, 100, 66, 0, 65, 67], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
print("bigbird_tokenizer_slow")
bigbird_tokenizer_slow = BigBirdTokenizer.from_pretrained('google/bigbird-roberta-base')
print(f"special token ids: {bigbird_tokenizer_slow.all_special_ids}")
mask = bigbird_tokenizer_slow.get_special_tokens_mask([1, 2, 100, 66, 0, 65, 67], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
print("bigbird_tokenizer_fast")
bigbird_tokenizer_fast = BigBirdTokenizerFast.from_pretrained('google/bigbird-roberta-base')
print(f"special token ids: {bigbird_tokenizer_fast.all_special_ids}")
mask = bigbird_tokenizer_fast.get_special_tokens_mask([1, 2, 100, 66, 0, 65, 67], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
```<|||||>Good for `RobertaTokenizerFast`
```python
from transformers import AutoTokenizer, RobertaTokenizer, RobertaTokenizerFast
print("tokenizer from auto")
auto_tokenizer = AutoTokenizer.from_pretrained('roberta-base')
print(f"special token ids: {auto_tokenizer.all_special_ids}")
mask = auto_tokenizer.get_special_tokens_mask([0, 2, 3, 1, 50264], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
print("bigbird_tokenizer_slow")
bigbird_tokenizer_slow = RobertaTokenizer.from_pretrained('roberta-base')
print(f"special token ids: {bigbird_tokenizer_slow.all_special_ids}")
mask = bigbird_tokenizer_slow.get_special_tokens_mask([0, 2, 3, 1, 50264], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
print("bigbird_tokenizer_fast")
bigbird_tokenizer_fast = RobertaTokenizerFast.from_pretrained('roberta-base')
print(f"special token ids: {bigbird_tokenizer_fast.all_special_ids}")
mask = bigbird_tokenizer_fast.get_special_tokens_mask([0, 2, 3, 1, 50264], already_has_special_tokens=True)
print(f"special_tokens_mask: {mask}")
print("=" * 40)
```<|||||>I wasn't aware of this one! Thanks for pinging me.
However, I am not sure what would be the right behaviour here. To get a signal on how he would like to better standardize these behaviours would it be possible to share with us for which use case you use `get_special_tokens_mask` @prajwal967 ?<|||||>The use case I was looking at was to train a masked language model. There seem to be two approaches to ensure special tokens are **not masked**
Approach 1:
- Set the return_special_tokens_mask = True in the tokenization step: https://github.com/huggingface/transformers/blob/66f893320c6a6935668d7de8bff26bd5a35ae042/examples/pytorch/language-modeling/run_mlm.py#L433
- The data collator checks if we have the tensor mask in the input - and it ensures the special tokens are not masked: https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/data/data_collator.py#L763
Approach 2:
- Set return_special_tokens_mask = False in the tokenization step. I would assume the only benefit of doing this would be that you save some memory, at the cost of the next step being slower.
- The data collator now creates the special tokens mask on the fly and it ensures the special tokens are not masked: https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/data/data_collator.py#L757-L761
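To make Approach 1 above concrete, here is a minimal sketch (the checkpoint is the one from this issue; the sentences and masking probability are arbitrary):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

# Approach 1: ask the tokenizer for the mask up front.
encodings = tokenizer(
    ["a first sentence", "a second, longer sentence"],
    return_special_tokens_mask=True,
)
features = [{k: v[i] for k, v in encodings.items()} for i in range(2)]

# The collator reuses `special_tokens_mask` instead of recomputing it,
# so special tokens are never selected for masking.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator(features)
```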
I haven't found any reason to favor *Approach 2* over *Approach 1*. I just happened to come across this issue when I was playing around with the code.
Although, there was a time when I was creating a custom tokenizer and approach 1 wasn't returning the correct mask. It wouldn't return the mask correctly for special tokens within the sequence.
- For example: Input: [CLS, id1, id2, [SEP], id3, SEP], mask: [1, 0, 0, 0, 0, 1]. The value would be 0 for the SEP token within the sequence.
- But for some reason *Approach 2* worked and would give the mask: [1, 0, 0, 1, 0, 1].
- This could just be me not creating the custom tokenizer correctly, I haven't looked further into this!
I haven't come across any use case where I know for certain that *Approach 2* would definitely be better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,594 | closed | Summarization output different when using pipelines and the on-website inference engine. | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
```
### Who can help?
@Narsil
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See the colab notebook
https://colab.research.google.com/drive/1ALpWzqA0KMJ6K2SUhg8K-NAFhpFpoBm8?usp=sharing
The summary generated is completely different (and lower quality) from the one when I try to do a single inference on the website with the associated model at https://huggingface.co/sshleifer/distilbart-cnn-12-6. (With the text to be summarize at this [link](https://huggingface.co/sshleifer/distilbart-cnn-12-6?text=Andy+Murray+came+close+to+giving+himself+some+extra+preparation+time+for+his+wedding+next+week+before+ensuring+that+he+still+has+unfinished+tennis+business+to+attend+to.%0A%0AThe+world+No+4+is+into+the+semi-finals+of+the+Miami+Open%2C+but+not+before+getting+a+scare+from+21+year-old+Austrian+Dominic+Thiem%2C+who+pushed+him+to+4-4+in+the+second+set+before+going+down+3-6+6-4%2C+6-1+in+an+hour+and+three+quarters.+Murray+was+awaiting+the+winner+from+the+last+eight+match+between+Tomas+Berdych+and+Argentina%27s+Juan+Monaco.%0A%0APrior+to+this+tournament+Thiem+lost+in+the+second+round+of+a+Challenger+event+to+soon-to-be+new+Brit+Aljaz+Bedene.%0A%0AAnd+Murray+has+a+fairly+simple+message+for+any+of+his+fellow+British+tennis+players+who+might+be+agitated+about+his+imminent+arrival+into+the+home+ranks%3A+don%27t+complain.%0A%0AInstead+the+British+No+1+believes+his+colleagues+should+use+the+assimilation+of+the+world+number+83%2C+originally+from+Slovenia%2C+as+motivation+to+better+themselves.%0A%0AAt+present+any+grumbles+are+happening+in+private%2C+and+Bedene%27s+present+ineligibility+for+the+Davis+Cup+team+has+made+it+less+of+an+issue%2C+although+that+could+change+if+his+appeal+to+play+is+allowed+by+the+International+Tennis+Federation.%0A%0AMurray+thinks+anyone+questioning+the+move%2C+now+it+has+become+official%2C+would+be+better+working+on+getting+their+ranking+closer+to+his.%0A%0A%27If+he+was+500+in+the+world+they+wouldn%27t+be+that+fussed+about+it+but+obviously+he+threatens+their+position+a+bit%2C%27+said+the+27+year-old+Scot.+%27+and+he%27s+obviously+the+British+number+two%2C+comfortably.%0A%0A%27So+they+can+complain+but+the+best+thing+to+do+is+use+it+in+the+right+way+and+accept+it+for+what+it+is%2C+and+try+to+use+it+as+motivation+whether+they+agree+with+it+or+not.+He%27s+British+now+so+they%27ve+just+got+to+deal+with+it.%0A%0A%27I+would+hope+that+all+the+guys+who+are+below+him+now+like+James+%28Ward%29+%2C+Kyle+%28Edmund%29+%2C+Liam+%28Broady%29+they+will+use+it+as+motivation.+If+he+becomes+eligible+for+Davis+Cup+then+those+guys+are+going+to+have+to+prove+themselves.%0A%0A%27It+can+only+be+seen+as+a+positive+for+those+guys+using+it+to+try+to+get+better.+He%27s+a+good+player+but+so+are+James+and+Kyle+and+Liam+has+improved.+Aljaz+is+there%2C+he%27s+on+the+tour+every+week%2C+the+other+guys+aren%27t+quite+there+yet.%27%0A%0AFor+the+first+time+Murray%2C+who+has+an+encyclopaedic+knowledge+of+the+top+100%2C+gave+his+opinion+of+Bedene%3A+%27He%27s+a+good+player+with+a+very+good+serve.+He%27s+a+legitimate+top+100+player%2C+when+he+plays+Challengers+he%27s+there+or+thereabouts%2C+when+he+plays+on+the+main+tour+he+wins+matches%2C+it%27s+not+like+he+turns+up+and+always+loses+in+the+first+round.%0A%0A%27He+had+a+bad+injury+last+year+%28wrist%29+but+has+recovered+well.+I+would+imagine+he+would+keep+moving+up+the+rankings+although+I+don%27t+know+exactly+how+high+he+can+go.+I%27ve+practised+with+him+a+couple+of+times%2C+I+haven%27t+seen+him+play+loads%2C+but+when+you+serve+as+well+as+he+does+it+helps.+I+would+imagine+he%27+s+going+to+be+comfortably+in+the+top+70+or+80+in+the+world+for+a+while.%27%0A%0AIt+is+understood+the+Lawn+Tennis+Association+will+give+background+support+to+his+ca
se+regarding+the+Davis+Cup+but+have+made+it+clear+that+the+onus+is+on+him+to+lead+the+way.+An+official+statement+said%3A+%27To+have+another+player+in+the+men%27s+top+100+is+clearly+a+positive+thing+for+British+tennis+and+so+we+very+much+welcome+Aljaz%27s+change+in+citizenship.%27%0A%0AThe+last+comparable+switch+came+twenty+years+ago+when+Greg+Rusedski+arrived+from+Canada.+It+was+by+no+means+universally+popular+but%2C+like+Bedene%2C+he+pledged+that+he+was+in+for+the+long+haul+and%2C+in+fairness+to+him%2C+he+proved+true+to+his+word.%0A%0AMurray+had+to+put+such+matters+aside+as+he+tackled+the+unusually+talented+Thiem%2C+a+delight+to+watch.+Coached+by+Boris+Becker%27s+veteran+mentor+Gunter+Bresnik%2C+he+slightly+resembles+Andy+Roddick+and+hits+with+similar+power+but+more+elegance.+His+single+handed+backhand+is+a+thing+of+rare+beauty.%0A%0AHowever%2C+he+has+had+a+mediocre+season+coming+into+this+event+and+there+was+little+to+forewarn+of+his+glorious+shotmaking+that+seemed+to+catch+Murray+unawares+early+on.%0A%0AThe+world+No+4+looked+to+have+worked+him+out+in+the+second%2C+but+then+suffered+one+of+his+periopdic+mental+lapses+and+let+him+back+in+from+4-1+before+closing+it+out+with+a+break.+After+breaking+him+for+3-1+in+the+decider+the+Austrian+whirlwind+burnt+itself+out.%0A%0A%27He%27s+a+strong+guy+who+hits+the+ball+hard+and+it+became+a+very+physical+match%2C%27+said+Murray.))
### Expected behavior
```shell
I would expect the summary generated from the pipeline to be identical to the one generated from the single inference on the website seen at the link above ```
| 06-07-2022 20:36:26 | 06-07-2022 20:36:26 | Hi @anchit-sadana ,
I couldn't load your colab (invalid credentials)
Summarization is not deterministic by default, so results are not always identical.
If you want identical results you can use `do_sample=False` parameter.
```python
pipe = pipeline(...)
out = pipe(text, do_sample=False)
```
or using the `{"parameters": {"do_sample": False}}` key within the API https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task
You can also play with `min_length` and `max_length` to tweak the results of the summarization.
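For example (same `pipe` and `text` as above; the length values are arbitrary):
```python
out = pipe(text, do_sample=False, min_length=30, max_length=130)
```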
Does that answer your question ?
Cheers,
Nicolas
<|||||>Hi Nicolas,
Thanks for the response! I only posted this because the summary from the pipeline object in Colab and the one from the inference widget on the model card were each consistent across runs, yet consistently different from each other. Passing `do_sample=False` also does not seem to change the summary. Here's the fixed link to the Colab notebook - https://colab.research.google.com/drive/1ALpWzqA0KMJ6K2SUhg8K-NAFhpFpoBm8?usp=sharing. My apologies for messing up the link earlier.
Cheers,
Anchit<|||||>Hi @anchit-sadana ,
I think the issue comes from the fact that your text is adding extra newlines between lines everywhere, so the model is not seeing the same text so summary is degraded.
(Probably didn't see a lot of double newlines during training and it's throwing the model a little off ?)
<|||||>Ah that makes sense. Thanks for the response! Marking the issue as closed |
transformers | 17,593 | closed | TF: Merge PT and TF behavior for Bart when no decoder_input_ids are passed | # What does this PR do?
The TF and PT main models have different behavior when no `decoder_input_ids` are passed. Because of that, the same model with the same inputs has different outputs on the two platforms, in this specific input case.
Looking at the git blame, it seems like [this](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1202) `if` branch in PT was added after the TF model was added, and we possibly forgot to port it back.
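For context, the fallback in question builds `decoder_input_ids` from `input_ids` with `shift_tokens_right`; a rough, self-contained sketch of what that helper does (check `modeling_bart.py` for the exact implementation):
```python
import torch

def shift_tokens_right_sketch(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    # Prepend the decoder start token, drop the last token, and replace any -100 labels with the pad token.
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted

print(shift_tokens_right_sketch(torch.tensor([[10, 11, 12, 2]]), pad_token_id=1, decoder_start_token_id=2))
# tensor([[ 2, 10, 11, 12]])
```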
Notes:
1. All slow tests for Bart were run locally;
2. This was caught with the [updated](https://github.com/huggingface/transformers/pull/17588) `pt-to-tf` CLI, which added much stricter tests. | 06-07-2022 17:57:51 | 06-07-2022 17:57:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,592 | closed | Fix tokenizer type annotation | It should probably accept either `PreTrainedTokenizer` or `PreTrainedTokenizerFast`.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-07-2022 16:46:47 | 06-07-2022 16:46:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17592). All of your documentation changes will be reflected on that endpoint.<|||||>Could we put it in brackets, so that the import of the class isn't necessary? IDEs will still get it right, and as we want users to be able to leverage `transformers` without `tokenizers` installed on setups that don't support rust, it would be the least breaking thing to do.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,591 | closed | Use shape_list to safely get shapes for Swin | # What does this PR do?
Refactors some logic in the Swin model to enable `model.fit` calls with `tf.data.BatchDataset` datasets, i.e. datasets returned from `to_tf_dataset` called with the `batch_size` argument set, a common dataset type in TF workflows.
**Changes**
* `.shape` calls are replaced with `shape_list`, which handles dynamic shapes and batched data cleanly.
* Test added which would have caught this
* Adapt `reshape` logic in Deberta model to allow it to be run in graph mode
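To illustrate the idea behind the first bullet, here is a minimal, self-contained sketch of combining static and dynamic shapes (an illustration of the approach, not the library's `shape_list` implementation):
```python
import tensorflow as tf

def safe_shape(t):
    # Keep known static dims, fall back to dynamic ones (e.g. a None batch dim under model.fit/predict).
    static = t.shape.as_list()
    dynamic = tf.shape(t)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]

@tf.function(input_signature=[tf.TensorSpec([None, 96, 56, 56], tf.float32)])
def flatten_patches(embeddings):
    # `embeddings.shape[0]` would be None here and break tf.reshape; the dynamic lookup does not.
    shape = safe_shape(embeddings)
    batch_size, channels = shape[0], shape[1]
    return tf.reshape(embeddings, (batch_size, channels, -1))

print(flatten_patches(tf.zeros((2, 96, 56, 56))).shape)  # (2, 96, 3136)
```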
On the current main, the following error would occur (snippet shown):
```
File "/Users/aroberts/hf_code/transformers/src/transformers/models/swin/modeling_tf_swin.py", line 296, in call *
embeddings, output_dimensions = self.patch_embeddings(pixel_values, training=training)
File "/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
TypeError: Exception encountered when calling layer "patch_embeddings" (type TFSwinPatchEmbeddings).
in user code:
File "/Users/aroberts/hf_code/transformers/src/transformers/models/swin/modeling_tf_swin.py", line 363, in call *
embeddings = tf.reshape(embeddings, (embeddings.shape[0], embeddings.shape[1], -1))
TypeError: Failed to convert elements of (None, 96, -1) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.
```
if running:
```
import tensorflow as tf
from datasets import load_dataset, load_metric
from transformers import AdamWeightDecay, AutoFeatureExtractor, DefaultDataCollator, TFAutoModelForImageClassification
model_checkpoint = "microsoft/swin-tiny-patch4-window7-224" # pre-trained model from which to fine-tune
dataset = load_dataset("imagefolder", data_files="https://madm.dfki.de/files/sentinel/EuroSAT.zip")
metric = load_metric("accuracy")
labels = dataset["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = i
id2label[i] = label
feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)
def train_transforms(image):
# Placeholder to demo transform pipeline
image = tf.keras.utils.img_to_array(image)
image = tf.image.resize(
image,
size=(feature_extractor.size, feature_extractor.size),
method=tf.image.ResizeMethod.BILINEAR
)
image = tf.transpose(image, (2, 0, 1))
return image
def preprocess_train(example_batch):
example_batch["pixel_values"] = [
train_transforms(image.convert("RGB")) for image in example_batch["image"]
]
example_batch.pop('image') # Is this OK?
return example_batch
splits = dataset["train"].train_test_split(test_size=0.1)
train_ds = splits['train']
train_ds.set_transform(preprocess_train)
data_collator = DefaultDataCollator(return_tensors="tf")
train_set = train_ds.to_tf_dataset(
columns=["image", "label"],
shuffle=True,
batch_size=16,
collate_fn=data_collator
)
optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
model = TFAutoModelForImageClassification.from_pretrained(
model_checkpoint,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True,
)
model.compile(optimizer=optimizer)
model.fit(train_set, epochs=2)
```
With this refactor the training script will run and the tensorflow model will start training.
Note: the model version on `main` will predict with a batch from the dataset using the `__call__` method e.g.
```
batch = next(iter(train_ds))
model(batch)
```
However, it fails with the same error if called through the keras API
```
batch = next(iter(train_ds))
model.predict(batch)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 06-07-2022 16:43:35 | 06-07-2022 16:43:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(Also, I re-read the line I was complaining about and it's actually fine, just a little unclear, so I deleted that comment) |
transformers | 17,590 | closed | TrainingArguments with pytorch on Mac: AttributeError: module 'torch.distributed' has no attribute 'is_initialized' | ### System Info
```shell
macOS 12.4 running on Apple silicon
python 3.9.0 h4b4120c_5_cpython conda-forge
transformers 4.18.0 py39hca03da5_0
pytorch 1.10.2 cpu_py39h23cb94c_0
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to run a minimal example based on [this documentation](https://huggingface.co/docs/transformers/training#finetune-with-trainer): finetuning for text classification.
```
from datasets import load_metric, load_dataset
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
return tokenizer(examples['text'], padding='max_length', truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
training_args = TrainingArguments(output_dir="out")
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
I am seeing the following output:
```
Traceback (most recent call last):
File "/Users/evan/Library/Application Support/JetBrains/PyCharmCE2022.1/scratches/scratch_9.py", line 18, in <module>
training_args = TrainingArguments(output_dir="out")
File "<string>", line 91, in __init__
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/training_args.py", line 865, in __post_init__
and (self.device.type != "cuda")
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 781, in wrapper
return func(*args, **kwargs)
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/training_args.py", line 1099, in device
return self._setup_devices
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/utils/generic.py", line 48, in __get__
cached = self.fget(obj)
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 781, in wrapper
return func(*args, **kwargs)
File "/Users/evan/miniforge3/envs/docbot-server/lib/python3.9/site-packages/transformers/training_args.py", line 1024, in _setup_devices
if torch.distributed.is_initialized() and self.local_rank == -1:
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
```
It looks like torch doesn't expose the `is_initialized` API unless distributed training is supported. Should the `TrainingArguments` class first check `torch.distributed.is_available()` before trying to call that?
Thanks
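A minimal sketch of the kind of guard being suggested (this is not the actual Transformers code, just an illustration):
```python
import torch


def distributed_is_initialized() -> bool:
    # torch.distributed.is_initialized() is only defined when the distributed
    # package is available (it isn't on some macOS/CPU builds), so check first.
    return torch.distributed.is_available() and torch.distributed.is_initialized()


local_rank = -1
if distributed_is_initialized() and local_rank == -1:
    print("A process group is already initialized, but local_rank == -1")
```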
### Expected behavior
Fine-tuning on the Yelp dataset using PyTorch.
| 06-07-2022 16:30:36 | 06-07-2022 16:30:36 | This has been fixed already, you should try again on the latest version.<|||||>Thank you, upgrading to 4.19.2 worked.<|||||>Great to hear! |
transformers | 17,589 | closed | Update MT5Config (`is_gated_act`) | # What does this PR do?
Update MT5Config (`is_gated_act`). Fix #17578. (Just copy-paste from `T5Config`) | 06-07-2022 15:58:46 | 06-07-2022 15:58:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten You forgot to edit the title during merge and it became `fix` in the commit page. No worries, just be a bit slower next time 🙏 Thank you :-)
transformers | 17,588 | closed | CLI: add stricter automatic checks to `pt-to-tf` | # What does this PR do?
Last week I introduced the `pt-to-tf` CLI (https://github.com/huggingface/transformers/pull/17497), enabling automatic weight conversion followed by PR opening.
This PR makes four changes related to that CLI:
1. Uses the appropriate model class to load the model, to ensure the head's weights also get converted;
2. Adds much stricter checks -- ALL model outputs (with `output_hidden_states=True`) are verified (a sketch of this comparison follows the list);
3. Adds a flag to create new TF weights, even if they already exist;
4. Updates the docker file for the scheduled tests to install `git lfs` -- I did it for the circleci workflows in the original PR, but forgot to do it for the scheduled tests.
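A sketch of the kind of output comparison involved; the variable names and the tolerance are assumptions here, not the CLI's actual code:
```python
import numpy as np


def max_abs_difference(pt_outputs: dict, tf_outputs: dict) -> float:
    """Both dicts map output names (e.g. logits) to arrays; nested outputs such as
    hidden_states would be flattened into individual entries first."""
    differences = [
        np.max(np.abs(np.asarray(pt_value) - np.asarray(tf_outputs[name])))
        for name, pt_value in pt_outputs.items()
    ]
    return float(max(differences))


# The conversion would be rejected if, e.g., max_abs_difference(pt, tf) > 5e-5.
```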
🚨 This also means I will double-check previously open Hub PRs (about 10), to confirm that the model head is present in the TF weights (I suspect it isn't in some cases 😢 ) and that the outputs pass the stricter tests.
For context, if the conversion fails because of a difference in the model outputs, we get a message like this one:
<img width="1013" alt="Screenshot 2022-06-07 at 17 00 12" src="https://user-images.githubusercontent.com/12240844/172427441-e1fd63e4-7887-47bf-8827-9209bc1df78a.png"> | 06-07-2022 15:55:39 | 06-07-2022 15:55:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM, just a few nits if they make sense.
It's indeed very important regarding the `head`, great catch, @gante !
|
transformers | 17,587 | closed | [CLIP] Add padding embeddings | # What does this PR do?
This PR adds optional per-token padding embeddings (not the usual kind of padding; these are closer to position embeddings in implementation).
These are needed to convert GLIDE text encoders: https://github.com/openai/glide-text2im/blob/18bf97a07446874693263715af6a73a09cbe81e2/glide_text2im/text2im_model.py#L66
| 06-07-2022 15:08:15 | 06-07-2022 15:08:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Nevermind, had to rewrite the whole attention layer for GLIDE anyway, so I'll just add a new model |
transformers | 17,586 | closed | Explicit versions in docker files | # What does this PR do?
@stas00 Here is the thread to discuss the version things in docker files.
- Objective:
- We have more 3rd party libraries that depend on PyTorch version
- Some of them need explicit versions, instead of using something like `$PYTORCH_VER`. For example, we have [1.11.200](https://www.intel.com/content/dam/develop/external/us/en/documents/ipex/whl-stable.html) in `intel_extension_for_pytorch-1.11.200+cpu-cp38-cp38-linux_x86_64.whl`
  - @stas00 suggested specifying the versions in the docker file(s), so we have better control.
- Implementation
- The current attempt is not very good once we have more and more docker files, but so far it's OK.
- If this is fine, I can **apply the same change to other docker files**. | 06-07-2022 12:58:38 | 06-07-2022 12:58:38 | @stas00
Do you think we can keep `pip install torch` (latest release), and for 3rd party libraries, we use explicit versions?
(of course, we still need to update them when they have new releases)
Or you really prefer to set all of them (including `torch`) explicitly in docker file, and we update it when new release(s) are available?
WDYT, @LysandreJik <|||||>The reason I proposed to write out the explicit version is because the 3rd parties will fail if they are not adjusted (and of course provided the new builds) - so why not change everything in sync after validating the 3rd parties provide the new builds and not switch to the very latest until this happens.
So this is another good reason actually, if the 3rd party hasn't made a new build yet, after a new pytorch release, we shouldn't update pytorch.
Of course, this requires us to keep up to date with pytorch releases and act on it in a timely manner - e.g. for the last 3 pt cycles I asked torch-scatter owner to make a new release before updating our CI workflow files. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>For the record:
Currently only `docker/transformers-all-latest-gpu/Dockerfile` is changed.
Since we don't install `intel_extension_for_pytorch` in other docker files (so far), and `torch-scatter` is fine with the `torch` version, I just keep them as they are for now. |
transformers | 17,585 | closed | Add ONNX support for ResNet | # What does this PR do?
This PR adds ONNX support for ResNet models. This will enable to export a ResNet to the ONNX format.
It deals with #16308 and continue the work done in #16948 (I didn't see this PR when I opened this one).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-07-2022 12:08:00 | 06-07-2022 12:08:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ah sorry I didn't see this PR. I will refer to it in the description of this one.
Thanks for the review and for catching what I forgot, @lewtun; that's very helpful because this is the first model I'm adding to our base of ONNX models :)
I'll be off tomorrow but I'll do all this on Thursday. |
transformers | 17,584 | closed | ONNX support for gpt-neox | ### Feature request
Is there any possibility that we can convert gpt-neox to onnx using transformer library?
### Motivation
onnx model will be faster.
### Your contribution
- | 06-07-2022 12:05:49 | 06-07-2022 12:05:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @hannan72 we have a guide on adding support to export new architectures to ONNX: https://huggingface.co/docs/transformers/serialization#contributing-a-new-configuration-to-transformers
Would you like to have a go at implementing the ONNX config for this model? It should be similar to what has already been done for other GPT models like `gpt-neo`<|||||>Hi @lewtun
It is totally welcome if I can convert GPT like models to onnx. I did the conversion for gpt-j-6b. The problem was that the inference time improvement was only 5% in comparison to PyTorch on my GPU. Did you have any benchmark for the conversion of GPT models to onnx?<|||||>Hi @hannan72
I'd be glad to convert GPT-like models to ONNX. I did the conversion for gpt-j-6b. The problem was that the inference time improvement was only 5% in comparison to PyTorch on my GPU. Did you have any benchmark for the conversion of GPT models to onnx?<|||||>Hi @hannan72
> I'd be glad to convert GPT-like models to ONNX. I did the conversion for gpt-j-6b.
> The problem was that the inference time improvement was only 5% in comparison to PyTorch on my GPU. Did you have any benchmark for the conversion of GPT models to onnx?
Were you running on ONNXRuntime? We've found that to really speed up generations with GPT-like models, one needs to include ONNXRuntime's [IO bindings](https://onnxruntime.ai/docs/api/python/api_summary.html#iobinding) and [ORTValue](https://onnxruntime.ai/docs/api/python/api_summary.html#ortvalue). I don't have a benchmark for this currently, but it's something we're looking into in `optimum`
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,583 | closed | Probable bug with `torch_dtype="auto"` | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.17.8-arch1-1-x86_64-with-glibc2.35
- Python version: 3.9.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.1
- JaxLib version: 0.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Reproducing script:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("Jeevesh8/lecun_feather_berts-96", torch_dtype="auto")
```
This seems to be a bug, since the model fails to load with the following error:
```
ValueError: Can't instantiate BertModel model under dtype=torch.int64 since it is not a floating point dtype
```
And the `config.json` actually defines a different `torch_dtype`.
https://huggingface.co/Jeevesh8/lecun_feather_berts-96
https://huggingface.co/Jeevesh8/lecun_feather_berts-96/blob/main/config.json#L29
### Expected behavior
I would expect the model to load regardless, or at the very least not crash while attempting to load with `torch.int64`.
| 06-07-2022 09:03:11 | 06-07-2022 09:03:11 | cc @stas00 @sgugger <|||||>`torch_dtype="auto"` does not look at the config, but looks at the first parameter (which is also what the config saves automatically FYI). In both cases we should probably ignore the first parameter if it has an int dtype and loop until we get to a float or to the end (for quantized models which have an int dtype everywhere).<|||||>This looks like a wrong assumption on my part.
@sgugger, would you like me to work on the fix if you haven't done this already?<|||||>I haven't started working on it, wanted to make sure my suggested solution seemed good to you. If you want to go ahead and implement it, by all means! Otherwise I'll do it sometime later today :-)<|||||>I will work on it. Will diagnose this model and see if we have a tiny model that has a similar arrangement to test with. |
transformers | 17,582 | open | Loading repository after rename does not work (with old name) | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
cc @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Context
```
FROM LOCATION (old location): https://huggingface.co/KB/bert-base-swedish-cased
TO LOCATION (new location): https://huggingface.co/KBLab/bert-base-swedish-cased
```
The following breaks
```py
from transformers import AutoModel
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased")
```
Error
```
OSError: Can't load config for 'KB/bert-base-swedish-cased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'KB/bert-base-swedish-cased' is the correct path to a directory containing a config.json file
```
What works here?
:white_check_mark: going to original path redirects to new path
:white_check_mark: doing git clone
:white_check_mark: using huggingface_hub model_info("KB/bert-base-swedish-cased")
:x: using transformers AutoModel.from_pretrained("KB/bert-base-swedish-cased")
### Expected behavior
Loading the model with the old name should still work, so that users loading a transferred model don't have an impacted experience.
| 06-07-2022 08:09:01 | 06-07-2022 08:09:01 | We have since the issue was opened here duplicated some of the most used repositories and re-uploaded them. The above code example will therefore work now (because we duplicated the repo).
Here is a fresh example of a transferred repository we haven't manually duplicated to replicate the redirection error:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased-alpha") # New location is KBLab/bert-base-swedish-cased-alpha
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this issue needs to be addressed still, so I will provide some extra context that may not have been apparent in the original post.
### Context
We previously hosted our models on a user account named [KB](https://huggingface.co/KB). After organization accounts became available, we created an organization called [KBLab](https://huggingface.co/KBLab) where we uploaded our newer models. In the beginning of June, we decided to transfer all our models from the user [KB](https://huggingface.co/KB) to the organization [KBLab](https://huggingface.co/KBLab). Under the "Settings" tab for individual models, it is stated that models will be **automatically redirected** to the new location after the renaming or transferring of the models:

### Expected behavior
We interpret "automatic redirection" being an assurance of the the following functionality working after a rename/transfer:
1. Old URLs to models will redirect to new URL in browser.
2. Git operations on old URLs will redirect to new URL.
3. Using Huggingface Hub functions with old URL will redirect properly.
4. Using `transformers` package with old URL and `AutoModel.from_pretrained("OLD_URL")` will redirect to new URL and successfully load all transferred models.
In reality cases 1, 2 and 3 work well and provide automatic redirection :heavy_check_mark:, whereas case number 4 does *not* provide functioning automatic redirection :x: .
### Reproduction and errors
Cases 1, 2, and 3 from above work fine when transferring a model repository to a new organization. However, case number 4, using the `transformers` package with `Automodel.from_pretrained()`, does *not* redirect the models properly.
To reproduce this error:
* Create a model in any repository you have read/write access to.
* Transfer that same model to another user/organization you also have read/write access to.
* Try loading the model with the old URL using `Automodel.from_pretrained()`.
We can provide an example from our own transfer of models below.
We transferred the repository previously located at [https://huggingface.co/KB/bert-base-swedish-cased-alpha](https://huggingface.co/KB/bert-base-swedish-cased-alpha) to the organization KBLab at [https://huggingface.co/KBLab/bert-base-swedish-cased-alpha](https://huggingface.co/KBLab/bert-base-swedish-cased-alpha). Notice how the first link (KB/bert-base-swedish-cased-alpha) automatically redirects to the second link in the browser (KBLab/bert-base-swedish-cased-alpha).
However, in python, when trying to load the model with the `transformers` package, the old URL does not work:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased-alpha")
```
Leads to the error
```
OSError: Can't load config for 'KB/bert-base-swedish-cased-alpha'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'KB/bert-base-swedish-cased-alpha' is the correct path to a directory containing a config.json file
```
Would appreciate if @LysandreJik or someone else could take a look at this. I imagine other users and organizations will expect the transferring and renaming of models to provide automatic redirection when loading the models in the `transformers` package.<|||||>Also cc @sgugger for visibility<|||||>This is because Transformers does not rely on `huggingface_hub` for loading models since its loading mechanism predates that library. We will switch soon to using `huggingface_hub` behind the scenes, which should solve this issue.<|||||>See issue linked above, Transformers now uses `hf_hub_download` behind the scenes, but that function does not handle redirections (yet). The bug will be fully solved once this issue is resolved. |
transformers | 17,581 | closed | could not run distribution cpu training on two CPU sockets using miprun | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.8.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

It does not train one model across the two CPU sockets; instead, each mpirun instance trains its own model, with the output below:
[INFO|trainer.py:1472] 2022-06-07 00:51:33,164 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1473] 2022-06-07 00:51:33,164 >> Total train batch size (w. parallel, distributed & accumulation) = 12
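The reproduction screenshot above is not legible in this dump; based on the related PR #17570, the launch looks roughly like the following (the trailing `run_qa.py` arguments are elided there as well):
```bash
export MASTER_ADDR=127.0.0.1
export MASTER_PORT=29500

# two ranks, intended to be one per CPU socket
mpirun -n 2 python3 run_qa.py --no_cuda --xpu_backend ccl ...
```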
### Expected behavior
```shell
It should output the following log if two CPU sockets are used for distributed training, and the training throughput should roughly double compared with training on a single CPU socket.
[INFO|trainer.py:1472] 2022-06-07 00:51:33,164 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1473] 2022-06-07 00:51:33,164 >> Total train batch size (w. parallel, distributed
& accumulation) = 24
```
| 06-07-2022 08:00:12 | 06-07-2022 08:00:12 | |
transformers | 17,580 | closed | resize_token_embeddings in BartForConditionalGeneration doesn't change lm_head size | ### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.6
- Huggingface_hub version: 0.1.2
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
```
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Call resize_token_embeddings with different vocab size
### Expected behavior
When I call resize_token_embeddings, it resizes the embedding but not lm_head, even though the embedding and lm_head are tied.
Is this intended, or is it a bug?
Or am I using the resize_token_embeddings method incorrectly?
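A quick way to check this in practice (a hedged sketch; the checkpoint is just an example):
```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
old_vocab = model.get_input_embeddings().weight.shape[0]

model.resize_token_embeddings(old_vocab + 10)

# Because the input embeddings and lm_head are tied, both should report the new vocab size.
print(model.get_input_embeddings().weight.shape)
print(model.lm_head.weight.shape)
```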
| 06-07-2022 05:11:49 | 06-07-2022 05:11:49 | I am wrong about this issue.
It works fine. |
transformers | 17,579 | closed | Enabling vilt and flava auto feature extractor. | ### System Info
```shell
File ~/anaconda3/envs/vqa/lib/python3.8/site-packages/transformers/models/auto/feature_extraction_auto.py:163, in AutoFeatureExtractor.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
161 if not is_feature_extraction_file and (has_local_config or not is_directory):
162 if not isinstance(config, PretrainedConfig):
--> 163 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
165 kwargs["_from_auto"] = True
166 config_dict, _ = FeatureExtractionMixin.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)
File ~/anaconda3/envs/vqa/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py:602, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
600 return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
601 elif "model_type" in config_dict:
--> 602 config_class = CONFIG_MAPPING[config_dict["model_type"]]
603 return config_class.from_dict(config_dict, **kwargs)
604 else:
605 # Fallback: use pattern matching on the string.
File ~/anaconda3/envs/vqa/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py:319, in _LazyConfigMapping.__getitem__(self, key)
317 return self._extra_content[key]
318 if key not in self._mapping:
--> 319 raise KeyError(key)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Trying to use the ViLT and FLAVA feature extractors via `AutoFeatureExtractor` results in the error shown above.
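A minimal reproduction along those lines (the checkpoint names are public ViLT/FLAVA checkpoints and are given here only as examples):
```python
from transformers import AutoFeatureExtractor

# Both calls hit the KeyError above on versions where the auto mappings are missing.
vilt_extractor = AutoFeatureExtractor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
flava_extractor = AutoFeatureExtractor.from_pretrained("facebook/flava-full")
```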
### Expected behavior
I get an error when using the Auto classes with ViLT and FLAVA.
| 06-07-2022 02:32:11 | 06-07-2022 02:32:11 | ViLT is going to be added in #17286 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This seems fixed for FLAVA now as well :) |
transformers | 17,578 | closed | MT5Model(MT5Config()) fails with AttributeError: 'MT5Config' object has no attribute 'is_gated_act' | ### System Info
```shell
current main branch, 4.20.0.dev0
```
### Who can help?
@patrickvonplaten @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import *
model = MT5ForConditionalGeneration(MT5Config())
```
### Expected behavior
no error | 06-07-2022 00:41:16 | 06-07-2022 00:41:16 | I think it was introduced in https://github.com/huggingface/transformers/pull/17420<|||||>cc @patil-suraj as well.
It would also be interesting to understand how it was not caught by any of the tests. |
transformers | 17,577 | closed | Fix circular import in onnx.utils | # What does this PR do?
This PR fixes the circular import in `onnx.utils` that one can experiment right now with:
```py
from transformers.onnx import OnnxConfig
```
It comes from the fact that `onnx.utils` imports `AutoProcessor`, `AutoFeatureExtractor` and `AutoTokenizer`, which then requires all models to be initialized, which in turn requires all configs to be initialized and relies on `OnnxConfig`, hence a circular import.
Supercedes #17576 | 06-06-2022 20:27:55 | 06-06-2022 20:27:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,576 | closed | Remove circular imports in layoutlm/__init__.py | # What does this PR do?
There are unused circular imports in `layoutlm/__init__.py`. The import `from transformers.onnx import OnnxConfig` currently leads to the following error:
```bash
Traceback (most recent call last):
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 878, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/models/layoutlm/__init__.py", line 28, in <module>
from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/models/layoutlm/configuration_layoutlm.py", line 22, in <module>
from ...onnx import OnnxConfig, PatchingSpec
ImportError: cannot import name 'OnnxConfig' from 'transformers.onnx' (/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/onnx/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 878, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/onnx/config.py", line 25, in <module>
from .utils import ParameterFormat, compute_effective_axis_dimension, compute_serialized_parameters_size
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/onnx/utils.py", line 19, in <module>
from .. import AutoFeatureExtractor, AutoProcessor, AutoTokenizer
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 868, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 880, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.auto because of the following error (look up to see its traceback):
cannot import name 'OnnxConfig' from 'transformers.onnx' (/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/onnx/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "test_resnet_onnx.py", line 1, in <module>
from transformers.onnx import OnnxConfig
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 868, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 880, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.onnx.config because of the following error (look up to see its traceback):
Failed to import transformers.models.auto because of the following error (look up to see its traceback):
cannot import name 'OnnxConfig' from 'transformers.onnx' (/home/regis/HuggingFace/dev/venv/lib/python3.8/site-packages/transformers/onnx/__init__.py)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-06-2022 19:51:14 | 06-06-2022 19:51:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>That's not an acceptable solution to the problem. Those are not unused imports, but what make the objects accessible in the main init of Transformers.
Will have a deeper look into this.<|||||>No problem, sounds good! |
transformers | 17,575 | closed | Enable auto device map for GPT-NeoX | I'm guessing that the intention was to have the `_no_split_modules` class attribute for `GPTNeoXPreTrainedModel` set to `["GPTNeoXLayer"]`, akin to how it's set to `["GPTJBlock"]` for `GPTJPreTrainedModel`.
If this is incorrect, please feel free to just close the PR.
Thanks!
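For reference, the change amounts to a single class attribute; a simplified sketch of the class (most attributes omitted, so this is not the full library code):
```python
from transformers import GPTNeoXConfig, PreTrainedModel


class GPTNeoXPreTrainedModel(PreTrainedModel):
    config_class = GPTNeoXConfig
    base_model_prefix = "gpt_neox"
    # Tells the accelerate device-map logic never to split a transformer block across
    # devices, mirroring `_no_split_modules = ["GPTJBlock"]` in GPTJPreTrainedModel.
    _no_split_modules = ["GPTNeoXLayer"]
```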
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-06-2022 15:46:04 | 06-06-2022 15:46:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,574 | open | [generate] return past_key_values | # What does this PR do?
Allows returning `past_key_values` from `generate` when `use_cache=True`.
Like other returned values, `past_key_values` are also returned as `Tuple`, one element per generated token.
Fixes #17016 | 06-06-2022 14:54:08 | 06-06-2022 14:54:08 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17574). All of your documentation changes will be reflected on that endpoint.<|||||>We'll just need to fix the failing tests now :-) Think you'll have to overwrite this "checking" function in the respective individual test files<|||||>Hey there, sorry to nag, but any chance of moving this along? Anything I can do to help?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>(@patrickvonplaten @patil-suraj should I take over this PR? :) )<|||||>If ok for you @gante this would be amazing!<|||||>Hi, Thank you all for working on this feature! Is this going to be merged into the main branch soon?<|||||>@shunzh I haven't started working on it and it's hard to give estimates -- hopefully less than a month :)<|||||>Was this closed because it's now possible to retrieve `past_key_values` or was there another reason?<|||||>@gilljon it is not closed :)<|||||>@gante I'm sorry for the confusion! Any idea when it will be merged?<|||||>hi @gante . Any idea when this will be merged? Interested in using it and building something on top of it. I'll happy to put on the finishing touches if needed too!<|||||>Hey! Just a friendly reminder. Any chance to get it merged soon? |
transformers | 17,573 | closed | Auto-build Docker images before on-merge if setup.py was changed | # What does this PR do?
This PR introduces a new workflow that will check if the `setup.py` was modified during a pull request merge. If so, it will trigger the docker images to be rebuilt before running the `on-merge` tests.
It also changes `self-push` to run on a `workflow_run` trigger, specifically from the new `check-dependencies` job. This new job also keeps the same "on-merge" check the previous job had when it comes to determining if and when it should run.
Finally, `build-docker-images` can now also be run via a `workflow_call`, so that `check-dependencies` can trigger it.
This is the same as done in Accelerate recently, with the only difference being additional file filters https://github.com/huggingface/accelerate/pull/424
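A rough sketch of what such a chained workflow can look like; this is illustrative only and not the exact YAML in this PR (job names, action versions, and the grep check are assumptions):
```yaml
name: Check for setup.py changes

on:
  push:
    branches: [main]

jobs:
  check-for-setup:
    runs-on: ubuntu-latest
    outputs:
      changed: ${{ steps.diff.outputs.changed }}
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - id: diff
        run: |
          # was setup.py touched by this push?
          if git diff --name-only HEAD~1 HEAD | grep -q "^setup.py$"; then
            echo "::set-output name=changed::1"
          fi

  build-docker-containers:
    needs: check-for-setup
    if: needs.check-for-setup.outputs.changed == '1'
    # reusable workflow invoked through workflow_call
    uses: ./.github/workflows/build-docker-images.yml
```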
## Why is this needed?
A frustration I've noticed over the last few months in this repo is that the main test runners can fail for an entire day when a new dependency is introduced. This PR solves that problem, since the issue stems from the docker images being used.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh @LysandreJik | 06-06-2022 12:03:25 | 06-06-2022 12:03:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Do not merge for now. The push CI in transformers is somehow tricky, after the recent change in #17369.
I will review tomorrow, but I think some changes have to be made.<|||||>@muellerzr Thank you for the PR.
The most direct approach would be
Integrate the check `check-for-setup` and `build-docker-containers` in
https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push-caller.yml
(before the job `run_push_ci`)
Otherwise (if you really want to keep the logic you have), the following block
```
workflow_run:
workflows: ["Check for dependency modification"]
branches: ["main"]
types: [completed]
```
should go in `self-push-caller.yml`.
The main point is to run the actual push CI tests on another branch (`push-ci`), otherwise there will be more than 256 job results shown in the commit history page.
I would prefer the most direct approach (the first one).<|||||>@ydshieh I *believe* I addressed what you wanted, let me know if otherwise 😄 |
transformers | 17,572 | closed | DETR: Add comment regarding backbones | Adds an informative message in DETR to mention that the backbone gets initialized for its architecture and not necessarily for its weights. | 06-06-2022 11:55:51 | 06-06-2022 11:55:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17572). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,571 | closed | Add batchnorm running calc weight to porting script | # What does this PR do?
Adds two weight name mappings for PyTorch -> TensorFlow that are necessary for cross-loading weights of batch-norm layers trained with `track_running_stats=True`.
This was necessary for cross-loading weights for the ResNet and RegNet ports. #17536 , #17554
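For context, the buffers in question are PyTorch's batch-norm running statistics, which Keras stores under different names; the mapping sketched below is only illustrative of what the porting script needs to translate (the actual variable names in the script differ):
```python
# PyTorch BatchNorm buffer name  ->  Keras BatchNormalization weight name
PT_TO_TF_BATCHNORM_RUNNING_STATS = {
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}
```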
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
NB. I couldn't find tests corresponding to the current weight loading logic.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | 06-06-2022 11:47:41 | 06-06-2022 11:47:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closing as the change was added in this PR :https://github.com/huggingface/transformers/pull/17271
@sgugger I'll open a follow-up PR to address your comments. <|||||>Actually, I misread the file where it's modified 😅
It's fine for the conversion to do it like this; it's the code in modeling_utils doing it that I don't want (like [here](https://github.com/huggingface/transformers/blob/9fc34235fa3329c918d5ba67ce09a0cc8f399c59/src/transformers/modeling_utils.py#L432)). Sorry I didn't pay close enough attention.
transformers | 17,570 | closed | enable cpu distribution training using mpirun | *command like
* mpirun -n 2 python3 run_qa.py --no_cuda --xpu_backend ccl xxxx
*MASTER_ADDR and MASTER_PORT should be set as env
*export MASTER_ADDR=127.0.0.1
*export MASTER_PORT=29500
Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/17581
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-06-2022 09:01:00 | 06-06-2022 09:01:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger please have review or invite someone else.<|||||>I open a issue https://github.com/huggingface/transformers/issues/17581 for it.<|||||>@yao-matrix |
transformers | 17,569 | closed | Microsoft's SpeechT5 for Spoken Language Processing (ASR, TTS, ST...) | ### Model description
Motivated by the success of [T5](https://arxiv.org/abs/1910.10683) for pre-training NLP models, [SpeechT5](https://arxiv.org/abs/2110.07205) explores a cross-modal framework for learning joint contextual representations for speech and text data via a shared encoder-decoder structure.
The model architecture consists of an encoder-decoder transformer module and six modal-specific pre/post nets. The pre-nets convert the input speech $\mathbf{X}^{s} \in \mathcal{D}^{s}$ or text $\mathbf{X}^{t} \in \mathcal{D}^{t}$ to a unified space of hidden representations. The hidden representations are then fed into the shared encoder-decoder to perform the sequence-to-sequence conversion. Finally, the post-nets generate the output in the speech or text modality, based on the decoder output.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
The paper was accepted at the ACL 2022 main conference: https://arxiv.org/abs/2110.07205
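To make the pre-net / shared encoder-decoder / post-net flow concrete, here is a very rough PyTorch sketch of the composition described above (this is not the official implementation; the module interfaces and call conventions are placeholders):
```python
import torch.nn as nn

class SpeechT5Sketch(nn.Module):
    """Toy illustration: modality-specific pre/post-nets around a shared encoder-decoder."""

    def __init__(self, speech_prenet, text_prenet, encoder, decoder, speech_postnet, text_postnet):
        super().__init__()
        self.prenets = nn.ModuleDict({"speech": speech_prenet, "text": text_prenet})
        self.postnets = nn.ModuleDict({"speech": speech_postnet, "text": text_postnet})
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, inputs, targets, src_modality, tgt_modality):
        # Pre-nets map speech or text into the shared hidden space
        encoder_states = self.encoder(self.prenets[src_modality](inputs))
        # The shared decoder attends over the encoder states
        decoder_states = self.decoder(self.prenets[tgt_modality](targets), encoder_states)
        # Post-nets map back to the output modality (e.g. log Mel-filterbank or token logits)
        return self.postnets[tgt_modality](decoder_states)
```
In `transformers`, the shared encoder/decoder and the speech-encoder pre-net can largely be reused from the existing modules listed below.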
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and checkpoints: https://github.com/microsoft/SpeechT5/tree/main/SpeechT5
SpeechT5 combines a transformer encoder-decoder backbone with speech/text specific pre/post-nets. Thus, many of the modules required for the SpeechT5 model are already partially or fully implemented in Transformers.
Model architecture:
1. Transformer encoder block: Wav2Vec2/Hubert-encoder transformer block (Wav2Vec2EncoderLayer) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L725
2. Transformer decoder block: BERT-decoder transformer block (BertLMHeadModel) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/bert/modeling_bert.py#L1157
3. Speech-encoder pre-net: convolutional feature extractor of Wav2Vec2 (Wav2Vec2FeatureEncoder) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L408
4. Speech-decoder pre-net: three fully connected layers with the ReLU activation, fed with the log Mel-filterbank of the speech signal (new, original code: https://github.com/microsoft/SpeechT5/blob/main/SpeechT5/speecht5/models/modules/speech_decoder_prenet.py)
5. Speech-decoder post-net: linear layer fed with the decoder output to predict the log Mel-filterbank $\mathbf{Y}_f = (\mathbf{y}_{f,1}, \dots, \mathbf{y}_{f,N})$, followed by five 1-dimensional convolutional layers to refine the predicted $\mathbf{Y}_f$ (new, original code: https://github.com/microsoft/SpeechT5/blob/main/SpeechT5/speecht5/models/modules/speech_decoder_postnet.py)
6. Text pre/post-net: shared embeddings. See https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/bart/modeling_bart.py#L1151 | 06-06-2022 08:39:55 | 06-06-2022 08:39:55 | Piecing these modules together should be a fun challenge! Happy to help with integration here :-)<|||||>Hi @sanchit-gandhi, if compute resources would not be a problem here ( just have a 4GB GPU on my laptop or Google Colab with me here :upside_down_face: ) then I want to help in adding this model as well<|||||>Hey @ayushtues! Lovely to meet you :-) Compute shouldn't be a problem! We can begin with 'dummy' versions of the model in order to verify that our implementations work, then scale up to the full size ones and share resources.
As a starting point, we can start the PR with the 'add-new-model-like' command:
https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the Wav2Vec2 model:
https://github.com/huggingface/transformers/tree/main/src/transformers/models/wav2vec2
This will take care of the Wav2Vec2 Feature Encoder (speech pre-net) and Wav2Vec2 Encoder (speech encoder block) for us! Having automatically created all the files, the next step will be to verify that the feature extractor and speech encoder block match Microsoft's implementation.
Feel free to ping me on Slack (sanchit[at]huggingface.co) if you have any questions!<|||||>Hi @sanchit-gandhi, thanks for the reply, great I'll start reading the paper in detail, checking out its repo, and also Huggingface's contribution documentation, and then start working on the PR.
I am not added on slack, and can't find a public invite link, can you send me an invite if possible?<|||||>Awesome! Let me know how you get on :-)
If you drop me an email at sanchit[at]huggingface.co I can send you over an invite!<|||||>Hey there @sanchit-gandhi @ayushtues, I’m also interested in contributing, if any assistance is still required. Please do let me know!<|||||>Hey @mingboiz! Great to have you on-board! I'll invite you over to the Slack channel too!<|||||>Hey there @sanchit-gandhi @ayushtues, I’m also interested in contributing. @sanchit-gandhi I have sent you a DM on slack. <|||||>Hey @mingboiz and @anuragshas, if you guys could drop me an email at sanchit[at]huggingface.co I can send you over email invites to the Slack channel!<|||||>Great to see so much interest in adding this model! 🔥 Should be a fun collaborative project!<|||||>Hi @sanchit-gandhi, I have send you the rquest to send me the invite link. I would also love to contribute here.
<|||||>I'm also open to helping out with this.<|||||>I created a branch with a model made from wav2vec2.
https://github.com/huggingface/transformers/pull/17982<|||||>Is this still being developed on? I'd be happy to contribute here as well. @sanchit-gandhi <|||||>Hey @kasmith11, sorry for the late response! If you drop me an email at sanchit [at] huggingface.co I can add you to the Slack channel for the model addition. There's plenty of opportunity for contribution!<|||||>Closed via https://github.com/huggingface/transformers/pull/18922 |
transformers | 17,568 | closed | LayoutLMv3 not downloading via official code samples | ### System Info
```shell
Used the official code sample for the microsoft/layoutlmv3-base model, but it is not working
Link to code: https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv3#transformers.LayoutLMv3Model
Error:
KeyError Traceback (most recent call last)
<ipython-input-17-2e4d79cdf031> in <module>()
3 import torch
4
----> 5 processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
6 model = AutoModelForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base")
7
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/processing_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
198 )
199 if tokenizer_config_file is not None:
--> 200 with open(tokenizer_config_file, encoding="utf-8") as reader:
201 config_dict = json.load(reader)
202
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
698 " set the option `trust_remote_code=True` to remove this error."
699 )
--> 700 if kwargs.get("revision", None) is None:
701 logger.warning(
702 "Explicitly passing a `revision` is encouraged when loading a configuration with custom code to "
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
407 A dictionary that lazily load its values when they are requested.
408 """
--> 409
410 def __init__(self, mapping):
411 self._mapping = mapping
KeyError: 'layoutlmv3'
```
### Who can help?
@Lysan
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/11aIci0c_UuId5BK2-U6QwZgEKE3ran9I?usp=sharing
### Expected behavior
```shell
The model downloads and executes with the same behaviour as described on HF
```
| 06-06-2022 07:58:34 | 06-06-2022 07:58:34 | I believe LayoutLM-v3 is not in an official release yet, so you'll need to install from source in order to use it for now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
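For anyone hitting the same `KeyError`, a minimal sanity check after installing from source (e.g. `pip install git+https://github.com/huggingface/transformers.git`) might look like the sketch below; the model id is the one from the issue, and at the time of this issue a dev version of 4.20 is expected:
```python
import transformers
from transformers import AutoProcessor, AutoModelForSequenceClassification

# A dev/main version that already ships LayoutLMv3 is expected here.
print(transformers.__version__)

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base")
```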
transformers | 17,567 | closed | Permission denied | ### System Info
```shell
transformers==4.19.2
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("hkunlp/from_all_T5_large_prefix_spider_with_cell_value2")
model = AutoModel.from_pretrained("hkunlp/from_all_T5_large_prefix_spider_with_cell_value2")
### Expected behavior
```shell
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1850, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 128, in __init__
super().__init__(
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 107, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: Permission denied (os error 13)
```
| 06-06-2022 07:25:39 | 06-06-2022 07:25:39 | Hi, can you also open a Discussion on the model repo, i.e. https://huggingface.co/hkunlp/from_all_T5_large_prefix_spider_with_cell_value2 (and link this issue to/from there)?
Thanks 🙏 |
transformers | 17,566 | closed | Trouble parallelizing GPT-NeoX | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik, @stas00
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have been trying to implement the `.parallelize()` method for GPT-NeoX. I am aware that it is soon going to be obsolete, but I've been having trouble getting `accelerate` working [with my custom codebase](https://github.com/bigscience-workshop/lm-evaluation-harness/tree/cjlovering/accelerate-2), and `.parallelize()` does work, so I figured I would give it a shot. My fork can be found [here](https://github.com/StellaAthena/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py) and is based on the implementation for GPT-J.
Unfortunately, it does not seem like I have implemented it correctly. That said, the error message I am getting seems quite strange to me and is not what I would expect to get. I have verified that my code runs correctly for GPT-J-6B, including parallelism.
```
Traceback (most recent call last):
File "/home/mchorse/bigbio/lm-evaluation-harness/test.py", line 4, in <module>
model.parallelize()
File "/home/mchorse/miniconda3/envs/evalharness/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'parallelize'
```
If you want to run my code, you can do so by following the instructions [here](https://github.com/bigscience-workshop/lm-evaluation-harness#overview). However I do not recommend doing so. Instead, the same error can be generated by running the following basic script:
```python
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
model.parallelize()
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
### Expected behavior
I expect my code to work. | 06-06-2022 04:31:55 | 06-06-2022 04:31:55 | Gently pinging @patil-suraj and maybe also @sgugger regarding `accelerate` here - I think `accelerate` should be used here rather than `parallelize` no?
@LysandreJik @sgugger should we maybe fully deprecate `parallelize()` now?<|||||>Yes the `parallelize` API will be fully deprecated soon (like this week or the next) so there is no point adding support to new models.
> I've been having trouble getting accelerate working [with my custom codebase](https://github.com/bigscience-workshop/lm-evaluation-harness/tree/cjlovering/accelerate-2)
Could you tell us more about the problem you are encountering there maybe?<|||||>Hi @sgugger, I want to do inference on the t5-11b model and tried the `parallelize` method. It showed the following error:
```
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 683, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 950, in _test_impl
results = self._run(model, ckpt_path=self.tested_ckpt_path)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run
self._dispatch()
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1271, in _dispatch
self.training_type_plugin.start_evaluating(self)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 178, in start_evaluating
self.spawn(self.new_process, trainer, self.mp_queue, return_result=False)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 201, in spawn
mp.spawn(self._wrapped_function, args=(function, args, kwargs, return_queue), nprocs=self.num_processes)
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 136, in join
signal_name=name
torch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with signal SIGABRT
python-BaseException
wandb: Waiting for W&B process to finish... (success).
wandb:
wandb: Synced lrgenerative_logic_comp1_v7_1.0_new_seed42_trim_filtered_t5_11b_13_06_2022_ddd9ce1c: https://wandb.ai/soumya_research/lr_dataset/runs/32ujsgo3
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20220613_020333-32ujsgo3/logs
[W CudaIPCTypes.cpp:21] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
```
I tried to find some solutions and also set the `num_workers` of the dataset class to 0, but it still doesn't work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ZeyiLiao,
please don't use `parallelize` for the `t5-11b` model; instead, you can load it using `device_map="auto"`, see [here](https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/model#large-model-loading)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
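For reference, a minimal sketch of the `device_map="auto"` loading path suggested above (this requires `accelerate` to be installed; the checkpoint and generation settings are only examples):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    device_map="auto",          # let accelerate place the weights across available devices
    torch_dtype=torch.float16,  # optional, halves the memory footprint
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("GPT-NeoX-20B is", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```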
transformers | 17,565 | closed | Added translation of index.mdx to Portuguese Issue #16824 | # What does this PR do?
Creates folder pt in docs/source for translating documentation to Portuguese
Currently, only the index.mdx file was translated as of this PR.
Fixes issue #16824
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
| 06-06-2022 00:37:38 | 06-06-2022 00:37:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you again @rzimmerdev for the translation of `index.mdx` and for correcting mistakes in `training.mdx`!
@sgugger looks good to me :). If possible, this would be a good addition to the next release. |
transformers | 17,564 | closed | Shard checkpoint for `tf` and `flax` | ### Feature request
The same sharding capabilities as PyTorch should be available to `flax` and `tf`. This is required in order to push the OPT-30B model.
### Motivation
Pushing $>45GB$ models (and having the same behaviour as in PyTorch).
### Your contribution
Could start working on that when I will be back from holidays! | 06-05-2022 23:38:45 | 06-05-2022 23:38:45 | |
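For context, this is the PyTorch-side behaviour the request asks to mirror for TF and Flax (a sketch; the model and shard size are arbitrary):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# On the PyTorch side, anything above `max_shard_size` is split into several
# weight files plus an index json, and `from_pretrained` reassembles them.
model.save_pretrained("./sharded-checkpoint", max_shard_size="200MB")
reloaded = AutoModel.from_pretrained("./sharded-checkpoint")
```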
transformers | 17,563 | closed | Remove RuntimeErrors for NaN-checking in 20B | # What does this PR do?
Fixes # (issue)
https://github.com/huggingface/transformers/issues/17452#issuecomment-1142141196
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-05-2022 22:28:14 | 06-05-2022 22:28:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,562 | open | Add a stop_sequence option to text generation pipeline | ### Feature request
A stop sequence option to allow text generation models to stop generating when a specific token is reached.
### Motivation
When I use GPT-J on a slower machine every extra generated token counts. If I need the model to answer a question for example, the only way I can ensure it isn’t cut off is to set the max length well above what I expect the answer to be. That takes a considerable amount of extra processing power for useless data.
### Your contribution
I am only beginning to understand the core workings of model inference, so I’m not sure what I can do to help. I might be able to gather documentation, or test code. | 06-05-2022 21:54:59 | 06-05-2022 21:54:59 | cc @Narsil <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale.
This is actually a great suggestion I've been meaning to add for quite a while.
If you're willing to do a PR here's the high level vision I have for this:
- Add a new parameter `stop_sequence` (Here's some doc on how to add a pipeline which should cover parameters)
- within `_sanitized_parameters` consume `stop_sequence`, tokenize it. Raise a warning if sequence is multiple tokens long (being able to stop on a multiple tokens sequence is not yet covered in `transformers` and would require even more work, we can start small here). Set within the `forward_parameters["generate_kwargs"]` `eos_token_id` to the new stop sequence first token.
- Add the docstring about this new parameter
- Add some tests ( I can help with that as adding tests should be relatively forward, but as the test are really attempting to cover ALL models and all variants, they can fail in odd ways, or worse, the test could easily miss some configurations and fail to see any regression in the future).
Cheers.
I would really like to add such a parameter myself, but at the moment I don't really have the time to dedicate to this, guidance on a PR is the best I can offer 100% !
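A rough sketch of the parameter handling described above (not a final implementation; the method signature is simplified and `logger` is just Python's standard logging):
```python
import logging

logger = logging.getLogger(__name__)

def _sanitize_parameters(self, stop_sequence=None, **generate_kwargs):
    preprocess_params, forward_params, postprocess_params = {}, {}, {}
    if stop_sequence is not None:
        stop_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
        if len(stop_ids) > 1:
            logger.warning(
                "Stopping on a multi-token sequence is not supported yet; "
                "only the first token of the stop sequence will be used."
            )
        # Generation stops as soon as this token id is produced.
        generate_kwargs["eos_token_id"] = stop_ids[0]
    forward_params["generate_kwargs"] = generate_kwargs
    return preprocess_params, forward_params, postprocess_params
```
End users would then call something like `pipeline("text-generation")(prompt, stop_sequence="\n")`.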
<|||||>Hi @Narsil I'd love to take this on if it's still open.<|||||>I think it's still open. Thanks for taking this !<|||||>Hi @Narsil is this issue still open? Is there anything else I can help with?<|||||>It seems like this is mostly done, except for documentation [here](https://huggingface.co/docs/transformers/main_classes/pipelines?highlight=transformers.TextGenerationPipeline#transformers.TextGenerationPipeline.__call__), which I'm happy to take. @KMFODA is there a reason why that wasn't added? <|||||>Hey @pruksmhc good spot. I missed adding the docs for this, I'll add it as soon as I can. The PR was merged last year though so the functionality should be available in the main branch. Once docs are added I'll post here so we can close this Issue. |
transformers | 17,561 | closed | TokenClassification with DistilBERT does not learn | Hey guys,
I need to classify a sequence of tokens in a given sentence as either 0: **irrelevant** or 1: **relevant**. I tried to follow the Hugging Face tutorial for token classification with DistilBERT/BERT for NER, but the transfer does not seem to result in a model that learns to make predictions. I am not sure where the problem lies; could someone give me a pointer on how to debug the training process with the Trainer?
Data is of the following format:
**tokens**: List(String) ['I', 'am', 'an', 'example', '.']
**labels**: List(Integer) [0, 1, 1, 1, 0]
Here is the Dataset format I use. As the tokens from the dataset were extracted with Bert, I just convert them to their IDs and stitch them together with the special tokens [CLS] and [SEP]. For the special tokens I assign a label of -100 to ignore them during the loss computation.
```
from torch.utils.data import Dataset
class relDataset(Dataset):
def __init__(self, tokens, labels, tokenizer):
self.tokens = tokens
self.labels = labels
self.tokenizer = tokenizer
def __getitem__(self, idx):
encoding = {}
encoding["input_ids"] = [101] + tokenizer.convert_tokens_to_ids(self.tokens[idx]) + [102]
encoding["attention_mask"] = [1]*len(encoding["input_ids"])
encoding["labels"] = [-100] + self.labels[idx] + [-100]
return encoding
def __len__(self):
return len(self.label)
```
```
train = rel_Dataset(tokens = train_df["tokens"].values,
labels = train_df["labels"].values,
tokenizer = tokenizer)
```
```
test = rel_Dataset(tokens = test_df["tokens"].values,
labels = test_df["labels"].values,
tokenizer = tokenizer)
```
Then I set up tokenizer, model, and trainer
```
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
from torch.nn.parallel import DataParallel
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
device = torch.device("cuda")
model.to(device)
training_args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
num_train_epochs=6,
weight_decay=0.01
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_A8,
eval_dataset=eval_A8,
tokenizer=tokenizer,
data_collator=data_collator,
#compute_metrics=compute_metrics
)
```
Then using Trainer to train the model I achieve the following performance
trainer.train()

The Training loss seems to be static, while the predictions on the test data do not seem to make a difference | 06-05-2022 18:13:22 | 06-05-2022 18:13:22 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) as well?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
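One way to see whether the predictions move at all during training is to pass a `compute_metrics` function to the `Trainer` (currently commented out above); a sketch, assuming the `-100` labels on special tokens described in the issue:
```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    mask = labels != -100                      # skip [CLS]/[SEP] and padding positions
    accuracy = (predictions[mask] == labels[mask]).mean()
    return {"token_accuracy": float(accuracy)}
```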
transformers | 17,560 | closed | Fix some typos. | # What does this PR do?
Fix some typos.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-05-2022 09:25:01 | 06-05-2022 09:25:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,559 | closed | Save a PyTorch BERT fine-tuned model with a custom forward function and heads with Hugging Face | ### System Info
I have created my own BertClassifier model, starting from a pretrained model and then adding my own classification head composed of different layers. After training I want to save the model using `model.save_pretrained()`, but when I print it after loading it back with `from_pretrained` I don't see my classifier head.
The model structure is the following. How can I save the whole structure of my model and make it fully accessible with
`AutoModel.from_pretrained('folder_path')`?
Thanks!
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
class BertClassifier(PreTrainedModel):
    """Bert Model for Classification Tasks."""
    config_class = AutoConfig

    def __init__(self, config, freeze_bert=True):  # tuning only the head
        """
        @param bert: a BertModel object
        @param classifier: a torch.nn.Module classifier
        @param freeze_bert (bool): Set `False` to fine-tune the BERT model
        """
        # super(BertClassifier, self).__init__()
        super().__init__(config)
        # Instantiate BERT model
        # Specify hidden size of BERT, hidden size of our classifier, and number of labels
        self.D_in = 1024  # hidden size of Bert
        self.H = 512
        self.D_out = 2
        # Instantiate the classifier head with some one-layer feed-forward classifier
        self.classifier = nn.Sequential(
            nn.Linear(self.D_in, 512),
            nn.Tanh(),
            nn.Linear(512, self.D_out),
            nn.Tanh()
        )

    def forward(self, input_ids, attention_mask):
        # Feed input to BERT
        outputs = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask)
        # Extract the last hidden state of the token `[CLS]` for classification task
        last_hidden_state_cls = outputs[0][:, 0, :]
        # Feed input to classifier to compute logits
        logits = self.classifier(last_hidden_state_cls)
        return logits
```
```
configuration=AutoConfig.from_pretrained('Rostlab/prot_bert_bfd')
model = BertClassifier(config=configuration,freeze_bert=False)
```
after training
```
model.save_pretrained('path')
```
### Expected behavior
```
If I print the model after model = AutoModel.from_pretrained('path'), the last layers are the following, and my 2 linear layers are missing:
(output): BertOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.0, inplace=False)
(adapters): ModuleDict()
(adapter_fusion_layer): ModuleDict()
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
(prefix_tuning): PrefixTuningPool(
(prefix_tunings): ModuleDict()
)
)
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine. | 06-05-2022 08:22:21 | 06-05-2022 08:22:21 | Hi,
I believe that in order to load your model via Transformers' `AutoModel` you need to implement your custom model class in the Transformers repo, and "register" it in the "models/auto" package.
Hope this helps.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
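As a rough illustration of the "register" route mentioned above (names are placeholders, and `BertClassifier` is the class from the snippet in this issue, rewritten to use its own config class):
```python
from transformers import AutoConfig, AutoModel, PretrainedConfig
# from my_module import BertClassifier  # the custom class defined above

class BertClassifierConfig(PretrainedConfig):
    model_type = "bert-classifier"

# Register the custom config and model so the Auto* classes can resolve them.
AutoConfig.register("bert-classifier", BertClassifierConfig)
AutoModel.register(BertClassifierConfig, BertClassifier)

# After saving with model.save_pretrained("path"), AutoModel.from_pretrained("path")
# can then rebuild the full model, including the custom classifier head.
```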
transformers | 17,558 | closed | Spanish Docs - fix gendered sentence | # What does this PR do?
Fixes a gendered sentence in [es/index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/es/index.mdx).
## Notes
FYI @osanseviero, I believe this was the sentence that was still gendered in the Spanish docs :) | 06-04-2022 23:18:14 | 06-04-2022 23:18:14 | _The documentation is not available anymore as the PR was closed or merged._ |