repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 18,960 | closed | [Wav2Vec2] Fix `None` loss in docstring for Wav2Vec2ForPreTraining | # What does this PR do?
- [ ] It fixes the NaN value returned in the contrastive loss of [Wav2Vec2ForPreTraining](https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining).
- [ ] Adding `sampled_negative_indices` as a target allows Wav2Vec2ForPreTraining to calculate the loss (see the sketch below).
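For context, a minimal sketch of how `sampled_negative_indices` can be passed so that the pre-training loss is actually computed (utility names come from `transformers.models.wav2vec2.modeling_wav2vec2`; the dummy input and masking values are only illustrative):
```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

# Dummy 1-second waveform at 16 kHz, just to illustrate the call signature.
input_values = feature_extractor(
    torch.randn(16000).numpy(), sampling_rate=16000, return_tensors="pt"
).input_values

batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()

# Mask some time steps, then sample negatives for the contrastive loss.
mask_time_indices = _compute_mask_indices(shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2)
sampled_negative_indices = _sample_negative_indices(
    features_shape=(batch_size, sequence_length),
    num_negatives=model.config.num_negatives,
    mask_time_indices=mask_time_indices,
)
mask_time_indices = torch.tensor(mask_time_indices, dtype=torch.long)
sampled_negative_indices = torch.tensor(sampled_negative_indices, dtype=torch.long)

outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    sampled_negative_indices=sampled_negative_indices,
)
print(outputs.loss)  # a real contrastive loss instead of None
```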
| 09-09-2022 14:48:24 | 09-09-2022 14:48:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Great! Now that we've established the problem:
> loss is `None` as `sampled_negative_indices` is omitted from the args of the model
and verified the fix:
> forward `sampled_negative_indices` to the model
we can touch-up this PR and get it merged!
We'll need to do two things:
1. Remove the erroneous file [src/transformers/models/test.py](https://github.com/huggingface/transformers/pull/18960/files/e94a4d4e7bb108aeba8daafc3de7b58fc5048838#diff-dc9af87a30bbbc682dfaf00796e992367511d96095a2f0a46a8da06a948a7050)
2. Code quality: you can do this simply by running `make style` from the root of the Transformers repo 🤗
Let me know if you have any questions! Cheers!<|||||>Hello @sanchit-gandhi, thanks for the help. I am still having an issue: even though I made sure I had installed all the necessary packages, I still get this error.
Even though `pip install black[jupyter]` has been run, when I run `make style` I get the same error:
```py
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
would reformat examples/research_projects/lxmert/modeling_frcnn.p
...
```<|||||>Hey @abdouaziz. Thanks for removing the erroneous file :) Don't worry about the `.ipynb` warning as we've not changed any Python notebooks! Looks like something else is up with `make style` though - we have 567 files changed in this PR! There should only be 2 files changed (wav2vec2 and wav2vec2-conformer).
Could you try the following in turn and check whether the number of files changed drops back down to 2 after each step:
1. Rebasing onto main:
```
git fetch upstream
git rebase upstream/main
```
And then running `make style`.
2. Updating HF doc builder https://pypi.org/project/hf-doc-builder/
```
pip install --upgrade hf-doc-builder
```
And then running `make style`.
If that doesn't fix it we can try some other hacks!
<|||||>Hey @abdouaziz - sorry this has been so arduous! Are you still interested in completing this PR? Feel free to open a new one if you wish and we can go from there!<|||||>>
Hello @sanchit-gandhi, yes, I am interested in completing this PR, but I am still having the same issue in the new PR:
https://github.com/huggingface/transformers/pull/19061#issue-1375177421
I am open to any suggestions. |
transformers | 18,959 | closed | [CookieCutter] Clarify questions | # What does this PR do?
I've seen quite a few mistakes where people answer the CookieCutter questions in the wrong way. This is because the questions are sometimes a bit vague, e.g. it's not clear whether one should provide Roberta, RoBERTa or roberta. This was actually not clear to me either.
This PR aims to clarify the questions, making sure contributors understand better what to answer. | 09-09-2022 14:47:24 | 09-09-2022 14:47:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,958 | open | Encoder-decoder model is not working correctly for the latest versions | ### System Info
transformers==4.2.1
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
I'm working with a seq2seq problem, in particular with `EncoderDecoderModel` model. The problem is that I can't have good results with the latest version (**4.21.3**). I also tried with **4.18.0** because of [this](https://github.com/huggingface/blog/issues/292#issuecomment-1122666099) but didn't work either. It is however working when using version **4.2.1**
I have made a [public notebook](https://colab.research.google.com/drive/1obQcmRdX89eWJfk_qUMcxa4pBSPIc53X?usp=sharing) you can run to see the issue. It is an example that trains a model to generate the written digits, given a number.
### Expected behavior
It works nicely with version **4.2.1**, but very badly with the most recent versions.
| 09-09-2022 13:38:12 | 09-09-2022 13:38:12 | Thank you for reporting, @miguelwon . I will take a look.<|||||>This might also be interesting for @ArthurZucker if you're very busy at the moment @ydshieh <|||||>I am having a look RN, will tell you when I know more 👍🏻 <|||||>Hey @miguelwon it seems that you are right about the training not converging at all using current version.
However, since loading a trained model in the new versions does not give bad results, I suspect that the issue comes from either the computation of the loss, or the trainer.
I will have a look in more details as I believe this is a pretty important bug 😄
<|||||>Hi @ArthurZucker, just to know if there any news about this issue. Thanks!<|||||>Hey! Sorry not yet, it's pretty tricky, but I hope I'll resolve it soon! 🤗 <|||||>Hello @miguelwon I discussed with @ArthurZucker internally and decided to take a look on this issue.
In the notebook you provided, inside the function `process_data_to_model_inputs`, you prepared:
- decoder_input_ids
- decoder_attention_mask
- labels
and you left a remark
> because BERT automatically shifts the labels, the labels correspond exactly to `decoder_input_ids`.
In fact, for the encoder-decoder architecture, the loss computation is done in `EncoderDecoderModel.forward` rather than in `decoder.forward` (in your case, the decoder is `BertLMHeadModel`), see [here](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L631). Also, see [this warning message](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L41).
Combined with [the way `decoder_input_ids` is prepared here](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L611), we don't need to prepare `decoder_input_ids` and `decoder_attention_mask` in your method `process_data_to_model_inputs`.
However, you can still provide them, but in this case, the `decoder_input_ids` should be a shift of `labels`, instead of being the same value (which is the case in your notebook).
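For illustration, a minimal sketch of the recommended preprocessing — provide only `labels` and let `EncoderDecoderModel` build the shifted `decoder_input_ids` internally. The function name mirrors the notebook; the column names and lengths are assumptions for the sketch:
```python
def process_data_to_model_inputs(batch, max_input_length=8, max_target_length=16):
    # Encoder side: the numbers as text.
    inputs = tokenizer(batch["number"], padding="max_length", truncation=True, max_length=max_input_length)
    # Decoder side: only `labels` are needed; EncoderDecoderModel shifts them
    # internally to build `decoder_input_ids`.
    targets = tokenizer(batch["written_form"], padding="max_length", truncation=True, max_length=max_target_length)

    batch["input_ids"] = inputs.input_ids
    batch["attention_mask"] = inputs.attention_mask
    # Replace padding token ids by -100 so they are ignored by the loss.
    batch["labels"] = [
        [(token if token != tokenizer.pad_token_id else -100) for token in seq]
        for seq in targets.input_ids
    ]
    return batch
```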
For old versions like `4.2.1`, the decoder's own loss computation code was used, so the notebook works with it. But since v4.12, this is not the recommended way to run encoder-decoder models.
I have updated your notebook (in a copy), you can check [here](https://colab.research.google.com/drive/1WbPtf7OKar7DbTJ-76n67UrLkUnRc_If?usp=sharing), which shows it doesn't generate non-sense results anymore with the above suggested changes.
I hope this answers your question :-)
<|||||>Yes it does! :)
Thanks a lot for your clarification and the updated notebook!
<|||||>Thank you @ydshieh . |
transformers | 18,957 | closed | Wav2Vec2ForPreTraining loss Nan fixed | # What does this PR do?
- [ ] This PR fixes the loss of [Wav2Vec2ForPreTraining](https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining), which returns NaN values.
- [ ] We fix the error by adding `sampled_negative_indices` as a target to calculate the loss.
| 09-09-2022 13:00:05 | 09-09-2022 13:00:05 | |
transformers | 18,956 | closed | Latest Wav2Vec2 pretraining script runs on first GPU only | ### System Info
Dear @patrickvonplaten
I've tried different multi GPU setups (RTX 3090 and A5000) but training always runs only on device 0.
**Tried these commands:**
`accelerate launch --num_processes=2 pretrain.py`
`accelerate launch pretrain.py`
Seems there is some bug in the script because old fairseq training runs on all available devices.
Kindly assist to understand where to dig further.
**Script used for training:**
[https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just followed the instructions on the script page.
### Expected behavior
Run on multiple GPUs | 09-09-2022 12:04:58 | 09-09-2022 12:04:58 | Had to run `accelerate config` in order to set up the devices. |
transformers | 18,955 | closed | update black target version | Considering that setup.py requires Python 3.7 or higher:
https://github.com/huggingface/transformers/blob/22f72185601d5167a747104b4aca102d0e92524c/setup.py#L417
it might make sense to also have black target only Python 3.7. Just a suggestion though.
Unsurprisingly, this PR may trigger quality check errors.
Not sure who to tag, so assuming for general repo things: @sgugger and @LysandreJik
| 09-09-2022 12:01:11 | 09-09-2022 12:01:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks like it wants to change a looooot of files. This usually creates a hell of merge conflicts in PRs so I would really like to avoid doing it too much. We will switch to black 2023 in January, so how about we change the target at that point too? We can change your PR to leave a comment for now so we don't forget.<|||||>> Looks like it wants to change a looooot of files. This usually creates a hell of merge conflicts in PRs so I would really like to avoid doing it too much. We will switch to black 2023 in January, so how about we change the target at that point too? We can change your PR to leave a comment for now so we don't forget.
Sure!
I didn't quite understand what you mean with the last sentence though. Do you mean closing this PR and opening an issue instead? That works, whatever is best.<|||||>Just putting a comment next to the black version pinned in the setup, so that we know to update this at the next version change.<|||||>> Just putting a comment next to the black version pinned in the setup, so that we know to update this at the next version change.
Alright. Done. |
transformers | 18,954 | closed | Update decision transformers to gym 0.26 | ### Feature request
We recently published a [new release of gym](https://github.com/openai/gym/releases/tag/0.26.0), which carries with it a bunch of breaking changes.
However, this is the last of the API changes, and it will be stable going forward. So it would be great to update the decision transformers to be compatible with that.
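For concreteness, a rough sketch of the headline signature changes in gym 0.26 (based on the release notes; treat the details as illustrative):
```python
import gym

env = gym.make("CartPole-v1")

# Old API (pre-0.26): reset() returned only the observation,
# and step() returned a single `done` flag.
# obs = env.reset()
# obs, reward, done, info = env.step(action)

# New API (0.26+): reset() takes the seed and also returns an info dict,
# and `done` is split into `terminated` and `truncated`.
obs, info = env.reset(seed=42)
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated
```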
### Motivation
I'd say there are two main reasons to switch to the new API that goes with 0.26:
- The new API "makes sense"- we're still preparing a proper writeup of the rationale behind each decision, but they were deliberately made to support good research, flexibility and reproducibility.
- It will be supported in the future - we have many exciting features on the horizon (e.g. hardware-accelerated environments), which will be predicated on using the new API
### Your contribution
I'll be happy to help with the whole process, including contributing the PR.
From what I can see, most of the code in transformers is rather self-contained, so it would mainly be the `run_decision_transformer.py` example that needs updating, and then potentially other resources about decision transformers (like the blog), which would be separate PRs naturally
The biggest question would be how you want to handle versioning. My intuition is that it'd be best to update to gym 0.26 together with some "significant" version of transformers, like `4.22 -> 4.23` (or later, depending on how long it takes). | 09-09-2022 11:20:34 | 09-09-2022 11:20:34 | cc @edbeeching <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,953 | closed | create Past CI results as tables for GitHub issue | # What does this PR do?
Update Past CI error statistics report script to produce 2 GitHub issue tables :-) | 09-09-2022 10:18:14 | 09-09-2022 10:18:14 | One table
| no. | error |
|-:|:-|
| 63 | AttributeError: module 'torch.jit._state' has no attribute '_clear_class_state' |
| 38 | RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/cu |
| 3 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for |
| 3 | AssertionError: Couldn't trace module. |
| 3 | RuntimeError: "normal_kernel_cpu" not implemented for 'BFloat16' |
| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. |<|||||>Another one
| model | no. of errors | major error | count |
|-:|-:|-:|-:|
| bloom | 48 | RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAI | 38 |
| data2vec | 9 | AttributeError: module 'torch.jit._state' has no attribute ' | 9 |
| clip | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| blenderbot | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| bart | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| blenderbot_small | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |
| canine | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| bigbird_pegasus | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| convnext | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| beit | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| albert | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| codegen | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| ctrl | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| convbert | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| bert_generation | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |
| cpm | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,952 | closed | Resize position embeddings in PreTrainedModel | ### Feature request
Add a method to resize position embeddings in PreTrainedModel, in the same way as there is `resize_token_embeddings` for word embeddings.
There are several ways to do that:
- retrain everything from scratch
- keep the pretrained embeddings but add new trained from scratch for the new positions (as done in `PreTrainedModel._get_resized_embeddings` if I understand correctly)
- same but initialize new positions by interpolating pretrained ones instead of random init (a sketch of this option follows below)
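A minimal sketch of what the interpolation option could look like — this is an assumption about one possible implementation, not existing `transformers` code:
```python
import torch
import torch.nn as nn

def resize_position_embeddings(old_embeddings: nn.Embedding, new_num_positions: int) -> nn.Embedding:
    old_num_positions, dim = old_embeddings.weight.shape
    new_embeddings = nn.Embedding(new_num_positions, dim)
    # Interpolate the pretrained positions onto the new, longer grid.
    old_weight = old_embeddings.weight.data.T.unsqueeze(0)  # (1, dim, old_len)
    new_weight = nn.functional.interpolate(
        old_weight, size=new_num_positions, mode="linear", align_corners=True
    )  # (1, dim, new_len)
    new_embeddings.weight.data = new_weight.squeeze(0).T.contiguous()
    return new_embeddings
```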
### Motivation
It would be nice to be able to resize position embeddings when the PreTrainedModel has too small `max_position_embeddings`.
I found several related issues:
- https://stackoverflow.com/questions/69820065/how-to-extend-a-pretrained-transformer-model-configured-with-small-max-position
- https://github.com/huggingface/transformers/issues/1978
### Your contribution
Willing to help :)
From what I can tell, most of the job is already done in `PreTrainedModel._get_resized_embeddings` | 09-09-2022 09:00:28 | 09-09-2022 09:00:28 | I have met the same issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am also facing the same issue<|||||>is this issue resolved? Why was it closed? I need to use different size, not, but I can't do it<|||||>It was closed automatically because no one answered after one month :man_shrugging:
|
transformers | 18,951 | closed | Pipeline GPT-NeoX only returns "BB" from any prompt then nothing for subsequent calls of the pipeline | ### System Info
CPU: AMD 4750G (Onboard video disabled)
OS: Ubuntu Server 20.04.5 LTS x86_64
RAM: 64GB
GPUs: 2x Tesla M40 24GB
Driver Version: 510.47.03
CUDA Version: 11.6
Pytorch stable 1.12.1 (Installed via Anaconda)
Both accelerate and transformers installed through pip
accelerate @ git+https://github.com/huggingface/accelerate@98823de572246d68cb31db94b60f7328ae9d551e
transformers @ git+https://github.com/huggingface/transformers@cfd623a859890c6d106610d3c688064eadc7bd61
### Who can help?
@patil-suraj @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi everyone!
I'm trying to run GPT-NeoX 20B with accelerate and `device_map="auto"`; however, I can't seem to get the model to return anything other than "BB" for the first response, and then nothing but an empty string after.
Steps to reproduce the behaviour:
1. Use latest git of `accelerate` and `transformers`
2. Setup a pipeline of `EleutherAI/gpt-neox-20b`
3. Call pipeline with any prompt
<details>
<summary>Code I am using</summary>
<pre>
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch, accelerate
print("Loading generator!")
generator = pipeline('text-generation',
                     model=AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", device_map="auto",
                                                                torch_dtype=torch.float16),
                     tokenizer=AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b"),
                     temperature=0.7, return_full_text=False, max_new_tokens=250)
print("Loaded generator!")
while True:
    prompt = input("Enter prompt: ")
    response = generator(prompt)
    print(response)
</pre>
</details>
<details>
<summary>Example runs</summary>
<pre>
(hf) jai@tesla-server:~$ python gpt-neox-test2.py
Loading generator!
Loaded generator!
Enter prompt: Test!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': 'BB'}]
Enter prompt: Testing again!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
Enter prompt: One more time!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
(hf) jai@tesla-server:~$ python gpt-neox-test2.py
Loading generator!
Loaded generator!
Enter prompt: This is a different prompt.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': 'BB'}]
Enter prompt: yet every time the responses are still the same. :(
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
</pre>
</details>
I'm not sure where this issue belongs as I'm still new to huggingface, but if I make a small change and use a model that can fit into a single card, such as `EleutherAI/gpt-j-6B`, and remove `device_map="auto"`, there is no issue with generation (still with pipelines).
I have also tried using the `GPTNeoXTokenizerFast` class with the same results.
There are no errors (as in Python crashing); it just doesn't generate anything meaningful.
### Expected behavior
The response should be up to the length of `max_new_tokens` (250), or at least more than one token, and relevant to the prompt provided.
`[{'generated_text': 'BB'}]` | 09-09-2022 08:05:22 | 09-09-2022 08:05:22 | I noticed also changing the `eos_token_id` to `187` (`\n`) increases time for a response to about 20 seconds (previously ~5s) and there is an increased load on the cards but the response is the same "BB"<|||||>cc @Narsil <|||||>I unfortunately don't have a machine at hand big enough to run that code.
Does this happen with any other (smaller) model that we can try on ?
If not, the ideal thing would be to check if the problem is the `eos_token_id` being generated or not.
Using `pipeline(..., eos_token_id=None)` should deactivate it, and your generation should now actually generate 250 tokens.
The other options is that it DOES generate tokens, but they are somehow removed by the decoding process. In order to check for that I would add a `print` statement directly into the `postprocess` method of `TextGenerationPipeline` and see what's going on here.
Would that help ?<|||||>Hi, I actually found that this was a hardware issue and forgot to close this issue. Any time I ran something that required both cards to work together I would get an `IO_PAGE_FAULT` error and to fix it I needed to disable IOMMU in my motherboard settings now it works. 😅 |
transformers | 18,950 | closed | BertTokenizer slowly on latest versions | ### System Info
Hi, I tested the execution time of my program when adopting different transformers versions.
In my program, when the transformers version was **4.21.3**, the execution time was **7.12s**, but when I changed the transformers version to **4.3.0**, the execution time was **2.79s**.
I recorded the system info for the different versions.
[4.21.3.txt](https://github.com/huggingface/transformers/files/9532526/4.21.3.txt)
[4.10.0.txt](https://github.com/huggingface/transformers/files/9532528/4.10.0.txt)
[4.3.0.txt](https://github.com/huggingface/transformers/files/9532527/4.3.0.txt)
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1: Code
```python
from transformers import BertTokenizer
import pandas as pd
import time
start = time.time()
train_df = pd.read_csv('train.csv')
train_df.head()
tweets = train_df['text'].values
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
def encode_sentence(s):
    tokens = list(tokenizer.tokenize(s))
    tokens.append('[SEP]')
    return tokenizer.convert_tokens_to_ids(tokens)
max_len = 50
x_train = []
for tweet in tweets:
    vec = encode_sentence(tweet)
    x_train.append(vec[:max_len] + [0] * (max_len - len(vec)))
end = time.time()
print("Time:", end-start)
```
2: Dataset
[train.csv](https://github.com/huggingface/transformers/files/9532537/train.csv)
### Expected behavior
The execution times should be the same across different versions, or the latest version should be better than older versions.
|Version|Execution time|
|--|--|
|4.21.3|7.12s|
|4.10.0|8.85s|
|4.3.0|2.79s| | 09-09-2022 05:28:25 | 09-09-2022 05:28:25 | cc @SaulLu <|||||>Hi @PerformanceDetect ,
Thanks for sharing this benchmark with us! Indeed it would be interesting to find out what addition caused this slowdown, do you think you would have some time to investigate further?
Otherwise, if your usage is speed sensitive, I recommend you to use the fast version of the tokenizer :hugs: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
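Regarding the fast-tokenizer suggestion above, a minimal change to the benchmark script would look like this (a sketch):
```python
from transformers import BertTokenizerFast

# Drop-in replacement for BertTokenizer; the Rust-backed implementation is
# typically much faster for per-sentence tokenization loops like the one above.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
```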
transformers | 18,949 | closed | Fix M-CTC-T chunking | ## Summary
Chunking doesn't currently work for the M-CTC-T architecture, which this PR attempts to fix. The change is fairly minor but I am not an expert on how M-CTC-T works, so definitely open to feedback. In my usage, it works as intended.
Paging @patrickvonplaten since I think you originally implemented the chunking mechanism. :slightly_smiling_face:
I will add some PR comments explaining the changes.
## Testing
I added one test to cover the ASR pipeline with M-CTC-T. I also ran these tests:
```shell
RUN_SLOW=True RUN_PIPELINE_TESTS=True pytest \
tests/models/mctct \
tests/pipelines/test_pipelines_automatic_speech_recognition.py \
tests/pipelines/test_pipelines_common.py
```
Some of these tests fail, but I was able to confirm they're also failing on the main branch. Here are the failures:
```
FAILED tests/pipelines/test_pipelines_common.py::CommonPipelineTest::test_iterator_data_tf - tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer...
FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf - tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calli...
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_chunking_fast_with_lm - AssertionError: 'e<s>eh' != '<s> <s'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_large_model_pt_with_lm - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_with_lm_fast - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_with_local_lm_fast - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB tot...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal_batched - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_robust_batched - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76...
```
Some of these look to just be issues with my machine (memory errors).
| 09-09-2022 05:25:01 | 09-09-2022 05:25:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18949). All of your documentation changes will be reflected on that endpoint.<|||||>In addition to running tests, as a sanity check I ran the code snippet from [this blog post](https://huggingface.co/blog/asr-chunking):
```python
from transformers import pipeline
pipe = pipeline(model="facebook/wav2vec2-base-960h")
pipe("very_long_file.mp3", chunk_length_s=10)
```
and it works as expected. I also modified it for M-CTC-T and it still works:
```python
from transformers import AutomaticSpeechRecognitionPipeline, MCTCTForCTC, MCTCTProcessor
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
pipe = AutomaticSpeechRecognitionPipeline(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
model=model,
framework="pt",
)
print(pipe("very_long_file.mp3", chunk_length_s=10))
```<|||||>Thanks for the PR - this all looks good to me! cc @sanchit-gandhi and @Narsil for a quick second review :-) <|||||>Thanks for the PR - that's a great addition! Generally this all looks more or less all good to me! Just a bit unsure about the changes in the pipeline, but ok for me since all tests pass. @Narsil what do you think?
Also cc @sanchit-gandhi for info<|||||>Thanks for the PR @samwaterbury! LGTM - happy with the changes to computing the inputs : logits ratio :-) <|||||>@Narsil if you have 10 minutes it'd be super nice to get your review here :-) <|||||>Thanks all! 🙂 @Narsil any chance for your 👀<|||||>> FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_large_model_pt_with_lm - AssertionError: 'ctc' != 'ctc_with_lm'
This is because you don't have `kenlm` installed. Since the pipeline makes this a soft error instead of a strong one, you're seeing a different error.
I think we could update the tests to skip the ones requiring kenlm if you don't have it installed (but `kenlm` is a tricky dependency iirc)<|||||>Hi @samwaterbury ,
Sorry for the long delay before review, I'm pretty far behind on some stuff.
The first and biggest issue I have, is that I am not sure that the approach is **sound**.
For the pipeline to work correctly with chunking/striding we **need** a very big property, which is that every data point in audio space corresponds to a single logits.
This enables to do this: https://huggingface.co/blog/asr-chunking
However, I fear that M-CTC models use mel spectrograms, which means the spectrograms themselves distribute single data points into multiple feature points along the sequence length. It probably depends on the parameters of the feature extractor, but it seems like something should be done.
Since there is overlap in the feature space, it becomes very hard to attribute logits (hence letters) to their origin, and to *stitch* back together the original string when running inference on 2 different blocks with striding.
The current PR is actually, I fear, quite wrong, and is only correct by accident: because the `inputs_to_logits_ratio` is set to 1 and the audio is rather small, all the input audio ends up in the first chunk, and since striding is broken, the entire first chunk is used and afterwards none of the chunks are used.
In order to see what I'm talking about, run the example with `batch_size=2` and print the `stride` within `postprocess`. You will see the numbers don't add up.
I created another PR https://github.com/huggingface/transformers/pull/19338 to recreate what you have done here in hopefully a more correct version. Unfortunately, it seems the output is wrong, because the stiching cannot be done properly.
I might have made a mistake in my calculations though so take it with a grain of salt.
But calculating the ratio like I'm doing in the PR is too incorrect, and that's why we rely on `config.inputs_to_logits_ratio` instead for wav2vec2. (The differences should be very minor since it's only about the padding options of convolutions and such, but it can lead to subtle bugs on some splitting).<|||||>Hi @Narsil sorry for the delay and thanks for the long and detailed review and response. What you're saying makes sense and I appreciate the time you took to write it all out!
I'm going to close this PR since it looks like your PR is a better starting point for continued work. (I've also personally moved from M-CTC-T to Whisper since opening this PR 😄) |
transformers | 18,948 | closed | Add support for conditional detr | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added code and documentation for the conditional DETR model. The conditional DETR files were created using the "add-new-model-like" feature of CookieCutter, based on the DETR code. All tests pass. One thing I want to ask: I have converted the pretrained weights, how should I give these weights to you?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/Atten4Vis/ConditionalDETR/issues/21
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-09-2022 04:57:54 | 09-09-2022 04:57:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The CI issue is caused by the fact that you have the following lines in src/transformers/models/auto/feature_extraction_auto.py:
```
("detr", "DetrFeatureExtractor"),
("detr", "DetrFeatureExtractor"),
```
=> this should be updated to:
```
("detr", "DetrFeatureExtractor"),
("conditional_detr", "ConditionalDetrFeatureExtractor"),
```<|||||>Thanks a lot for all your work 🤗 merging! |
transformers | 18,947 | closed | About the evaluation_loop function of trainer | ### Feature request
It is recommended to feed the logits of each batch into the compute_metrics function, and then aggregate the results of each batch.
### Motivation
When I use the evaluate function of the trainer, the evaluation_loop function concatenates all the logits and labels on the validation set and sends them to the compute_metrics function for evaluation. The preds_host and labels_host are of torch.tensor type, so it is easy to exceed the gpu memory.
### Your contribution
pass | 09-09-2022 02:52:00 | 09-09-2022 02:52:00 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>You can set `eval_accumulation_steps=100` (or even a smaller number) in `TrainingArguments` to avoid exceeding GPU memory.
However, I also think it is a bad design that the default `eval_accumulation_steps` is effectively infinity.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
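A one-line sketch of the workaround suggested above:
```python
from transformers import TrainingArguments

# Move accumulated logits/labels from GPU to CPU every 20 evaluation steps
# instead of keeping everything on the GPU until the end of evaluation.
args = TrainingArguments(output_dir="out", eval_accumulation_steps=20)
```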
transformers | 18,946 | closed | adamw_bnb_8bit is actually Adam8bit and doesn't respect TrainingArguments weight_decay | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.60-1-MANJARO-x86_64-with-glibc2.36
- Python version: 3.10.5
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.13.0.dev20220903+cu116 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (gpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Using GPU in script?: Yes (Ampere)
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/trainer.py#L1137-L1142
That snippet sets the optimizer when using adamw_bnb_8bit, but it uses Adam8bit instead of AdamW8bit. This isn't a problem in and of itself as both implementations in bitsandbytes are the same except for the weight_decay parameter (see [AdamW8bit](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/adamw.py#L38-L64), [Adam8bit](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/adam.py#L46-L72)).
However, the trainer doesn't correctly set the weight_decay parameter for Adam8bit leaving it at 0 and the behavior as Adam instead of AdamW.
```
from transformers import TrainingArguments, Trainer, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling
from datasets import Dataset
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
toy_dataset = Dataset.from_dict({'text': ['a', 'b', 'c']})
toy_dataset = toy_dataset.map(lambda examples: tokenizer(examples['text']))
args = TrainingArguments(output_dir='/tmp/outdir', optim='adamw_bnb_8bit', weight_decay=-0.1)
trainer = Trainer(args=args, model=model, train_dataset=toy_dataset,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer))
trainer.train()
```
The code works even with a negative weight_decay, indicating weight_decay isn't being set. I've confirmed this with a print statement inside the optimizer.
### Expected behavior
I expect the weight_decay parameter to be passed to the constructor of `Adam8bit`.
In the case of the reproduction code, an exception `ValueError: Invalid weight_decay value: -0.1` should be raised from the check [here](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/optimizer.py#L325). But `weight_decay` isn't set correctly and remains set at 0.
| 09-09-2022 00:20:01 | 09-09-2022 00:20:01 | Would you like to make a PR to fix this?<|||||>Actually upon further review, the weight decay is being set correctly. I didn't fully understand the purpose of these lines that set the weight decay for only some parameters. This bypasses the non-negative check in the constructor, but does use the correct value when updating the parameters.
https://github.com/huggingface/transformers/blob/e6f221c8d4829c9a3bca699c18a32043ab21f7a0/src/transformers/trainer.py#L1057-L1066 |
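A simplified sketch of that pattern — PyTorch optimizers let each parameter group carry its own `weight_decay`, which overrides the constructor default, so `Adam8bit` never sees the (possibly invalid) top-level value. The grouping criterion below is simplified relative to the actual Trainer code, and `model` is reused from the reproduction snippet above:
```python
import bitsandbytes as bnb

decay, no_decay = [], []
for name, param in model.named_parameters():
    (no_decay if "bias" in name or "LayerNorm" in name else decay).append(param)

optimizer_grouped_parameters = [
    {"params": decay, "weight_decay": 0.01},   # per-group value is the one actually applied
    {"params": no_decay, "weight_decay": 0.0},
]
optimizer = bnb.optim.Adam8bit(optimizer_grouped_parameters, lr=5e-5)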
transformers | 18,945 | closed | Removed issue in wav2vec link | Fix connected to [this issue](https://github.com/huggingface/transformers/issues/18944)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-08-2022 20:34:54 | 09-08-2022 20:34:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,944 | closed | wrong wav2vec link in audio classification blog | [This wonderful blog](https://huggingface.co/docs/transformers/main/en/tasks/audio_classification#preprocess) has an issue with the wav2vec model card link. See below:
> 2. Check the sampling rate of the audio file matches the sampling rate of the audio data a model was pretrained with. You can find this information on the Wav2Vec2 `[model card]((https://huggingface.co/facebook/wav2vec2-base))`.
The right link should be https://huggingface.co/facebook/wav2vec2-base
| 09-08-2022 20:24:51 | 09-08-2022 20:24:51 | Closed by #18945 |
transformers | 18,943 | closed | [WIP] Implement LayoutLMv2ForRelationExtraction (continues #15173) | # What does this PR do?
Continues the good work in #15173 to add LayoutLMv2ForRelationExtraction, as implemented in [Microsoft's UniLM repo](https://github.com/microsoft/unilm/blob/152193af4b295ae39cf0c2a492da3ee5cc5abe29/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py#L895-L937)
Tests are not written yet, and I might need help with that.
@NielsRogge | 09-08-2022 19:36:31 | 09-08-2022 19:36:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18943). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Thanks for your work. However, we don't want to add LayoutLMv2ForRelationExtraction with the design that the Microsoft authors created (as the model returns lists, rather than fixed size tensors). The latter is necessary for the model to work on a distributed environment, and for things like ONNX. See comments at #19120<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,942 | closed | Output scores in TranslationPipeline | ### Feature request
Please consider adding scores to the output dict of TranslationPipeline.
https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/pipelines#transformers.TranslationPipeline.__call__
### Motivation
It'd be nice to see the scores/probabilities of translated sentences rather than just an ordered list of the top k beam search outputs. In many cases the scores can be interpreted as a measure of confidence in the prediction and this is valuable, especially in production.
Pipelines are natively supported by Seldon Deploy so this would also improve that integration.
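Until this lands, a sketch of how per-translation scores can be obtained today by dropping down to `generate` directly (a workaround, not the pipeline API; model checkpoint chosen only as an example):
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer("Machine learning is great.", return_tensors="pt")
out = model.generate(
    **inputs, num_beams=4, num_return_sequences=4,
    return_dict_in_generate=True, output_scores=True,
)
for seq, score in zip(out.sequences, out.sequences_scores):
    # sequences_scores are the final (length-normalized) log-probabilities of each beam.
    print(torch.exp(score).item(), tokenizer.decode(seq, skip_special_tokens=True))
```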
### Your contribution
I don't currently have capacity to submit a PR. | 09-08-2022 18:33:17 | 09-08-2022 18:33:17 | cc @Narsil <|||||>This seems a nice addition !
Same here, I have limited bandwidth at the moment.
Notes for anyone wanting to implement this.
The goal is NOT to support every single feature `generate` supports in terms of return values, only the ones that make sense for users who don't know about ML and are not power users (anyone who knows enough should be able to drop down from pipelines and use lower-level objects to get full control, or override the pipeline by subclassing).
1 score per proposed translation fits that model.
A counter-example: per-token `score`s were requested by users on `bloom` (it's `text-generation`, not `translation`, but since it works similarly under the hood I'm leaving breadcrumbs). This, for instance, is outside the scope of pipelines, since tokens are an ML construct and users without any ML background will have trouble understanding what they are. In addition, returning such things changes the return type, which is never ideal when a function's/class's return type depends on its arguments (and subclassing should be easy enough to add support for anyone that so desires).
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,941 | closed | Update translation requests contact | # What does this PR do?
Updates the contact for translation requests to GuggerSylvain (@sgugger - if you're alright with that!)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Documentation: @sgugger, @osanseviero | 09-08-2022 18:31:20 | 09-08-2022 18:31:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Doesn't look like I have merge privileges on this repo, could you merge it @sgugger?<|||||>Just waiting for the last green tick and will do so! |
transformers | 18,940 | closed | Getting the heat map out of VILT (Figure 4 in the paper) | ### Feature request
I would like to get the heatmap from ViLT (Visualizations of transportation plan of word patch alignment).
### Motivation
It is useful for debugging and might also be useful for other applications. It is shown in Figure 4 in the paper: https://arxiv.org/pdf/2102.03334.pdf
### Your contribution
NA | 09-08-2022 17:36:30 | 09-08-2022 17:36:30 | Hi,
Yes it's totally possible to re-create that. Basically it comes down to translating [this script](https://github.com/dandelin/ViLT/blob/master/demo.py) to a Gradio demo.
I'm marking this as "good first issue" as it seems fairly straightforward.<|||||>Hi, I would like to work on it<|||||>@NielsRogge Is this issue still open ? I would to like to work on this. Can you assign this issue to me ?
<|||||>@NielsRogge , is this issue still open I would like to contribute to this <|||||>Yeah, I guess. I think you guys can go ahead and open PR.<|||||>> @NielsRogge , is this issue still open I would like to contribute to this
Hey Rajath ! Would love to work on this issue with you. Do you mind working with me on this? Kinda new to open source contribution.
<|||||>Is anyone still working on this? I can help<|||||>@NielsRogge is this supposed to be implemented as a method on all vilts or a function that takes a vilt model as input and launches the gradio demo? <|||||>You can just implement a Gradio demo and host it on https://huggingface.co/spaces.<|||||>> You can just implement a Gradio demo and host it on https://huggingface.co/spaces.
I've made one https://huggingface.co/spaces/MikailDuzenli/vilt_demo , I just implemented the demo of the model itself for the moment but I'm trying to add the heatmap (help is welcome).<|||||>> > You can just implement a Gradio demo and host it on https://huggingface.co/spaces.
>
> I've made one https://huggingface.co/spaces/MikailDuzenli/vilt_demo , I just implemented the demo of the model itself for the moment but I'm trying to add the heatmap (help is welcome).
I could help<|||||>Very cool demo @MikailINTech! Awesome work. Final step is indeed visualizing the heatmap<|||||>> Very cool demo @MikailINTech! Awesome work. Final step is indeed visualizing the heatmap
Thank you ! Just finished adding the heatmap. Is there a way I can have this issue mark as resolved ?<|||||>Really cool! Although I'd also include the entire image in the result (not just the heat map), to compare.
Then I'll close this issue!<|||||>Thanks @NielsRogge for the suggestion, now one can see the image and the heatmap. I hope that it's what @Ngheissari was looking for <|||||>Awesome! Closing this issue.
I tweeted about it here :) https://twitter.com/NielsRogge/status/1580246704011370496 |
transformers | 18,939 | closed | RFC: Replace custom TF embeddings by Keras embeddings | # What does this PR do?
This is an RFC with a code example in the PR -- my primary goal is not to get the PR approved, but rather to discuss an improvement to our TF codebase, with an example that passes all tests.
## Context
In our TF implementation of models with embedding layers, we rely on two custom-made classes:
1. [`TFSharedEmbeddings`](https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/modeling_tf_utils.py#L2611) -- a custom embedding layer whose added benefit is the ability to also use it as a dense layer;
2. [`TFWrappedEmbeddings`](https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/modeling_tf_utils.py#L2838) -- used to manipulate the scope of the weights, which would normally depend on the layer where the weights are first used in an operation. Used with tied weight embeddings.
Problems with this setup include:
1. Users can't use the expected Keras tools to handle embeddings;
2. Relies on TF1 compatibility to set the right name to the weights (`tf.compat.v1.variable_scope`);
3. Resizing the embeddings, a major source of bugs atm, uses complex logic that consists in manipulating `tf.Variable`.
## Proposed change
The proposal is straightforward: replace `TFSharedEmbeddings` by `tf.keras.layers.Embedding`, remove `TFWrappedEmbeddings`, and make the necessary adaptations. A few details to keep in mind (and that you can browse in the code):
1. There is a whole new code path for resizing the embeddings. Instead of `if/else` in the original functions, changed functions were rewritten with `_v2` prepended to their name (which should also facilitate the transition). You can see that the new functions are simpler than the originals;
2. Giving the right name to the embeddings (so we can load existing weights) was the hardest part. TF had limited maneuverability here. To pull it off, I relied on UNDOCUMENTED behavior of `tf.name_scope`. Normally, `tf.name_scope` appends to the existing scope -- if the scope for the current layer is `foo`, weights are in the form of `foo/weights:0`; if we add a context manager `tf.name_scope("bar")`, weights will be in the form of `foo/bar/weights:0`. However, [if the argument of `tf.name_scope` ends with `/`](https://github.com/tensorflow/tensorflow/blob/359c3cdfc5fabac82b3c70b3b6de2b0a8c16874f/tensorflow/python/framework/ops.py#L6984), then it will be a stand-alone name scope. Taking the previous example, with `tf.name_scope("bar/")`, weights will be in the form of `bar/weights:0`. This behavior has been in the TF codebase since its first commit (>7 yrs), and replacing `TFWrappedEmbeddings` relies on this behavior;
3. The existing TF Bart assumes the input/output embeddings are tied, which PT Bart does not assume. I've not changed this part, so the example you can see in this PR is for models with tied weights;
4. If you open PT Bart and compare side by side, you'll see that the implementations on the two frameworks are now more similar :)
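As a small illustration of the `tf.name_scope` trailing-slash behavior described in point 2 above (a sketch — the exact weight names are worth verifying on your TF version):
```python
import tensorflow as tf

class Demo(tf.keras.layers.Layer):
    def build(self, input_shape):
        # Relative scope: nests under the layer's own scope ("foo").
        with tf.name_scope("bar"):
            self.w1 = self.add_weight(name="weights", shape=(1,))
        # Trailing slash: stand-alone scope, independent of the layer's scope.
        with tf.name_scope("bar/"):
            self.w2 = self.add_weight(name="weights", shape=(1,))
        super().build(input_shape)

    def call(self, inputs):
        return inputs

layer = Demo(name="foo")
_ = layer(tf.zeros((1, 1)))
print(layer.w1.name)  # expected: foo/bar/weights:0
print(layer.w2.name)  # expected: bar/weights:0
```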
I estimate about a 1-2 weeks worth of work to propagate the change, which includes:
1. Replace all `TFSharedEmbeddings` and `TFWrappedEmbeddings`;
2. Handle edge cases -- `resize_token_embeddings` is not implemented/is broken (and untested) in several recent TF models;
3. Remove/deprecate old code after 1 and 2 are done.
## Pros/cons
(+) Simpler and smaller codebase, especially for the models;
(+) TF model code closer to PT's;
(+) Keras-native embeddings ( = users and contributors can be more productive);
(+) `resize_token_embeddings` usable in all models;
(-) Time spent refactoring is time not spent building new things;
(-) The solution still relies on named scopes for cross-framework weight matching, which is hacky. | 09-08-2022 14:14:19 | 09-08-2022 14:14:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>My thoughts:
- I think the behaviour of `tf.name_scope` is intended and stable, even if it's not documented (TF documentation isn't always great). I think we can rely on that safely, and it's a lot better than using compatibility methods from `v1`.
- I agree that how we're doing this right now isn't great, and this code is a big improvement.
- I think how we use `name_scope` is still a little problematic. However, I don't want to make any big breaking changes there right now because the PT codebase will probably also change soon to use whatever new pickle-free state dict save format the PT devs come up with!
So overall, I think this is a good addition that cleans up a longstanding source of issues in the code, and shouldn't take too long to implement across the codebase. |
transformers | 18,938 | closed | Update default revision for document-question-answering | # What does this PR do?
Prior to this change, users needed to instantiate a tokenizer themselves while using `impira/layoutlm-document-qa` to set the `add_prefix_space=True` parameter. I made this the default in the tokenizer's config [here](https://huggingface.co/impira/layoutlm-document-qa/commit/52e01b37ccf248953eb527c1d96e9ec1750f3c3c), and this change simply updates the pinned revision to reference it.
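For reference, the previous workaround looked roughly like this (a sketch based on the model card's instructions):
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-qa", add_prefix_space=True)
nlp = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
    tokenizer=tokenizer,
)
```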
After this change, the following commands work:
```
In [1]: from transformers import AutoTokenizer, pipeline
In [2]: nlp = pipeline('document-question-answering')
In [3]: nlp(
...: "https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
...: "What is the invoice number?"
...: )
Out[3]: {'score': 0.9998127222061157, 'answer': 'us-001', 'start': 15, 'end': 15}
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge | 09-08-2022 14:07:40 | 09-08-2022 14:07:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Gentle nudge @Narsil @NielsRogge |
transformers | 18,937 | closed | Exit early in load if no weights are in the sharded state dict | # What does this PR do?
As suggested by @stas00 in #18911, this PR checks whether there are any parameters in the state dict to load in the current module and exits early if there are none. This might be useful when loading a huge model with a lot of shards.
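In rough pseudocode the check looks like this (names are illustrative, not the exact ones in `modeling_utils.py`):

```python
def load(module, state_dict, prefix=""):
    # Early exit: nothing in this shard belongs to this submodule, so skip it entirely.
    if not any(key.startswith(prefix) for key in state_dict):
        return
    module._load_from_state_dict(state_dict, prefix, {}, True, [], [], [])
    for name, child in module._modules.items():
        if child is not None:
            load(child, state_dict, prefix + name + ".")
```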
@stas00 could you try and see if there is a gain with this or not? | 09-08-2022 13:25:34 | 09-08-2022 13:25:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,936 | closed | I render only black screen | ### System Info
I get the following message
```
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E41C550>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E4328B0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:01<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
```
My GPU: NVIDIA GeForce GTX 1660 Ti
Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
16 GB DDR4 RAM
Can you please help me solve it?
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have opened the program and checked it
### Expected behavior
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E41C550>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E4328B0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:01<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink | 09-08-2022 10:41:01 | 09-08-2022 10:41:01 | Hmmm weird error! If you don't have a lot of items in your cache, I would recommend removing it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I noticed that this issue might be stale? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,935 | closed | [ViT] Add note about interpolation of position encodings | ### Feature request
We should add a note to the docs on the fact that, in order to fine-tune ViT on higher resolution (e.g. 512x512), one can set `interpolate_pos_encoding=True` in the forward of the model.
### Motivation
This thread on the forum: https://discuss.huggingface.co/t/fine-tuning-image-transformer-on-higher-resolution/22623/4
### Your contribution
I'll take this! | 09-08-2022 09:50:40 | 09-08-2022 09:50:40 | Hi,
I'm finding it hard to set `interpolate_pos_encoding=True` by redefining the model's forward method.
Could you give a brief step-by-step example of how to do so in order to train the model, and not just run a single forward pass?
I thought it was just a matter of modifying the model config (which feeds the module parameters), as one can set `output_attentions=True` that way for example, but I see that is not the case for `interpolate_pos_encoding`.
Thank you!
P.S.: I have posted this same question on the Hugging Face forum<|||||>Could you clarify? The only thing you need to change is to pass `interpolate_pos_encoding=True` to the forward when training the model (no need to redefine the forward method).
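For example (minimal sketch; any ViT checkpoint works, the higher-resolution input just needs to be divisible by the patch size):

```python
import torch
from transformers import ViTModel

model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
pixel_values = torch.randn(1, 3, 512, 512)  # higher resolution than the 224x224 used in pre-training
outputs = model(pixel_values, interpolate_pos_encoding=True)
```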
This issue was fixed in #19103, therefore I'm closing this issue.<|||||>>
I fine-tune the model using the built-in Trainer class from transformers, not by calling the forward method directly, so I can't find a way to set `interpolate_pos_encoding` to `True` in that case.
<|||||>Hi @NielsRogge
Is there any method to set the `interpolate_pos_encoding` to `True` while using the Trainer API?
Not able to find a method to pass it as a parameter to the `forward` method without redefining the `forward` method.
Also, it would be really helpful for the community if a section explaining this step were added to the example notebooks.<|||||>Pinging @sgugger here - the question is whether one can set a boolean argument in the forward of a model to `True` when using the Trainer API.<|||||>No, that is not possible. |
transformers | 18,934 | closed | Neptune.ai integration improvements | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- Neptune Run creation for every training, mostly affects HPO
- Logging model checkpoints
- Support for HPO and DDP
- Better accessibility of `neptune` across the codebase (`report_to all` etc.)
- Docs improved - added an entry about `NeptuneCallback`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Neptune: @shnela
Not sure who to call in the context of integrations:
- trainer: @sgugger
| 09-08-2022 09:21:57 | 09-08-2022 09:21:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,933 | closed | Simplify `is_pad_token_not_equal_to_eos_token_id` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Drops a redundant bool check, which simplifies the expression for `is_pad_token_not_equal_to_eos_token_id`.
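For context, the redundancy is of this general shape (illustrative only, not a verbatim copy of the code):

```python
eos_token_id, pad_token_id = None, 0

before = (eos_token_id is None) or (eos_token_id is not None and pad_token_id != eos_token_id)
after = (eos_token_id is None) or (pad_token_id != eos_token_id)
assert before == after  # `A or (not A and B)` is equivalent to `A or B`
```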
## Who can review?
@patrickvonplaten | 09-08-2022 09:14:07 | 09-08-2022 09:14:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,932 | closed | Fix LayoutXLM wrong link in README | # What does this PR do?
Fixes the wrong LayoutXLM link in the README.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-08-2022 06:23:55 | 09-08-2022 06:23:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger could you review this PR please? |
transformers | 18,931 | closed | add DDP HPO support for sigopt. only main_process will have HPO, and … | …pass argument to other process
Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
HPO does not support DDP at the moment; this PR adds support in the SigOpt backend.
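Roughly, the idea is that only the main process asks SigOpt for a suggestion and the chosen hyperparameters are then broadcast to the other DDP processes. An illustrative sketch (variable names are hypothetical, not the exact trainer code):

```python
import torch.distributed as dist

# assumes the process group has already been initialized by the DDP launcher
if dist.get_rank() == 0:
    params = [dict(suggestion.assignments)]  # `suggestion` comes from the SigOpt connection
else:
    params = [None]
dist.broadcast_object_list(params, src=0)    # lists of picklable objects are supported
trial_params = params[0]
```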
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
| 09-08-2022 03:41:15 | 09-08-2022 03:41:15 | @sgugger @yao-matrix. please help to review<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger thanks for the review. the torch.distributed.broadcast_object_list only support list(each element is pickable) input, does not support dict class |
transformers | 18,930 | closed | [WIP] Add ZeroShotObjectDetectionPipeline (#18445) | # What does this PR do?
This PR adds the `ZeroShotObjectDetectionPipeline`. It is tested on the `OwlViTForObjectDetection` model and should enable inference via the following API:
```
from transformers import pipeline
pipe = pipeline("zero-shot-object-detection")
pipe("cats.png", ["cat", "remote"])
```
This pipeline could default to the [https://huggingface.co/google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) checkpoint
Fixes # (`18445`)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Link to the [Issue](https://github.com/huggingface/transformers/issues/18445)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@alaradirik @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-07-2022 22:11:50 | 09-07-2022 22:11:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, just seeing the merge messed up the commit history. There are 377 changes, which is impossible for the review and merge the PR into `main`.
I suggest to reset to the last clean commit locally. Then use `git rebase main` to keep update with `main` (after pulling the latest changes from remote `main` into local `main`). Or any way works (as I am not sure what causes the current git status)<|||||>> Hi, just seeing the merge messed up the commit history. There are 377 changes, which is impossible for the review and merge the PR into `main`.
>
> I suggest to reset to the last clean commit locally. Then use `git rebase main` to keep update with `main` (after pulling the latest changes from remote `main` into local `main`). Or any way works (as I am not sure what causes the current git status)
Hi @ydshieh sorry for that. Was in a hurry to wrap the PR since I was going for vacation. Messed up in rebasing. Have reverted to stable commit. Will add the correct changes once I am back!<|||||>No problem, @sahamrit! I am super happy that you are able to get back to the stable commit 💯 . Have a nice vacation!<|||||>Hi @alaradirik , can you review the changes?<|||||>> Thank you for this PR.
>
> * I suggest to modify the output of the pipeline to be more "natural". (see relevant comment).
> * `text_queries` should be renamed `candidate_labels` to be in line with `zero-shot-classification`.
Hey @Narsil! I suggested using `text_queries` instead because it is a multi-modal model where users query images with free-form text. The queried object is either found or not and the found object's label is not chosen from a selection of candidate labels, so I think it'd make more sense to keep as it is.<|||||>> Hey @Narsil! I suggested using text_queries instead because it is a multi-modal model where users query images with free-form text. The queried object is either found or not and the found object's label is not chosen from a selection of candidate labels, so I think it'd make more sense to keep as it is.
Are you sure ? I just tried your code, and it seems all the labels stem from the text being sent. Meaning I think there is a 1-1 correspondance between `label` and `text_queries` (meaning `candidate_labels` would be a fine name).
```python
from transformers import pipeline
object_detector = pipeline(
"zero-shot-object-detection", model="hf-internal-testing/tiny-random-owlvit-object-detection"
)
outputs = object_detector(
"./tests/fixtures/tests_samples/COCO/000000039769.png",
text_queries=["aaa cat", "xx"],
threshold=0.64,
)
print(outputs)
```<|||||>Hi @Narsil, Sure the output labels are taken **exactly** from the input text_queries. The reason of naming it "text_queries" instead of "candidate_labels" as in case of zero-shot-image-classification is that, in zero-shot-image-classification pipeline, the [candidate labels are wrapped by the hypothesis template](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/zero_shot_image_classification.py#:~:text=candidate_labels%20(%60List%5Bstr,logits_per_image ), whereas here the text_queries are free text queries!
Hope it clarifies<|||||>> Are you sure ? I just tried your code, and it seems all the labels stem from the text being sent. Meaning I think there is a 1-1 correspondance between `label` and `text_queries` (meaning `candidate_labels` would be a fine name).
>
Yes, there is a 1-1 correspondence but I meant only the query text / a single label is evaluated for each object, whereas the label is selected from among multiple candidate labels for `zero-shot-classification`.<|||||>> Yes, there is a 1-1 correspondence but I meant only the query text / a single label is evaluated for each object, whereas the label is selected from among multiple candidate labels for zero-shot-classification.
I still think that `zero-shot` -> `candidate_labels` logic works. If we reuse names, it means that it's easier on users to discover and use pipelines. The fact that they are slightly different doesn't justify in my eyes the use of a different name.
I would even argue that they are exactly the same and the difference in how they are used are cause by `classification` vs `object-detection` not by what `candidate_labels` are.
I personally think using `candidate_labels` would be misleading and confusing given architecture and use case of this model. There have been other zero-shot object detection papers published very recently and it'd be better to get the naming right in order to avoid future breaking changes.<|||||>HI @Narsil @alaradirik, kindly review the changes |
transformers | 18,929 | closed | Starts on a list of external deps required for dev | I've found that I need to install MeCab manually on my AS Mac while working on #18702.
# What does this PR do?
Adds nudge to install MeCab from Homebrew to dev contributing instructions.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
| 09-07-2022 18:47:47 | 09-07-2022 18:47:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,928 | closed | Disable model checkpoint sharding of large models for SageMaker Model Parallel | Disable model checkpoint sharding of large models for SageMaker Model Parallel
* Using large max shard size since can't disable completely from the `save_pretrained()` call
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-07-2022 18:05:50 | 09-07-2022 18:05:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Oh ok, I see. That makes sense, thanks! |
transformers | 18,927 | closed | Skip some doctests in quicktour | The quicktour includes code snippets that instantiate a generic `dataset["train"]` and `dataset["test"]` (in the `Trainer` sections) that's only meant to be an example a user can copy/paste and replace with their own dataset. This causes the tests to fail since no dataset is actually being loaded. This PR adds the `# doctest: +SKIP` directive to skip the affected code snippets (the alternative option is to include a real dataset in the examples that can be loaded). | 09-07-2022 17:41:33 | 09-07-2022 17:41:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM, but it might be better to use an already processed dataset we could host somewhere so the whole thing can run (particularly since this also becomes a notebook).<|||||>I'll merge this for now so the daily CI is happy and then update it later with a processed dataset! |
transformers | 18,926 | open | Follow ups to DocumentQuestionAnswering Pipeline | ### Feature request
PR https://github.com/huggingface/transformers/pull/18414 has a number of TODOs left over which we'd like to track as follow up tasks.
## Pipeline
- [x] Add support for documents which have more than the tokenizer span (e.g. 512) words
- [ ] Add support for multi-page documents (e.g. for Donut, we need to present one image per page)
- [x] Rework use of tokenizer to avoid the need for `add_prefix_space=True`
- [x] Re-add support for Donut
- [ ] Refactor Donut usage in the pipeline or move logic into the tokenizer, so that pipeline does not have as much Donut-specific code
## Testing
- [ ] Enable `test_small_model_pt_donut` once `hf-internal-testing/tiny-random-donut` is implemented
## Documentation / Website
- [x] Add DocumentQuestionAnswering demo to [Hosted Inference API](https://huggingface.co/impira/layoutlm-document-qa) so that model demos work
- [ ] Add tutorial documentation to [Task Summary](https://huggingface.co/docs/transformers/v4.21.3/en/task_summary#question-answering)
### Motivation
These are follow ups that we cut from the initial scope of PR #18414.
### Your contribution
Happy to contribute many or all of these. | 09-07-2022 16:55:54 | 09-07-2022 16:55:54 | cc'ing @Narsil for enabling the model on the inference API, cc'ing @stevhliu for adding tutorial documentation to the task summary<|||||>@NielsRogge because we removed `donut-swin` from `AutoModelForDocumentQuestionAnswering`, you can no longer create a pipeline with donut, i.e.
```
In [2]: p = pipeline('document-question-answering', model='naver-clova-ix/donut-base-finetuned-docvqa')
/Users/ankur/projects/transformers/venv/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
The model 'VisionEncoderDecoderModel' is not supported for document-question-answering. Supported models are ['LayoutLMForQuestionAnswering', 'LayoutLMv2ForQuestionAnswering', 'LayoutLMv3ForQuestionAnswering'].
```
Should we add it back to that list? Or what is the best way to support that?<|||||>Could we re-open this (I don't think I have permissions to)? There are still a few changes necessary to complete all of the checkboxes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ankrgyl Can I ask you if I can work on this?
If I want to work on adding support for multi-page documents (e.g. for Donut, we need to present one image per page), may I ask you where I can start to proceed making contributions?<|||||>Absolutely!
Feel free to start looking here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/document_question_answering.py<|||||>> * Add support for multi-page documents (e.g. for Donut, we need to present one image per page)
Thank you! I carefully read it! In order to add support for multi-page documents in `document_question_answering.py`, should I modify some methods in that file such as `preprocess()`? Can I create a pull request of the file you provided after modifying those methods?<|||||>@ankrgyl Hello. I would love to contribute to this task : Add tutorial documentation to Task Summary. Is it open and may I get pointers on how to begin working on it?
Thank you.<|||||>@elabongaatuo It seems like the Add tutorial documentation to Task Summary is still open. are you working on it? It seems you need to change starting from [here](https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/src/transformers/pipelines/document_question_answering.py#L103)<|||||>Hello @y3sar , no, I am not working on it at the moment. <|||||>@elabongaatuo then I would like to take it up if there is no problem with you
> Hello @y3sar , no, I am not working on it at the moment.
<|||||>> @elabongaatuo then I would like to take it up if there is no problem with you
>
> > Hello @y3sar , no, I am not working on it at the moment.
@y3sar , sure thing. 😊 no problem.<|||||>@ankrgyl I would Like to work on this Add tutorial documentation to [Task Summary](https://huggingface.co/docs/transformers/v4.21.3/en/task_summary#question-answering) and also in Add support for multi-page documents (e.g. for Donut, we need to present one image per page) |
transformers | 18,925 | closed | pin TF 2.9.1 for self-hosted CIs | # What does this PR do?
Same as #18818, but for docker image build and self-hosted CIs.
| 09-07-2022 15:02:36 | 09-07-2022 15:02:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Related PR -- #18917 |
transformers | 18,924 | closed | Use tiny models for ONNX tests | # What does this PR do?
Uses tiny random models for the ONNX tests to speed up the test suite. Closes #18819
## TODO
- [x] Add tiny model for `deepmind/language-perceiver`
- [x] Add tiny model for `deepmind/vision-perceiver-conv`
- [x] Add tiny model for `hustvl/yolos-tiny`
- [x] Add tiny model for `nvidia/segformer-b0-finetuned-ade-512-512`
- [x] Add tiny model for `google/long-t5-local-base`
- [ ] Ensure slow tests pass
### Hub PRs that need merging to ensure slow test pass
- [x] https://huggingface.co/hf-internal-testing/tiny-random-beit/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-deit/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-deit/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-clip/discussions/1
- [ ] https://huggingface.co/hf-internal-testing/tiny-random-clip/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-convbert/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-xlm-roberta/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-xlm-roberta/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-ibert/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-ibert/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-blenderbot-small/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-blenderbot-small/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-mt5/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-mt5/discussions/2 | 09-07-2022 14:19:29 | 09-07-2022 14:19:29 | Thanks, @lewtun
Let's run the scheduled CI manually (for ONNX tests) before merge :-)<|||||>> Thanks, @lewtun
>
> Let's run the scheduled CI (for ONNX tests) before merge :-)
Yes, this is still WIP because I discovered some slow tests fail with the new tiny models. Will debug and fix on the model side where necessary :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18924). All of your documentation changes will be reflected on that endpoint.<|||||>I left a few comments on Hub PRs. In general, we also like to have small vocab size. But if the current issues persist, no problem for me to use the tokenizers config or files from the original model checkpoint.<|||||>> I left a few comments on Hub PRs. In general, we also like to have small vocab size. But if the current issues persist, no problem for me to use the tokenizers config or files from the original model checkpoint.
Thanks!
As discussed offline, using a small vocab size `v` in the model config requires that `len(tokenizer) == v`. Otherwise the model cannot run inference because the tokenizer will generate out of vocab input IDs and throw an `index out of range` error.
AFAIK the only way to handle this is to train a tokenizer from scratch on a tiny "corpus" and use the resulting vocab size in the model config. This is simple for fast tokenizers, but somewhat painful for slow ones that don't have the `train_from_iterator()` method.
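For fast tokenizers the route looks roughly like this (illustrative sketch):

```python
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("bert-base-uncased")
tiny_corpus = ["just a handful of sentences", "to build a tiny vocab"] * 50
tiny_tok = base.train_new_from_iterator(tiny_corpus, vocab_size=100)
print(len(tiny_tok))  # the model config's vocab_size then needs to be set to len(tiny_tok)
```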
In the end it may not be entirely necessary to optimise the model size this way if the resulting "tiny" models are fast enough for our test suite<|||||>As discussed internally, we'll revert the changes to the model repos and create dedicated `tiny-random-onnx-x` repos for the ONNX tests<|||||>Stable bot begone!<|||||>@gante told me we can use `wip` label to avoid this bot 😃 <|||||>Closing in favour of https://github.com/huggingface/transformers/pull/20333 |
transformers | 18,923 | closed | Attention_mask generation error in generation_utils.py | The original logic to generate the attention_mask (of GPT series models) is wrong. I revised the logic to generate attention_mask.
| 09-07-2022 14:15:41 | 09-07-2022 14:15:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,922 | closed | [WIP] add SpeechT5 model | # What does this PR do?
Add the SpeechT5 model to Transformers. See also https://github.com/huggingface/transformers/issues/17569
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
## Current status of this PR
### To-do list
We decided not to implement fine-tuning for now. But when we do:
- verify that the ASR model (`SpeechT5ForSpeechToText`) can be fine-tuned on new data
- verify that the TTS model (`SpeechT5ForTextToSpeech`) can be fine-tuned on new data (loss still needs to be implemented)
- verify that the voice conversion model (`SpeechT5ForSpeechToSpeech`) can be fine-tuned on new data (loss still needs to be implemented)
We decided not to implement `SpeechT5ForPreTraining` for now.
### Notes
- When `attention_mask` is not all ones, the output is slightly different from the original model at the point where the mask goes from 1 to 0. This is due to a difference in how both models subsample the attention mask (`_get_feature_vector_attention_mask` in `SpeechT5SpeechEncoderPrenet`). This is not a big deal but may cause tiny ripple differences in the outputs elsewhere.
- The original model sets the `attention_mask` to 0 for padding tokens in the decoder `input_ids`. I disabled this because it does not play nice with `model.generate`. So the predictions are slightly different for the timesteps following the padding token (which really only happens when the sequence is complete but other sequences in the same batch have not completed yet).
| 09-07-2022 14:08:06 | 09-07-2022 14:08:06 | Hello @hollance. I'll be helping with the Transformer Encoder-Decoder.<|||||>Hi @hollance I will be helping with TextDecoderPrenet / TextDecoderPostnet<|||||>@anuragshas:
> I will be helping with TextDecoderPrenet / TextDecoderPostnet
Great! Are you also interested in looking at the tokenizer, since I believe the text pre- and post-net need to use that. The original model uses a BPE tokenizer (there is a download link in the README). I'm not sure what the current tokenizer is in the code, it was copied from Wav2Vec2 but I didn't look at it in detail yet.
<|||||>The `SpeechEncoderPrenet` is complete now. It gives the same results as the original model. However, there still are some TODOs in this part of the code to look at later.<|||||>The encoder is complete and verified to work (although there are some parts that possibly could be rewritten, marked with `TODO`). I've started adding the decoder but this doesn't work yet (didn't have time yet to fix it up).<|||||>Thanks for the in-depth review, @sanchit-gandhi!
> Wondering if it would have been better to copy everything from Speech2Text, including for the encoder? I know I pointed you in the direction of W2V2! But it has a whole bunch of functionality that is pretty specific to W2V2 pre-training that isn't used in SpeechT5 (e.g. SpecAug). It might be possible to condense the code by copying the Speech2Text encoder model, rather than that from W2V2.
Aside from pre-training stuff, I did bring the code in line with Speech2Text. It's not _exactly_ the same but a bit of a hybrid between Wav2Vec2 and Speech2Text. ;-)
> We can change the function `_get_feature_vector_attention_mask` to match the original implementation if there's a difference. Better to have correctness here rather than duplicated code from UniSpeech.
I don't think either approach is more "correct" than the other. The question is: at the point where the attention mask goes from 1 to 0, this may happen halfway inside a block of 320 samples (or whatever the frame size is). Does that partially-padded block get included in the predictions or is the entire block considered to be padding and gets excluded? Basically: do we round up or down? SpeechT5 simply makes a different choice here than `_get_feature_vector_attention_mask` but either one works fine.
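Purely as an illustration of the rounding question (made-up numbers):

```python
n_valid_samples, frame_size = 1000, 320
frames_round_down = n_valid_samples // frame_size    # 3: the partially-padded frame is masked out
frames_round_up = -(-n_valid_samples // frame_size)  # 4: the partially-padded frame is kept
print(frames_round_down, frames_round_up)
```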
> Ideally SpeechT5Model should load all the weights for the Transformer backbone, and SpeechT5ForConditionalGeneration all the weights for the Transformer backbone *and* pre-/post-nets. We could try and match the attribute names more closely between SpeechT5Model and SpeechT5ForConditionalGeneration, i.e. always assigning the encoder as self.encoder (rather than self.wrapped_encoder). And then try to make sure the structures follow as closely as possible. Not loading the decoder weights for CTC is fine!
The problem here is that `SpeechT5ForConditionalGeneration` will have `encoder.wrapped_encoder` in the checkpoint while SpeechT5Model only has `encoder`. I could fix this by making a "fake" wrapper that just calls the encoder without applying a pre-net, so that SpeechT5Model also has the `encoder.wrapped_encoder` path. (BartForCausalLM does something similar so it's not unprecedented.) EDIT: implemented this. Now the models load as expected.<|||||>> Aside from pre-training stuff, I did bring the code in line with Speech2Text. It's not exactly the same but a bit of a hybrid between Wav2Vec2 and Speech2Text
That sounds great - this is a new model that sits somewhere between the two (acoustic encoder is more Wav2Vec2-like, but the transformer decoder is similar to Speech2Text), so taking elements from each is no issue!
> Basically: do we round up or down? SpeechT5 simply makes a different choice here than `_get_feature_vector_attention_mask` but either one works fine.
I see, the numerical differences are tiny as you say. Feel free to pick the one you think is more appropriate! I'd opt to bring ours in-line with the 'official' implementation, but you know more about it!
> EDIT: implemented this. Now the models load as expected.
Amazing! With similar logic to `BartForCausalLM` in the end?<|||||>## Design issues & questions
Philosophical question for the Transformers team:
*TL;DR: SpeechT5 is different from the other models in Transformers and doesn't quite fit in with the design of the library. Is the current approach OK, or should we split it up into multiple different, completely independent models?*
Some background on the model: SpeechT5 is a speech-to-text (or ASR) model, but also a text-to-speech (TTS) model, as well as a speech-to-speech model, and even text-to-text. These are four different model types but they all share the same encoder-decoder structure. The only difference is that they have different so-called pre-nets and post-nets.
For example, in the ASR model the encoder pre-net is basically the first set of layers from Wav2Vec2, and the decoder pre- and post-nets are essentially the first and last layers of BART. By swapping in different pre & post-nets, and fine-tuning the model, the same pretrained architecture can handle different tasks.
So far I've implemented only the ASR and TTS model, but there are also checkpoints for voice conversion (speech-to-speech) and pretraining that we might want to add.
Specifically, these are the issues I ran into:
- The current design of Transformers assumes that a model always has one kind of input and one kind of output. This is not true for SpeechT5: some versions of the model have text as input, others speech. Likewise for the output.
- In other seq2seq models, there is a `ForConditionalGeneration` class that does the predictions. Here, we have at least two such classes, so I named them `ForSpeechToText` (ASR) and `ForTextToSpeech` (TTS) instead.
- Normally, we'd have an `Encoder` and a `Decoder` class. In SpeechT5, the encoder and decoder classes also need to run a pre-net. This is why there are wrapper classes such as:
- SpeechT5EncoderWithSpeechPrenet
- SpeechT5EncoderWithTextPrenet
- SpeechT5EncoderWithoutPrenet
- SpeechT5DecoderWithSpeechPrenet
- SpeechT5DecoderWithTextPrenet
- SpeechT5DecoderWithoutPrenet
The `SpeechT5ForSpeechToText` and `SpeechT5ForTextToSpeech` models will instantiate the appropriate encoder and decoder wrapper classes (and also run the post-net).
The base `SpeechT5Model` class needs to have special logic to handle these different wrappers. It shouldn't be used with the "naked" `SpeechT5Encoder` / `SpeechT5Decoder` classes, since they don't have any pre-nets.
This approach works, but it's also trying to shoehorn a model that doesn't quite fit into the design of Transformers.
- One side-effect of having these different pre- and post-nets, is that `SpeechT5Model` cannot know in advance what sort of data it gets as input. The input could be tokens (`input_ids`) or raw speech (`input_values`) or spectrograms (`input_features`).
To allow for this ambiguity, I named the input argument `input_values` everywhere. However, that's the same term that is used for raw audio input. None of the other terms (`input_ids` or `input_features` or `input_embeds`) is really suitable either. Suggestions for a better generic input name that covers the three different modalities are welcome. 😄
- Our seq2seq models combine the preprocessing for the encoder and for the decoder into a `Processor` class. The SpeechT5 ASR model needs a different Processor than the TTS model. So I made `SpeechT5ProcessorForSpeechToText` and `SpeechT5ProcessorForTextToSpeech`. These also use different `FeatureExtractor` objects as they process the audio in different ways.
In the design of Transformers it is assumed each model only has one processor / feature extractor, but here we have two, and we might need a third one (`SpeechT5ProcessorForSpeechToSpeech`) for the voice conversion checkpoint.
Having multiple processors / feature extractors for the some model type doesn't work very well with the `Auto` classes, as this assumes there is always only one.
- The TTS model applies a vocoder to the output of the encoder-decoder model. The weights for this vocoder model are kept separate from the main model and it has its own `Config` object, but the implementation lives in `modeling_speecht5.py`. Currently there is no way to share vocoders between audio models, but they probably should live in their own separate world. (Also affects the WIP SpeechToSpeech and FastSpeech2 models.)
- The `model.generate()` logic works fine for the ASR model but not for the TTS model. It would be nice if the `GenerationMixin` could handle the TTS generation logic as well.
- There is a version of the ASR model that only uses the encoder, which outputs CTC tokens. This uses its own tokenizer, `SpeechT5CTCTokenizer`, that derives from `SpeechT5Tokenizer`. I haven't seen that pattern for any of the other models in the library.
- The `SpeechT5ProcessorForSpeechToSpeech` doesn't really fit in with the design of `ProcessorMixin`. It has two feature extractors and no tokenizer. In principle this works, except saving two feature extractors is not supported, as they overwrite each others properties. (Could fix this by overriding the save/load_pretrained logic to add namespacing to the JSON file.)
- Pipelines don't work. When you do the following, it always tries to instantiate the `ForCTC` model. This happens because we have both a CTC and a Seq2Seq model for ASR, while the pipeline logic assumes there's only one of these.
```python
generator = pipeline(task="automatic-speech-recognition", model="Matthijs/speecht5_asr")
```
Except for fixing some small issues, the implementation of SpeechT5 is mostly complete, so you can look at the source in this PR in case the above sounds a bit vague. 😃
What I'd like to know is: How do you feel about the approach I've taken to make this model fit into Transformers?
Obviously, I wouldn't expect a complete redesign of Transformers just to accomodate SpeechT5, but I would like some feedback on whether you think the above decisions are acceptable. It works but it also kind of breaks some of the conventions that users of the library might expect.
An alternative would be to create completely different models in different folders, such as `speecht5_asr` and `speecht5_tts` and to treat these as unrelated. One of these would largely be a copy of the other, but with different pre- and post-nets. (We could simply ignore the `ForPreTraining` model, as it's unlikely to be in high demand.)
<|||||>@hollance Re `.generate()` not supporting TTS -- `transformers` doesn't have any TTS model, and in fact `.generate()` only supports text (or other sets of integers) output. I'm not sure whether expanding `.generate()` is the way to go, I would have to think about it, but I'd be happy to support in whatever is needed from the conditional generation angle!
@sanchit-gandhi you folks are working on the generation of audio, correct? Do you have plans for `generate()` or anything related to conditional generation?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>## Vocoders
The TTS and voice conversion models use a vocoder to convert the predicted mel spectrogram into an audio waveform. Currently this is implemented as `SpeechT5HiFiGAN` inside the SpeechT5 modeling file.
The vocoder is treated as a separate model (on the Hub under [Matthijs/speecht5_hifigan](https://huggingface.co/Matthijs/speecht5_hifigan)). It has its own weights and config that are separate from the SpeechT5 model.
To generate speech, you optionally pass the vocoder object to `model.generate_speech()`. Without it, this method outputs the spectrogram. With the vocoder, it outputs speech.
This allows the user to provide their own vocoder instead of the pretrained one.
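Rough sketch of the intended usage (class and checkpoint names are the temporary ones from this PR and may still change; speaker embeddings omitted for brevity, and `Matthijs/speecht5_tts` is a placeholder name):

```python
from transformers import SpeechT5ForTextToSpeech, SpeechT5HiFiGAN, SpeechT5ProcessorForTextToSpeech

processor = SpeechT5ProcessorForTextToSpeech.from_pretrained("Matthijs/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("Matthijs/speecht5_tts")
vocoder = SpeechT5HiFiGAN.from_pretrained("Matthijs/speecht5_hifigan")

inputs = processor(text="Hello world", return_tensors="pt")
spectrogram = model.generate_speech(inputs["input_ids"])                # without a vocoder: mel spectrogram
waveform = model.generate_speech(inputs["input_ids"], vocoder=vocoder)  # with a vocoder: audio waveform
```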
(Note that automapping of `SpeechT5HiFiGANConfig` is not working in this implementation because it has its own config file.)
My suggestion is that we treat vocoders as separate model types, just like feature extractors and tokenizers, and that they are owned by the `processor`, which calls the vocoder in the postprocessing / decoding step.
Note that the [original checkpoint](https://huggingface.co/mechanicalsea/speecht5-vc) for the voice conversion model comes with trained vocoders for the different voices, that do not use the Hi-Fi GAN architecture but the one from Parallel WaveGAN. I did not implement this, since the Hi-Fi GAN vocoder works fine here too.
<|||||>@sanchit-gandhi
> * Is `SpeechT5ProcessorForSpeechToSpeech` working or are the feature extractors still overriding each other?
They are still overriding each other. I think the only way to fix this is to override `save_pretrained` and `from_pretrained` that are inherited from `ProcessorMixin`.
Even though what gets saved into `preprocessor_config.json` is wrong, the processor actually does process the data OK, so we could get away with it — but this is mostly due to both feature extractors using the same property names. And that would be asking for bugs when someone uses different configuration values.
<|||||>## SpeechT5ProcessorForSpeechToSpeech
(Writing this in case we want to fix this issue properly at some point.)
The problem: processor objects are assumed to have a tokenizer and a feature extractor. The config for the feature extractor is saved to `preprocessor_config.json`. However, `SpeechT5ProcessorForSpeechToSpeech` has no tokenizer and two feature extractors. As a result, the second feature extractor overwrites the JSON from the first.
In my opinion, the correct approach here would be to not hardcode the filename for feature extractors. Rather than using the `FEATURE_EXTRACTOR_NAME` constant, each feature extractor would get a class variable `feature_extractor_config_name = FEATURE_EXTRACTOR_NAME`. By default this is `preprocessor_config.json` but a class can override it if necessary. For the S2S model, we'd have `preprocessor_encoder_config.json` and `preprocessor_decoder_config.json`, for example.
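As a sketch of that idea (not actual library code):

```python
FEATURE_EXTRACTOR_NAME = "preprocessor_config.json"

class FeatureExtractionMixin:
    # Default keeps today's behaviour.
    feature_extractor_config_name = FEATURE_EXTRACTOR_NAME

class EncoderFeatureExtractor(FeatureExtractionMixin):
    feature_extractor_config_name = "preprocessor_encoder_config.json"

class DecoderFeatureExtractor(FeatureExtractionMixin):
    feature_extractor_config_name = "preprocessor_decoder_config.json"
```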
However, the above solution would affect all of the models in Transformers, and it may still not work due to certain assumptions being made (i.e. you need to know the class name of the feature extractor so you can look up what its filename should be, which is a chicken-and-egg problem). So making this change just for SpeechT5 seems excessive at this point.
Hacks I've tried to work around this:
* Override `save_pretrained` in `SpeechT5ProcessorForSpeechToSpeech` to save each feature extractor's config in a subdir. This works OK for saving. (It requires some changes to `_upload_modified_files` so that it would pick up the config files inside these subdirs.)
However, it does not work for loading, since the feature extractor's `from_pretrained` does not know that it's supposed to look inside a subdir, and there's no way to tell it to do so. To fix this would require duplicating a lot of code from `FeatureExtractionMixin`. And even then, it doesn't work with `AutoProcessor`.
* Create a `FeatureExtractionMixinHack` class that extends `FeatureExtractionMixin`. This duplicates the loading and saving code in order to save using different filenames for each feature extractor. The SpeechT5 feature extractors now extend from this. Very messy and brittle. Not even sure if it works OK in all situations.
* Save the properties of both feature extractors in the same file, as nested dictionaries. This requires massive changes as the code is currently set up to save one file per object.
For now, the "solution" is to not use `save_pretrained` and `from_pretrained` with `SpeechT5ProcessorForSpeechToSpeech` and pretend everything is fine. 😅 <|||||>To the reviewer: This PR has been reviewed by the audio team several times already, and this is (hopefully 😄) the final review before merging.
The only remaining thing is replacing the checkpoints with official ones. But I'd rather wait with creating these until the PR has been approved.
(We decided not to implement fine-tuning right now.)<|||||>Hi @sgugger, I made the fixes you asked for. There is now just one feature extractor / processor and the auto classes have been removed again.
(There are two tests that seem to fail but they're not related to this PR.)<|||||>@sgugger Hi Sylvain, I made the changes to the feature extractor you asked for. Also, the checkpoints have been updated to point to the microsoft organization on the hub. If all is good with you, feel free to merge this PR at your leisure. Thanks! 😄
(Again there is a failing test but this seems unrelated to this PR.)
EDIT: Unless @sanchit-gandhi wants to give this the final once-over too.<|||||>Good to merge for me too!<|||||>@sgugger I don't have rights to merge this myself. So someone else needs to press that button. 😅 |
transformers | 18,921 | closed | Fixed typo | Fixed typo itmes --> items
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-07-2022 13:39:09 | 09-07-2022 13:39:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,920 | closed | Add Table Transformer | # What does this PR do?
This PR adds [Table Transformer](https://github.com/microsoft/table-transformer) by Microsoft, which are DETR-compatible models for table detection and table structure recognition tasks in unstructured documents.
Note: I'm making some updates to the original DETR implementation; however, these are justified by the fact that the original DETR implementation by Facebook AI also includes these things, which I didn't add when first porting DETR. Hence, our DETR implementation is now more aligned with the original one.
To do:
- [ ] transfer checkpoints to the Microsoft organization
- [ ] add link to notebook
| 09-07-2022 12:57:14 | 09-07-2022 12:57:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm very much not in favor of adding a new config parameter that controls where the layernorm is applied. I'm not surprised the original code has it, as Facebook AI usually codes models in a modular way, but not Transformers. We had the same thing with BART and friends, and they are coded as distinct models in the library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this PR in favor of #19614 |
transformers | 18,919 | closed | [VideoMAE] Improve code examples | # What does this PR do?
This PR simplifies the code examples of VideoMAE, and adds a seed to make sure the video classifier always predicts "eating spaghetti" on the video (as, due to the sampling of frames, it may occur the model predicts another class, like "eating ice cream"):
```
1019
1020 >>> inputs = feature_extractor(list(video), return_tensors="pt")
1021
1022 >>> with torch.no_grad():
1023 ... outputs = model(**inputs)
1024 ... logits = outputs.logits
1025
1026 >>> # model predicts one of the 400 Kinetics-400 classes
1027 >>> predicted_label = logits.argmax(-1).item()
1028 >>> print(model.config.id2label[predicted_label])
Expected:
eating spaghetti
Got:
eating ice cream
```
Weirdly, this wasn't caught by the doc test CI. It could have to do with the addition of `import numpy as np` to the code snippet. | 09-07-2022 09:43:04 | 09-07-2022 09:43:04 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 18,918 | closed | update the train_batch_size in case HPO changes batch_size_per_device | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
The "total optimization steps" value is incorrect in HPO, since train_batch_size is not updated accordingly. This PR adds the update of this parameter in trainer.train.
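Roughly, the intended fix looks like this (a sketch only; the exact attribute names inside `Trainer` are assumptions on my side):
```python
def refresh_train_batch_size(trainer):
    # After the HPO backend writes the trial's `per_device_train_batch_size` into `trainer.args`,
    # recompute the derived batch size used for the dataloader and "total optimization steps".
    trainer._train_batch_size = trainer.args.per_device_train_batch_size * max(1, trainer.args.n_gpu)
```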
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger | 09-07-2022 09:12:31 | 09-07-2022 09:12:31 | @sgugger @yao-matrix please help review it. the bug in finding when HPO is enabled in example. "Total optimization steps" is incorrect since train_batch_size is not updated accordingly. add the update of this parameter in trainer.train<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,917 | closed | TF: unpin maximum TF version | # What does this PR do?
Unpins TF maximum version.
As in the scheduled run, a few onnx+tf tests broke. I'd say we merge this PR, and put the newly broken tests on our todo list. | 09-07-2022 09:06:23 | 09-07-2022 09:06:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Why why why would we merge a PR with red ticks? Every contributor making a PR this weekend and until this is resolved will wonder what they did wrong.<|||||>I was thinking of the self-hosted scheduled CIs only when reviewing this PR. You are right - we should keep CircleCI / push CI green.<|||||>For the scheduled CI, we can unpin so we know what to fix. But let's do it on Monday if you are ok. |
transformers | 18,916 | closed | facebook/wav2vec2-xls-r-300m-21-to-en TypeError: expected str, bytes or os.PathLike object, not NoneType | ### System Info
transformers == 4.21.3
python == 3.9.2
ubuntu 18
### Who can help?
@patrickvonplaten @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. from transformers import Speech2Text2Processor
2. processor =Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-21-to-en")
### Expected behavior
the processor should be loaded but got this error instead :
`TypeError: expected str, bytes or os.PathLike object, not NoneType` | 09-07-2022 08:31:12 | 09-07-2022 08:31:12 | Hey @Shiro-LK!
Good catch, we're loading a Wav2Vec2 processor here so need to instantiate the corresponding class accordingly:
```python
from transformers import Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-21-to-en")
```
I've opened a PR to update the example on the model card with these steps: https://huggingface.co/facebook/wav2vec2-xls-r-300m-21-to-en/discussions/3 |
transformers | 18,915 | closed | Add image height and width to ONNX dynamic axes | # What does this PR do?
This PR enables dynamic axes for image height / width of ONNX vision models. This allows users to change the height and width of their inputs at runtime with values different from those used to trace the model during the export (usually 224 x 224 pixels)
Here's an example with ResNet and `optimum`:
```python
import requests
from PIL import Image
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoFeatureExtractor
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# Raw image size 480 x 640 pixels
image = Image.open(requests.get(url, stream=True).raw)
# Resize image to 40 x 40 pixels
preprocessor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50", do_resize=True, size=40)
model = ORTModelForImageClassification.from_pretrained("onnx")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
logits.shape
```
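Under the hood, the change amounts to declaring more dynamic axes in the ONNX config of the vision models. A rough sketch of the mapping (the helper below is illustrative, not the actual `OnnxConfig` property):
```python
from collections import OrderedDict
from typing import Mapping

def vision_onnx_inputs() -> Mapping[str, Mapping[int, str]]:
    # Marking axes 0, 2 and 3 as dynamic lets batch size, height and width vary at runtime.
    return OrderedDict([("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})])
```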
I've also checked the slow tests pass:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "beit or clip or convnext or data2vec-vision or deit or detr or layoutlmv3 or levit or mobilevit or resnet or vit" -s
``` | 09-07-2022 07:59:07 | 09-07-2022 07:59:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,914 | closed | Cannot Import BigBirdModel | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.2
- Platform: Linux-5.10.133+-x86_64-with-debian-9.9
- Python version: 3.6.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
----> 2 from transformers import BigBirdTokenizer,BigBirdModel
ImportError: cannot import name 'BigBirdTokenizer'
```
```
----> 4 from transformers import (AlbertModel, AlbertTokenizer, BartModel, BigBirdModel, BigBirdTokenizer,
5 BartTokenizer, BertModel, BertTokenizer,
6 CamembertModel, CamembertTokenizer, CTRLModel,
ImportError: cannot import name 'BigBirdModel'
```
### Expected behavior
Model should get imported without the error | 09-07-2022 07:43:54 | 09-07-2022 07:43:54 | I don't think transformers version 3.0.2 contains the BigBird model. I think updating your version of the transformers package should solve the issue.
I also see that you are using Python version 3.6.6. The latest version of the transformers package requires Python >=3.7.0, so I guess you would also need to update your installed Python version.
Hope this helps!<|||||>Which version does
On Wed, 7 Sep, 2022, 1:43 pm Manish Sridhar, ***@***.***>
wrote:
> I don't think transformers version 3.0.2 contains the BigBird model. I
> think updating your version of the transformers package should solve the
> issue.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/18914#issuecomment-1239059464>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AJDFNZADFM4WMFVCK4LEDSLV5BFD5ANCNFSM6AAAAAAQGPXTJY>
> .
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>@jaideep11061982 I believe the model was added in v4.5.0, but I think either @sgugger or @NielsRogge will be able to better comment on the exact version that you could use.<|||||>Yes, it was introduced in v4.5.0.
In general, please do not open issues without updating to some recent version of Transformers, v3.0.2 is more than two years old and bugs are fixed as continuous development of the new versions, so you need to upgrade to the latest releases to see the fixes anyway.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,913 | closed | Fix XLA fp16 and bf16 error checking | This PR fixes a bug introduced in https://github.com/huggingface/transformers/pull/15022 that wrongfully throws an error when training with XLA device + fp16. `GPU_NUM_DEVICES` is unset by torch_xla in distributed training [here](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/xla_multiprocessing.py#L229).
Tested using the following scripts:
```sh
GPU_NUM_DEVICES=8 python -m torch_xla.distributed.xla_spawn --num_gpus 8 language-modeling/run_mlm.py \
--model_name_or_path bert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--overwrite_output_dir true \
--output_dir /tmp/test-mlm \
--per_device_train_batch_size 10 \
--do_eval \
--fp16 true \
--do_train \
--num_train_epochs 3 \
--optim adamw_torch_xla
```
Thanks to @Lokiiiiii @comaniac for reporting this issue.
cc @sgugger | 09-07-2022 03:02:46 | 09-07-2022 03:02:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>XLA already identifies the device type and publishes it in the environment variable for distributed training:
```
XRT_MULTI_PROCESSING_DEVICE="device:ordinal"
```
Eg: XRT_MULTI_PROCESSING_DEVICE=GPU:0
Eg: XRT_MULTI_PROCESSING_DEVICE=TPU:0
Refer to relevant device specific setup in PT-XLA: https://github.com/pytorch/xla/blob/master/torch_xla/distributed/xla_multiprocessing.py#L219-L276
Looking into single worker training now.<|||||>There might be a much easier solution:
The presence of environment variables of [TPU_NUM_DEVICES](https://github.com/pytorch/xla/blob/6e42e7cb3af01d9f8909e112a3be0148a87acad0/torch_xla/distributed/xla_multiprocessing.py#L79-L88) or [XRT_TPU_CONFIG](https://github.com/pytorch/xla/blob/e7e7fe406c7f276469d3e47ecd23e8c9423ab1b5/TROUBLESHOOTING.md) indicates a TPU environment.
The presence of environment variable GPU_NUM_DEVICES indicates a GPU environment.
<|||||>The most systematic logic should be like:
```
and not (self.device.type == "xla" and is_torch_tpu_available() and xm.xla_device() == gpu)
```
Inspired by this logic, it might be better to have an API to return the current torch_xla device so that we could use it here:
```
and not (self.device_type == "xla" and get_torch_xla_device() != gpu)
```<|||||>I'm fine with both solutions :-)<|||||>I just realized torch_xla already has the API to distinguish different backends.
```
torch_xla._XLAC._xla_real_devices([str(device)])
```
For GPU, it returns
```
['GPU:0']
```
For TPU, it returns
```
['TPU:0']
```
I'll try to implement it with this API.<|||||>It looks like "torch.device" as a type hint can cause CI failure if pytorch is not installed. I've removed it. |
transformers | 18,912 | closed | Failed to import transformers.models.bart.modeling_tf_bart because no module named 'keras' | The following line of code causes an error: `ModuleNotFoundError: No module named 'keras'` whenever I try to initialize a Bart model:
https://github.com/huggingface/transformers/blob/0a632f076d6b275690176b79c64c5559e1240b05/src/transformers/modeling_tf_utils.py#L39
Replaced it with `from tensorflow.python.keras.saving.hdf5_format import save_attributes_to_hdf5_group` and the error was gone.
Is this a bug or I didn't install all necessary packages? | 09-07-2022 01:26:39 | 09-07-2022 01:26:39 | Just had this exact same issue. Thank you for the fix!<|||||>Shouldn't this be open until gets fixed?<|||||>Confirming the issue still exists when using latest TF (2.11). Downgrading TF to 2.9 fix the issue.<|||||>https://stackoverflow.com/questions/74586892/no-module-named-keras-saving-hdf5-format
working for me on TF==2.9 |
transformers | 18,911 | closed | [DeepSpeed ZeRO3] Fix performance degradation in sharded models | When sharded models were added the deepspeed/zero3 branch of model loading of pretrained weights
https://github.com/huggingface/transformers/blob/7d5fde991d598370d961be8cb7add6541e2b59ce/src/transformers/modeling_utils.py#L427-L429
got ~N-shards-x-slower, since it wastefully gathered weights that weren't in the `state_dict`:
**So for example with BLOOM-176 which has 72 shards, the loading was ~70x slower under deepspeed zero3 and nvme offload!**
This fix takes care of the situation with sharded models by finding an intersection of `state_dict` keys and the keys of the parameters of the current submodule that is being loaded to, and thus only gathering the weights that get updated. If there is no intersection, the rest of the branch is skipped altogether.
The 1-shard use case still works as the intersection should be 100%.
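For reference, the core of the idea can be sketched like this (variable and argument names are mine; the real code lives in the sharded-loading loop of `modeling_utils.py`):
```python
import deepspeed
import torch

def gather_and_load(module, state_dict, prefix, args):
    # Only gather the ZeRO-3 partitioned parameters of `module` that this shard's
    # state_dict actually provides; skip the gather entirely when there is no overlap.
    named_params = {f"{prefix}{name}": p for name, p in module.named_parameters(recurse=False)}
    params_to_gather = [p for name, p in named_params.items() if name in state_dict]

    if params_to_gather:
        with deepspeed.zero.GatheredParameters(params_to_gather, modifier_rank=0):
            if torch.distributed.get_rank() == 0:
                module._load_from_state_dict(*args)
```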
| 09-06-2022 23:31:26 | 09-06-2022 23:31:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, I was thinking about this discovery and my suggestion is that we do this not just for deepspeed.
The thing is - with sharded models like bloom-176 (72 shards) - 90% of the time this code:
https://github.com/tjruwase/transformers/blob/81dbd0ba6c5422dec33dd31cf076e44d96d2d968/src/transformers/modeling_utils.py#L436
is doing nothing since its `state_dict`'s "payload" doesn't match the params of the submodule it's called for, as most of the time they are in another shard.
Not sure of the cost or the actual promised saving, but it will save many unnecessary `module._load_from_state_dict(*args)` calls.
Especially since the code is already there anyway.
Thoughts?<|||||>Yes, we could probably ignore the load when there is no parameter to load indeed. Will make a PR this morning.<|||||>Took a stab at it in #18937. Thinking more of it, I'm not sure if we'll get a sensible gain as I expect the calls to `module._load_from_state_dict` to be mostly noops but we can certainly train and measure the difference!<|||||>I followed up here: https://github.com/huggingface/transformers/pull/18937#pullrequestreview-1100945027
|
transformers | 18,910 | closed | Accelerator end training | Add `accelerator.end_training()` to the ends of the example scripts. This ensures that trackers call their ending/finishing functions.
@sgugger @muellerzr | 09-06-2022 23:12:50 | 09-06-2022 23:12:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,909 | closed | Fixed typos in comments of OPTDecoderLayer | # What does this PR do?
Fixed typos in comments of OPTDecoderLayer ( used in Meta OPT models )
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@sgugger | 09-06-2022 21:24:51 | 09-06-2022 21:24:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18909). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,908 | closed | [New Model] Add TimeSformer model | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18724
- [x] Create a working environment for successful inference with the original source code
- [x] Create a debugging script for the original source code
- [x] Separate original model from original preprocessing pipeline
- [x] Test the original model with transformers/VideoMAEFeatureExtractor preprocessing pipeline
- [x] Port TimeSformer to HuggingFace/transformers
- [x] Adds tests for transformers/TimeSformer implementation
- [x] Update variable names to be more explicit
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@NielsRogge
| 09-06-2022 19:39:42 | 09-06-2022 19:39:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge I have added some tests, variable names require some work but maybe I can update names during PR review?<|||||>Hi @NielsRogge @fcakyon do you need help with finishing this and merging it? I'm happy to add the finishing touches<|||||>Currently I am a bit busy with my phd qualification exam, barely finding any free time. If I cannot find any time in the following weeks you may continue @Darktex <|||||>@NielsRogge can you review it again, please? Tried to address all your concerns 👍 <|||||>> Thanks for your work, looks great to me.
Thanks for all the constructive feedback!
<|||||>Hello @NielsRogge, thanks a lot for all the help you have provided. I have opened multiple PRs for the config and model files of all timesformer variants:
https://huggingface.co/facebook/timesformer-base-finetuned-k400/discussions/1
https://huggingface.co/facebook/timesformer-hr-finetuned-ssv2/discussions/1
https://huggingface.co/facebook/timesformer-hr-finetuned-k600/discussions/1
https://huggingface.co/facebook/timesformer-hr-finetuned-k400/discussions/1
https://huggingface.co/facebook/timesformer-base-finetuned-k600/discussions/2
https://huggingface.co/facebook/timesformer-base-finetuned-ssv2/discussions/2
Is there anything I should do about this PR?<|||||>Looks great on my side and ready to merge! Will let @NielsRogge double-check on last time and merge if he's happy :-)<|||||>Thanks for all your work!
Feel free to share on social media, we'll amplify ;)<|||||>Thanks a lot @NielsRogge, will share it after preparing a space :)<|||||>> Thanks for all your work!
>
> Feel free to share on social media, we'll amplify ;)
I have shared it on [Twitter](https://twitter.com/fcakyon/status/1599305469017067521) and [Linkedin](https://www.linkedin.com/posts/fcakyon_timesformer-is-the-first-transformer-based-activity-7005071269373095936-Sb9-?utm_source=share&utm_medium=member_desktop) with the space demo link 🚀 |
transformers | 18,907 | closed | Fix tflongformer int dtype | TFLongformer had a lot of `int32` dtypes in its code, caused by `tf.convert_to_tensor()` defaulting to int32 when passed a list of ints, as well as some explicit `int32` lines. We prefer `int64` across our models, so I've converted everything to use that.
Fixes #13632 | 09-06-2022 18:33:51 | 09-06-2022 18:33:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>But it doesn't look to good to CI 😅 <|||||>Working on that bit!<|||||>Quick update: I did a lot of dtype casting which should resolve the remaining issues. Because TFLED has some sections copied from TFLongFormer, TFLED got updated as well. However, the TFLongformerEmbeddings were copied from TFRobertaEmbeddings, but I broke this connection because we have to do some extra casting in TFLongformer due to things like the `global_attention_mask`, and I don't really want to mess with TFRoberta when it doesn't have issues, because it's a heavily-used model.<|||||>Looks good! 👍
Thanks for taking care of this one |
transformers | 18,906 | closed | Fix incorrect size of input for 1st strided window length in `Perplexity of fixed-length models` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18887
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 09-06-2022 17:33:09 | 09-06-2022 17:33:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,905 | closed | Add checks for more workflow jobs | # What does this PR do?
Apply the same change in #18583, so we can have a slack message saying something goes very wrong in CIs. | 09-06-2022 16:38:53 | 09-06-2022 16:38:53 | |
transformers | 18,904 | closed | Add BART DLM PyTorch pretraining example | Implements a pretraining example for BART (denoising language model). The focus is on getting the data denoising as close to the original fairseq implementation as possible, but at the dataloader level instead of the dataset level.
Heavily inspired by the fairseq implementation and the FLAX implementation. (See `HF (Flax), fairseq, and current implementation`.) Looking for some feedback. Please see `Questions/Uncertainties`.
# Some notes
## Default values
The defaults are set to the [given BART args](https://github.com/facebookresearch/fairseq/issues/1899#issuecomment-1069429320). This differs from the Flax defaults in one respect, namely `poisson_lambda`, which is now set to `3.5` instead of `3.0`.
## HF (Flax), fairseq, and current implementation
There are some differences in implementation between fairseq, the HF FLAX example, and this PyTorch implementation.
- `argwhere` in the Flax example
[in this position](https://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L319)
is not the same as what is happening in fairseq. [In fairseq](https://github.com/facebookresearch/fairseq/blob/a6a63279422f846a3c2f6c45b9c96d6951cc4b82/fairseq/data/denoising_dataset.py#L230)
we check explicitly that the previous token was not a "full stop" (padding token) but in HF we just check whether the
current token is a full stop. In the current example I also explicitly check that the next token is not a full stop,
in case of padding. (However, in practice that should be a non-issue since all batches/samples should have the
same sequence length and there should not be any padding.) A sketch of this check is shown right after this list.
- I found that the result of sentence permutation was not consistent in terms of where the separating pad token ended
up ([bug report](https://github.com/facebookresearch/fairseq/issues/4695)), so I have reimplemented that method so
that sentences in a sequence are still separated by a padding token, even after permutation.
- In HF FLAX, the token_mask is restricted to [non-special and non-padding tokens](https://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L361).
In Fairseq, by default, only the first and last tokens are excluded and [all others](https://github.com/facebookresearch/fairseq/blob/1bba712622b8ae4efb3eb793a8a40da386fe11d0/fairseq/data/denoising_dataset.py#L241)
are prone to masking. The HF implementation seems sensible so I follow that. `get_special_tokens_mask` includes the
padding token, though, so no need to add that separately.
- The Flax example does not include methods to add more noise. I have ported those as well.
- However, I did not adapt `add_insertion_noise` to work well with padded sequences. So the inserted noise may occur
ANYWHERE. It is unclear whether this is intended behavior.
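To make the full-stop check from the first point concrete, here is a rough sketch (tensor and argument names are mine; `full_stop_index` is the sentence-separator/padding token id, and the function operates on a single 1-D sequence):
```python
import torch

def sentence_end_mask(input_ids: torch.Tensor, full_stop_index: int) -> torch.Tensor:
    # A position counts as a sentence end only if it is a full stop and neither the
    # previous nor the next token is one (guards against runs of padding tokens).
    is_full_stop = input_ids == full_stop_index
    prev_ok = torch.ones_like(is_full_stop)
    prev_ok[1:] = ~is_full_stop[:-1]
    next_ok = torch.ones_like(is_full_stop)
    next_ok[:-1] = ~is_full_stop[1:]
    return is_full_stop & prev_ok & next_ok
```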
Alternatively, we could implement all this processing on the dataset level and use `Dataset.map`. This has some
advantages:
- more true to fairseq implementation (sample level rather than batch level);
- cached.
... and disadvantages:
- potentially slower (not batched), although we can integrate a batched approach. But as discussed above, this will be
less true to the original fairseq implementation in `add_insertion_noise`
- every sample is always processed the same. So in small datasets which are seen multiple times by the model, the
same sample will always be processed the same. In a dataloader, that will not be the case because the processing
occurs on every iteration rather than once before training.
## Questions/Uncertainties
- Do the padding tokens still serve a purpose after permutation? (Teaching the model to learn to detect sentence boundaries?) They _can_ get masked and noised.
- It seems that `add_insertion_noise` can insert noise _anywhere_ (also in fairseq), which means that it will also overwrite special
tokens and that sequence don't necessarily end with a EOS token. Is that a problem?
- I have now added auxiliary scripts for config/tokenizer creation when pre-training. Should I remove those? In the FLAX example, these steps are [described inline](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#bart-denoising-language-modeling) but without a given script. So we could also just do that.
- I have explicitly added fingerprints (hashed) because in the past I've come to encounter issues when using spaCy and Dataset.map (every time you load a spaCy model, it has a different hash so the processing will happen every time). I don't see a better way but feel free to share ideas. Maybe someone of the `datasets` team can chime in, too.
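For reference, the fingerprinting workaround boils down to something like this (a sketch; the dataset, map function and spaCy model name are placeholders):
```python
from datasets import load_dataset
from datasets.fingerprint import Hasher

def split_sentences(batch):
    # placeholder for the real spaCy-based sentence splitter
    return batch

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Hash stable values (e.g. the spaCy model name) instead of letting `datasets`
# hash the spaCy object itself, which gets a different hash on every load.
new_fingerprint = Hasher.hash(("sentence-splitting", "en_core_web_sm"))
dataset = dataset.map(split_sentences, batched=True, new_fingerprint=new_fingerprint)
```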
# Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/5096#issuecomment-1237227809
- [x] Did you make sure to update the documentation with your changes?
# Who can review?
- bart: @patrickvonplaten @patil-suraj
- maintained examples (not research project or legacy): @patil-suraj
- flax implementation authors: @sanchit-gandhi @duongna21
| 09-06-2022 16:00:26 | 09-06-2022 16:00:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18904). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @BramVanroy! Thanks for making a start on this PR. In general, we aim to mirror the original repo's functionality as closely as possible. In this case, porting from fairseq is the way to go! So great to see your comments regarding consistency with fariseq, and yes to all of them! If indeed these changes are required, we'll need to update the Flax example accordingly.
We can parallelize the pre-processing with `datasets.map` by passing the `num_proc` arg. To pre-process samples on a specified number of CPU workers concurrently:
```python
dataset = dataset.map(map_fn, num_proc=data_args.preprocessing_num_workers)
```
This, I think, is the way to go for keeping the dataset processing closest to fairseq.
Adding auxiliary scripts for config/tokenizer creation is a great idea - all for it! Makes it far easier to reproduce and run the example :-) |
transformers | 18,903 | closed | TF: final bias as a layer in seq2seq models (replicate TFMarian fix) | # What does this PR do?
This PR replicates the exact same change as in https://github.com/huggingface/transformers/pull/18833 (applied to TFMarian) to the other seq2seq TF models. **_The change is exactly the same for all models._**
In essence, weights that are not in layers are not stored/loaded with `.save_weights()` and `.load_weights()`, the functions we use to store to/load from the hub. These changes move `final_logits_bias` to a layer. Many models do NOT use this bias, but some do.
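The pattern is essentially a thin wrapper layer around the bias weight, something like this sketch (the exact class and argument names in the PR may differ):
```python
import tensorflow as tf

class BiasLayer(tf.keras.layers.Layer):
    # Keeping the bias inside a layer makes it visible to save_weights()/load_weights().
    def __init__(self, shape, initializer, trainable, name, **kwargs):
        super().__init__(name=name, **kwargs)
        self.bias = self.add_weight(name=name, shape=shape, initializer=initializer, trainable=trainable)

    def call(self, x):
        return x + self.bias
```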
⚠️ Prior to this change, existing TF models from `Helsinki-NLP` (`TFMarian`) were wrong (and new conversions failed the automatic checks). I will revisit the canonical models using these architectures to ensure they are okay, and open PRs with weights if not. | 09-06-2022 15:09:11 | 09-06-2022 15:09:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,902 | closed | Generate: add model class validation | # What does this PR do?
Fixes #18210
This PR adds model class validation at the start of generate (all model classes inherit `GenerationMixin`, but few can use `generate()`). It also adds an exception that attempts to redirect the users to the right classes. | 09-06-2022 11:49:00 | 09-06-2022 11:49:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @patrickvonplaten I've requested a re-review of this PR. As per @patrickvonplaten's suggestion, the PR was upgraded to contain the exact class the user should use in the exception (as opposed to pointing to all generate-compatible auto classes).
In the process of building it, I've noticed that the previous version of this PR was incorrect anyways -- PT and TF had a default `prepare_inputs_for_generation`, so we couldn't rely on its existence. Only 1 model was using this default, so I removed it and implemented it in the missing model. The default `prepare_inputs_for_generation` was a public method, but since this PR blocks the use of `generate()` with classes that are not intended to be used with it anyways, removing the public method should have little impact. Nevertheless, it is a point to consider in the review!
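One way to express the check described above (a sketch of the idea only, not the exact implementation):
```python
def can_run_generate(model) -> bool:
    # A model is considered generate-compatible only if its class implements its own
    # `prepare_inputs_for_generation` (i.e. it carries a language-model head).
    return getattr(model, "prepare_inputs_for_generation", None) is not None
```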
_______________________________
Here's an example with the current version of the PR:
```
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained("distilgpt2")
>>> model.generate("foo")
TypeError: The current model class (GPT2Model) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'GPT2LMHeadModel'}
``` |
transformers | 18,901 | closed | unpin slack_sdk version | # What does this PR do?
The issue in Slack SDK 3.18.2 was fixed in 3.18.3, so no longer need to pin 3.18.1.
For more details, see
https://github.com/slackapi/python-slack-sdk/pull/1259#issuecomment-1237731450
https://github.com/slackapi/python-slack-sdk/issues/1261 | 09-06-2022 08:59:29 | 09-06-2022 08:59:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,900 | closed | Converting ruGPT3 model (based on GPT2) to ONNX format | ### System Info
## Environment info
Use google colab with turned on GPU
- `transformers` version: 4.21.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no (at least not call it directly)
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
@lewtun
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to export ruGPT3 model to ONNX format in google colab notebook
Code based on:
https://huggingface.co/docs/transformers/main/en/serialization#exporting-a-model-to-onnx
```
! pip install transformers[onnx] datasets sentencepiece >> pip_log.txt
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GPT2LMHeadModel
from onnxruntime import InferenceSession
# Load model and tokenizer
name = 'sberbank-ai/rugpt3medium_based_on_gpt2'
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.sep_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained(name)
# Save to disk
tokenizer.save_pretrained("local-pt-checkpoint")
model.save_pretrained("local-pt-checkpoint")
# Run ONNX converter
! python -m transformers.onnx --model=local-pt-checkpoint onnx/ --atol=2e-5
```
I'm getting error
> Some weights of the model checkpoint at local-pt-checkpoint were not used when initializing GPT2Model: ['lm_head.weight']
> - This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
> Using framework PyTorch: 1.12.1+cu113
> Overriding 1 configuration item(s)
> - use_cache -> False
> /usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py:808: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if batch_size <= 0:
> Traceback (most recent call last):
> File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
> File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module>
> main()
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 94, in main
> args.output,
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 336, in export
> return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 199, in export_pytorch
> opset_version=opset,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py", line 365, in export
> export_modules_as_functions,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 178, in export
> export_modules_as_functions=export_modules_as_functions,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 1084, in _export
> dynamic_axes=dynamic_axes,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 727, in _model_to_graph
> graph, params, torch_out, module = _create_jit_graph(model, args)
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 602, in _create_jit_graph
> graph, torch_out = _trace_and_get_graph_from_model(model, args)
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 518, in _trace_and_get_graph_from_model
> model, args, strict=False, _force_outplace=False, _return_inputs_states=True
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph
> outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 132, in forward
> self._force_outplace,
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 118, in wrapper
> outs.append(self.inner(*trace_inputs))
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
> result = self.forward(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 844, in forward
> inputs_embeds = self.wte(input_ids)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
> result = self.forward(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward
> self.norm_type, self.scale_grad_by_freq, self.sparse)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2199, in embedding
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
> IndexError: index out of range in self
### Expected behavior
Link to the model
[ruGPT3 medium based on GPT2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2)
Given that the model architecture is based on GPT2, I assumed that the script will be able to convert it automatically.
Unfortunately, the error logs are not too detailed and I don't know if it's even possible to solve this problem at the HuggingFace level. Can I convert it by setting custom ONNX configuration? Or probably there is another workaround? | 09-05-2022 12:23:42 | 09-05-2022 12:23:42 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, I meet the same situation, have you solved it? @Gooogr |
transformers | 18,899 | closed | NaN when training t5-large with bf16 on multiple GPUs | ### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.15.0-1017-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.4.4 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm getting `nan` immediately when training `t5-large` using `bfloat16` on multiple GPUs, but when I run the same script on a single GPU it's fine. I've made a small example below, which I'm running on a machine with 2 A100s. If I do `CUDA_VISIBLE_DEVICES=0 python script.py` the loss is fine, but if I just do `python script.py` I get `nan` from the first iteration.
```
from typing import List, Tuple
import torch
from torch.utils.data import Dataset, DataLoader
import transformers
class MyDataset(Dataset):
def __init__(
self,
data: List[List[str]],
tokenizer: transformers.PreTrainedTokenizerFast,
) -> None:
super().__init__()
self._data = data
self._tokenizer = tokenizer
def __len__(
self,
) -> int:
return len(self._data)
def __getitem__(
self,
index: int
) -> List[str]:
return self._data[index]
def collate_fn(
self,
batch: List[List[str]],
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
prompts = [b[0] for b in batch]
targets = [b[1] for b in batch]
prompts_tokenized = self._tokenizer(
text=prompts,
padding=True,
return_tensors="pt",
return_attention_mask=True,
)
prompts_input_ids = prompts_tokenized["input_ids"]
prompts_attention_mask = prompts_tokenized["attention_mask"]
targets_tokenized = self._tokenizer(
text=targets,
padding=True,
return_tensors="pt",
return_attention_mask=True,
)
targets_input_ids = targets_tokenized["input_ids"]
targets_attention_mask = targets_tokenized["attention_mask"]
return (
prompts_input_ids,
prompts_attention_mask,
targets_input_ids,
targets_attention_mask,
)
if __name__ == "__main__":
model = transformers.T5ForConditionalGeneration.from_pretrained(
"t5-large",
)
tokenizer = transformers.T5TokenizerFast.from_pretrained(
"t5-large",
)
device = (
torch.device("cuda:0")
if torch.cuda.is_available()
else torch.device("cpu")
)
multi_gpu = torch.cuda.device_count() > 1
if multi_gpu:
model = torch.nn.DataParallel(model)
model = model.to(device)
optimizer = transformers.Adafactor(
params=model.parameters(),
lr=1e-4,
scale_parameter=False,
relative_step=False,
)
grad_scaler = torch.cuda.amp.GradScaler(
enabled=True,
)
my_data = [
[f"This is sentence {i}.", f"This is sentence {i + 1}."]
for i in range(1000000)
]
dataset = MyDataset(
data=my_data,
tokenizer=tokenizer,
)
dataloader = DataLoader(
dataset=dataset,
batch_size=8,
shuffle=True,
collate_fn=dataset.collate_fn,
)
for batch in dataloader:
with torch.autocast(
enabled=True,
device_type=device.type,
dtype=torch.bfloat16,
):
batch = [b.to(device) for b in batch]
(
prompts_input_ids,
prompts_attention_mask,
targets_input_ids,
targets_attention_mask,
) = batch
loss = model(
input_ids=prompts_input_ids,
attention_mask=prompts_attention_mask,
labels=targets_input_ids,
).loss
if multi_gpu:
loss = loss.mean()
grad_scaler.scale(loss).backward()
grad_scaler.step(optimizer)
grad_scaler.update()
optimizer.zero_grad()
print(f"Loss = {loss.item()}")
```
### Expected behavior
No `nan`s when training `t5-large` using `bfloat16` on multiple GPUs. | 09-05-2022 11:10:51 | 09-05-2022 11:10:51 | @LysandreJik perhaps you could suggest someone who can help with this please?<|||||>I believe @stas00 has some experience around bfloat16 and nans and may have an idea of where the issue may be coming from<|||||>I have tried t5-large, tested your script to work fine with t5-small - need to find a box with a few large gpus to test t5-large.
Meanwhile, we should revisit the scaling.
the main benefit of using bf16 over fp16 is that there is very little risk of overflow - since bf16's numerical range is the same as of fp32, so no down scaling is needed here.
But perhaps we are hitting underflow here. There is a special tool we have for that - you can try to plug it in and observe where (most likely) underflow is happening
https://huggingface.co/docs/transformers/debugging#underflow-and-overflow-detection
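Plugging it into the script above is roughly (sketch, reusing the `model` defined there):
```python
from transformers.debug_utils import DebugUnderflowOverflow

# Attach before the training loop; it reports the frames where activations or weights
# hit inf/nan or approach the limits of the reduced-precision dtype.
debug_overflow = DebugUnderflowOverflow(model)
```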
But then underflow would just lead to no learning and not really nan I think.
I will try to experiment more with it once I'm able to run t5-large.
<|||||>Thanks @stas00 - I had a go at using the underflow/overflow detection tool but actually when I switched from `DataParallel` to `DistributedDataParallel` I didn't get nans with this toy example! I'll try to do some experiments with some real data next week and let you know if this solves it.<|||||>oh, great, then I don't need to look for a set of large GPUs :) Thank you for this update, @harshil-shah!
Indeed please do let us know when you get a chance to experiment<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,898 | closed | Fix `test_tf_encode_plus_sent_to_model` for `LayoutLMv3` | # What does this PR do?
The recently added `TFLayoutLMv3Model` triggered the test `test_tf_encode_plus_sent_to_model`, which needs to prepare an extra argument `boxes` when calling tokenizer methods, otherwise we get an error
```bash
> for word, box in zip(text, boxes):
E TypeError: 'NoneType' object is not iterable
```
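For reference, the kind of call the overridden test needs to make looks roughly like this (the checkpoint name and box values are illustrative):
```python
from transformers import LayoutLMv3Tokenizer

tokenizer = LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]  # one bounding box per word, required by LayoutLMv3
encoding = tokenizer(words, boxes=boxes, return_tensors="tf")
```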
[Currently failed test](https://github.com/huggingface/transformers/runs/8173669915?check_suite_focus=true) | 09-05-2022 09:47:17 | 09-05-2022 09:47:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>A question -- PT has an equivalent tokenizer test, yet I don't see this test being overwritten in PT's `layoutlmv3`. Do you know why that happens? 🤔 <|||||>> A question -- PT has an equivalent tokenizer test, yet I don't see this test being overwritten in PT's layoutlmv3. Do you know why that happens?
Hi @gante I guess you are talking about
https://github.com/huggingface/transformers/blob/dae0bfc525dd9867a2c9f5917cbf551fb9cc1732/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L1154
(I didn't realized there is a PT version for this test until now)
That test method uses `boxes` (and overwrites the common one), see
https://github.com/huggingface/transformers/blob/dae0bfc525dd9867a2c9f5917cbf551fb9cc1732/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L1185
I can change this PR to be more like that PT test.<|||||>Oh, right! I was looking for the test in the wrong file :) |
transformers | 18,897 | closed | fixes bugs to handle non-dict output | # What does this PR do?
Fixes OWL-ViT's failing slow tests: `test_torchscript_simple`, `test_torchscript_output_attentions`, `test_torchscript_output_hidden_state`.
The failures were due to explicitly calling output keys instead of calling by the index. The bugs were introduced in this [PR](https://github.com/huggingface/transformers/pull/18734). Switching to indexing to fix the issue: `output.last_hidden_state` -> `output[0]`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-05-2022 09:25:25 | 09-05-2022 09:25:25 | Hi, @alaradirik Thank you for the fix.
However, I am wondering if we can let `self.owlvit` return `OwlViTOutput` instead of a `tuple` here. As you can see (I believe), it's not easy to understand the code when tuple indices are used, and it also becomes more difficult for debugging in general. Let me know your thoughts on this 🙏, thanks.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>
> Hi, @alaradirik Thank you for the fix.
>
> However, I am wondering if we can let `self.owlvit` returns `OwlViTOutput` instead of `tuple` here. As you can see (I believe), it's not easy to understand the code when the tuple indices are used, and it also becomes more difficult for debugging in general. Let me know your thought on this 🙏 , thanks.
Hi @ydshieh, self.owlvit already returns `OwlViTOutput`, which has a `return_dict` argument, the failing tests set `return_dict=False`. I think it'd be better to keep it as it is for consistency as OwlViTModel is almost identical to CLIPModel.<|||||>This line
https://github.com/huggingface/transformers/blob/44471422502a4f8cb606f6a8f8d9ae41207f2c2a/src/transformers/models/owlvit/modeling_owlvit.py#L1270
could pass `return_dict=True`, and we can keep using the named outputs in the code.
This doesn't change the method's inputs and outputs, but it makes things easier to read and understand.
Of course, the method `image_text_embedder` itself returns a tuple, and `OwlViTForObjectDetection.forward` will need to handle the tuple as it calls `image_text_embedder`. I am totally fine with this.
This is merely a suggestion (for the readability/debugging in the future). Let's see if @sgugger has any comment, and I let you make the final call :-)
<|||||>> This line
>
> https://github.com/huggingface/transformers/blob/44471422502a4f8cb606f6a8f8d9ae41207f2c2a/src/transformers/models/owlvit/modeling_owlvit.py#L1270
>
>
> could pass `return_dict=True`, and we can keep using the named outputs in the code.
> This doesn't change the method's input and output, but make things easier to read/understand.
>
That makes sense, and it's only a single line of code. I updated the PR, could you take a second look @ydshieh ?
<|||||>LGTM! Thanks @alaradirik for the fix :-)<|||||>Thanks. Sorry about that, @alaradirik, we have to change it back to tuple sadly due to the limitation of `torchscript`.<|||||>> You can't force `return_dict=True` in any part of the model as this mode is not compatible with torchscript (see [here](https://github.com/huggingface/transformers/blob/f85acb4d73a84fe9bee5279068b0430fc391fb36/src/transformers/configuration_utils.py#L385)).
>
> So this change will make Owl-ViT irremediably incompatible with torchscript I believe.
Thank you @sgugger, I'm reverting to my previous commit then<|||||>I don't mean to bother here (i.e. not saying we should change again), but @sgugger I tried the commit with `return_dict=True`, and `test_torch_fx_xxx` and `test_torchscript_xxx` all pass under torch 1.12.1 and 1.11.0.
However, if I change `configs_no_init.return_dict = False` to `True` in the tests, they fail.
It looks like the trace will fail only if `dict` is used at the **final** outputs, but not in the intermediate computation.
(just FYI only)<|||||>Oh in that case, feel free to use the dict outputs!<|||||>@alaradirik Let's not change again (back to `dict`), and feel free to merge as it is.
I can open a separate PR to use `dict` after a more thorough verification.<|||||>> @alaradirik Let's not change again (back to `dict`), and feel free to merge as it is.
>
> I can open a separate PR to use `dict` after a more thorough verification.
I'm merging this for now but yes, `return_dict` would only be set to `True` for the intermediate computation in this case |
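A minimal, self-contained illustration of the torchscript limitation discussed above (plain `torch.jit.trace`, independent of the transformers test suite):

```python
import torch
from torch import nn

class ReturnsTuple(nn.Module):
    def forward(self, x):
        return (x * 2, x + 1)

class ReturnsDict(nn.Module):
    def forward(self, x):
        return {"double": x * 2, "plus_one": x + 1}

x = torch.randn(2, 3)
torch.jit.trace(ReturnsTuple(), x)               # traces fine in the default strict mode
torch.jit.trace(ReturnsDict(), x, strict=False)  # dict outputs are only traceable with strict=False
```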
transformers | 18,896 | closed | Correct naming pegasus x | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-05-2022 09:08:34 | 09-05-2022 09:08:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,895 | closed | [wip: testing doc raises] | testing: https://github.com/huggingface/doc-builder/pull/141/ | 09-05-2022 09:02:15 | 09-05-2022 09:02:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,894 | closed | Mention TF and Flax checkpoints in the Auto model tutorial | Loading TF and Flax checkpoints within the PyTorch architecture circumvents the security risk. | 09-05-2022 08:13:31 | 09-05-2022 08:13:31 | _The documentation is not available anymore as the PR was closed or merged._ |
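For context on the loading path that tutorial change refers to, a small sketch (the flags below are standard `from_pretrained` arguments; loading from a TF checkpoint requires TensorFlow to be installed for the conversion):

```python
from transformers import AutoModel

# Load the TensorFlow weights (tf_model.h5) into the PyTorch architecture,
# avoiding the pickle-based PyTorch .bin deserialization entirely
model = AutoModel.from_pretrained("bert-base-uncased", from_tf=True)

# The analogous flag for Flax checkpoints is from_flax=True
```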
transformers | 18,893 | closed | update docs word error | Word error in this PR update document | 09-05-2022 03:41:22 | 09-05-2022 03:41:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,892 | closed | README_zh-hans.md Document Correction | The Simplified Chinese README should read 预训练 ("pre-training"), not the typo 亿训练.
@sgugger | 09-05-2022 02:13:29 | 09-05-2022 02:13:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,891 | closed | Adds image-guided object detection support to OWL-ViT | This adds support for doing object detection with OWL-ViT using query image(s)
For https://github.com/huggingface/transformers/issues/18748
cc: @alaradirik | 09-04-2022 21:31:31 | 09-04-2022 21:31:31 | Hi @alaradirik I added an initial version for the image-guided obj detection. I still have to add tests and some other cleanup, however, I've some doubts right now
1. Is the handling of query_embedding correct, while doing the mean and finding out the most dissimilar embedding?
2. How should the postprocessor handle this case, when there are no labels as such for this?
3. Any other implementation details I may have missed<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18891). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @alaradirik, I made the changes as per the review comments, could you check if they're fine?
I'm working on test cases currently. In the file [here](https://github.com/huggingface/transformers/blob/main/tests/models/owlvit/test_modeling_owlvit.py#L530), is it okay if I reuse `pixel_values` itself for `query_pixel_values`?
So the above line would return
```
return config, pixel_values, input_ids, attention_mask, pixel_values
```
and be re-used as
```
config_and_inputs = self.prepare_config_and_inputs()
config, pixel_values, input_ids, attention_mask, query_pixel_values = config_and_inputs
```
And apart from the test cases, are there any other changes that I need to make?
<|||||>> Hi @alaradirik, I made the changes as per the review comments, could you check if they're fine?
>
> I'm working on test cases currently. In the file [here](https://github.com/huggingface/transformers/blob/main/tests/models/owlvit/test_modeling_owlvit.py#L530), is it okay if I reuse `pixel_values` itself for `query_pixel_values`?
>
> So the above line would return
>
> ```
> return config, pixel_values, input_ids, attention_mask, pixel_values
> ```
>
> and be re-used as
>
> ```
> config_and_inputs = self.prepare_config_and_inputs()
> config, pixel_values, input_ids, attention_mask, query_pixel_values = config_and_inputs
> ```
>
> And apart from the test cases, are there any other changes that I need to make?
Hi @unography, thank you for the contribution once again!
As for your question regarding the tests, yes, it'd make sense to return `config, pixel_values, input_ids, attention_mask, pixel_values` with `OwlViTForObjectDetectionTester.prepare_config_and_inputs()`.
We can add a line to this function to create `query_pixel_values` as follows:
`query_pixel_values = floats_tensor([self.batch_size, self.num_channels, self.query_image_size, self.query_image_size])`<|||||>> Thank you for the contribution once again! The code seems to be in great shape and I just left a couple of comments regarding minor style corrections and docstrings.
>
> The only issue is the following tests fail:
>
> * OwlViTVisionModelTest.test_model
> * OwlViTVisionModelTest.test_model_outputs_equivalence
> * OwlViTModelTest.test_model_outputs_equivalence
>
> I believe this is due to making `pixel_values` the main argument in `OwlViTForObjectDetection.forward()` but I couldn't pinpoint the exact issue. @ydshieh could you take a look at the test scripts when you have time?
Hi, I couldn't see any test being run by CI. Could you share the error messages?
@unography Could you follow the instruction below to refresh your CircleCI permission
https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-
so that the CI could be triggered. Thanks.<|||||>Sure @alaradirik , I'll go through the review comments and make the changes. And actually, on my local, I'm able to get the test cases passed, on running
```
RUN_SLOW=1 pytest tests/models/owlvit/test_modeling_owlvit.py
```
I'll check once more
Hi @ydshieh , I'm not able to refresh the permission for some reason, I get an error `Something Unexpected Happened` on going to `https://app.circleci.com/settings/user`
I don't have a CircleCI account linked to my Github actually, not sure how to reset the token and run the tests<|||||>> Hi, I couldn't see any test being run by CI. Could you share the error messges?
@ydshieh, of course, here is the full error log. `return_dict` argument is causing the errors but there hasn't been any changes in the modeling or test files to cause this error.
```
―――――――――――――――――――――――――――――――――――――――――――――― OwlViTVisionModelTest.test_model ――――――――――――――――――――――――――――――――――――――――――――――
self = <tests.models.owlvit.test_modeling_owlvit.OwlViTVisionModelTest testMethod=test_model>
def test_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_model(*config_and_inputs)
tests/models/owlvit/test_modeling_owlvit.py:181:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/models/owlvit/test_modeling_owlvit.py:123: in create_and_check_model
self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, num_patches + 1, self.hidden_size))
E AssertionError: torch.Size([12, 32]) != (12, 257, 32)
tests/models/owlvit/test_modeling_owlvit.py ⨯✓s✓ 13% █▍
―――――――――――――――――――――――――――――――――――― OwlViTVisionModelTest.test_model_outputs_equivalence ――――――――――――――――――――――――――――――――――――
self = <tests.models.owlvit.test_modeling_owlvit.OwlViTVisionModelTest testMethod=test_model_outputs_equivalence>
def test_model_outputs_equivalence(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
def set_nan_tensor_to_zero(t):
t[t != t] = 0
return t
def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}):
with torch.no_grad():
tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)
dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple()
def recursive_check(tuple_object, dict_object):
if isinstance(tuple_object, (List, Tuple)):
for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif isinstance(tuple_object, Dict):
for tuple_iterable_value, dict_iterable_value in zip(
tuple_object.values(), dict_object.values()
):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif tuple_object is None:
return
else:
self.assertTrue(
torch.allclose(
set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5
),
msg=(
"Tuple and dict output are not equal. Difference:"
f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:"
f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has"
f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}."
),
)
recursive_check(tuple_output, dict_output)
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
tuple_inputs = self._prepare_for_class(inputs_dict, model_class)
dict_inputs = self._prepare_for_class(inputs_dict, model_class)
> check_equivalence(model, tuple_inputs, dict_inputs)
tests/test_modeling_common.py:1548:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_common.py:1512: in check_equivalence
tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)
/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/owlvit/modeling_owlvit.py:950: in forward
return self.vision_model(
/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl
return forward_call(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = OwlViTVisionTransformer(
(embeddings): OwlViTVisionEmbeddings(
(patch_embedding): Conv2d(3, 32, kernel_size=(2, ..., elementwise_affine=True)
)
)
)
(post_layernorm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
pixel_values = tensor([[[[0.6554, 0.4061, 0.0338, ..., 0.4825, 0.8356, 0.8248],
[0.3508, 0.3514, 0.2522, ..., 0.1101, 0.8...07, 0.7844, 0.0197, ..., 0.9217, 0.2872, 0.7545],
[0.6380, 0.8504, 0.1550, ..., 0.4501, 0.0423, 0.5167]]]])
output_attentions = False, output_hidden_states = False, return_dict = False
@add_start_docstrings_to_model_forward(OWLVIT_VISION_INPUTS_DOCSTRING)
@replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=OwlViTVisionConfig)
def forward(
self,
pixel_values: torch.FloatTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPooling]:
r"""
Returns:
"""
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
hidden_states = self.embeddings(pixel_values)
hidden_states = self.pre_layernorm(hidden_states)
encoder_outputs = self.encoder(
inputs_embeds=hidden_states,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
last_hidden_state = encoder_outputs[0]
pooled_output = last_hidden_state[:, 0, :]
pooled_output = self.post_layernorm(pooled_output)
return BaseModelOutputWithPooling(
last_hidden_state=last_hidden_state,
pooler_output=pooled_output,
> hidden_states=encoder_outputs.hidden_states,
attentions=encoder_outputs.attentions,
)
E AttributeError: 'tuple' object has no attribute 'hidden_states'
src/transformers/models/owlvit/modeling_owlvit.py:903: AttributeError
tests/models/owlvit/test_modeling_owlvit.py ⨯sssss✓s✓✓✓✓✓ss✓✓✓✓sssss✓✓✓s✓sss✓✓✓✓✓✓✓✓✓✓✓s✓✓✓s✓✓sssss✓s✓✓✓✓✓ss✓✓ 47% ████▋
✓✓sssss✓s✓sss✓✓✓✓✓✓✓✓✓s✓s✓✓✓ss✓ 63% ██████▍
――――――――――――――――――――――――――――――――――――――― OwlViTModelTest.test_model_outputs_equivalence ―――――――――――――――――――――――――――――――――――――――
self = <tests.models.owlvit.test_modeling_owlvit.OwlViTModelTest testMethod=test_model_outputs_equivalence>
def test_model_outputs_equivalence(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
def set_nan_tensor_to_zero(t):
t[t != t] = 0
return t
def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}):
with torch.no_grad():
tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)
dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple()
def recursive_check(tuple_object, dict_object):
if isinstance(tuple_object, (List, Tuple)):
for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif isinstance(tuple_object, Dict):
for tuple_iterable_value, dict_iterable_value in zip(
tuple_object.values(), dict_object.values()
):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif tuple_object is None:
return
else:
self.assertTrue(
torch.allclose(
set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5
),
msg=(
"Tuple and dict output are not equal. Difference:"
f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:"
f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has"
f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}."
),
)
recursive_check(tuple_output, dict_output)
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
tuple_inputs = self._prepare_for_class(inputs_dict, model_class)
dict_inputs = self._prepare_for_class(inputs_dict, model_class)
> check_equivalence(model, tuple_inputs, dict_inputs)
tests/test_modeling_common.py:1548:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_common.py:1512: in check_equivalence
tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)
/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/owlvit/modeling_owlvit.py:1132: in forward
vision_outputs = self.vision_model(
/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl
return forward_call(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = OwlViTVisionTransformer(
(embeddings): OwlViTVisionEmbeddings(
(patch_embedding): Conv2d(3, 32, kernel_size=(2, ..., elementwise_affine=True)
)
)
)
(post_layernorm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
pixel_values = tensor([[[[0.4672, 0.5573, 0.4972, ..., 0.3060, 0.1213, 0.4710],
[0.1233, 0.0373, 0.8195, ..., 0.5669, 0.8...20, 0.2224, 0.6059, ..., 0.2634, 0.5912, 0.3576],
[0.1761, 0.1272, 0.9066, ..., 0.9368, 0.1087, 0.4829]]]])
output_attentions = False, output_hidden_states = False, return_dict = False
@add_start_docstrings_to_model_forward(OWLVIT_VISION_INPUTS_DOCSTRING)
@replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=OwlViTVisionConfig)
def forward(
self,
pixel_values: torch.FloatTensor,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPooling]:
r"""
Returns:
"""
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
hidden_states = self.embeddings(pixel_values)
hidden_states = self.pre_layernorm(hidden_states)
encoder_outputs = self.encoder(
inputs_embeds=hidden_states,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
last_hidden_state = encoder_outputs[0]
pooled_output = last_hidden_state[:, 0, :]
pooled_output = self.post_layernorm(pooled_output)
return BaseModelOutputWithPooling(
last_hidden_state=last_hidden_state,
pooler_output=pooled_output,
> hidden_states=encoder_outputs.hidden_states,
attentions=encoder_outputs.attentions,
)
E AttributeError: 'tuple' object has no attribute 'hidden_states'
src/transformers/models/owlvit/modeling_owlvit.py:903: AttributeError
```<|||||>@alaradirik
https://github.com/huggingface/transformers/blob/bb61e30962c0a6cf866e7e8e5a75b7d86d8589c2/src/transformers/models/owlvit/modeling_owlvit.py
From the file, it looks like the latest version in this PR is different from the version that produced the error you provided above.
See
https://github.com/huggingface/transformers/blob/bb61e30962c0a6cf866e7e8e5a75b7d86d8589c2/src/transformers/models/owlvit/modeling_owlvit.py#L893-L902
where there is
```
if not return_dict:
return (last_hidden_state, pooled_output) + encoder_outputs[1:]
```
but not in your error message.<|||||>> Sure @alaradirik , I'll go through the review comments and make the changes. And actually, on my local, I'm able to get the test cases passed, on running
>
> ```
> RUN_SLOW=1 pytest tests/models/owlvit/test_modeling_owlvit.py
> ```
>
> I'll check once more
>
> Hi @ydshieh , I'm not able to refresh the permission for some reason, I get an error `Something Unexpected Happened` on going to `https://app.circleci.com/settings/user` I don't have a CircleCI account linked to my Github actually, not sure how to reset the token and run the tests
I triggered it :) <|||||>@ydshieh great, thank you! I hadn't pulled the latest changes on this branch.
@unography we can merge this PR once the remaining minor issues are addressed, thank you again for the clean implementation :)<|||||>Hi @unography! Just wanted to ask if you'll have time to work on this PR this week?
The OWL-ViT paper will be presented at ECCV in less than 2 weeks and I can work on the remaining issues / code clean-ups if you don't have the time.<|||||>Hi @alaradirik and @unography , thanks for working on this.
I hope this helps development, I was playing with this branch using the following code
```python
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
image = Image.open("./images/image.jpeg").convert("RGB")
query = Image.open("./images/query.png").convert("RGB")
inputs = processor(query_image=query, images=image, return_tensors="pt")
outputs = model(**inputs)
```
Unfortunately, it gives the following error
```
File "/workspace/demo-hf.py", line 17, in <module>
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/owlvit/modeling_owlvit.py", line 1578, in forward
query_embeds = self.embed_image_query(query_image_feats, query_feature_map)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/owlvit/modeling_owlvit.py", line 1404, in embed_image_query
mean_sim = torch.einsum("d,id->i", mean_embeds, selected_embeddings)
File "/opt/conda/lib/python3.8/site-packages/torch/functional.py", line 360, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: einsum(): the number of subscripts in the equation (2) does not match the number of dimensions (1) for operand 1 and no ellipsis was given
```
This is quite surprising since the tests are green. The code fails inside `OwlViTForObjectDetection.embed_image_query` (see the stack trace above). I lack deep knowledge of the model/original code base, but to embed an image query should we just take the first token of the last layer?
The code fails at line `1404` since (with my example) both inputs are 1D tensors:
```python
mean_sim = torch.einsum("d,id->i", mean_embeds, selected_embeddings) # both are 1D tensors
```
I am also curious to ask what exactly is going on in this function; maybe I am able to help somehow.
Thanks!
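For reference, a minimal reproduction of the shape requirement behind that `einsum` call (tensors here are random placeholders):

```python
import torch

mean_embeds = torch.randn(768)             # shape (d,)
selected_embeddings = torch.randn(5, 768)  # shape (i, d), i.e. 2D
sims = torch.einsum("d,id->i", mean_embeds, selected_embeddings)  # shape (5,)

# If selected_embeddings collapses to 1D (e.g. a single selected box), the same call
# raises the "number of subscripts ... does not match the number of dimensions" error;
# keeping the box axis, e.g. via .unsqueeze(0), avoids it.
single = torch.randn(768)
# torch.einsum("d,id->i", mean_embeds, single)  # RuntimeError
```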
<|||||>> Hi @unography! Just wanted to ask if you'll have time to work on this PR this week?
>
> The OWL-ViT paper will be presented at ECCV in less than 2 weeks and I can work on the remaining issues / code clean-ups if you don't have the time.
Hi @alaradirik, I'll work on this today, but if I'm unable to continue I'll ping you and let you know. My apologies for the delay with this.<|||||>Hi @alaradirik, I've added a few fixes, but unfortunately, I'm unable to find time to contribute more right now. I hope my current changes are clear enough that you can update with remaining changes <|||||>> ```python
> from PIL import Image
> import torch
>
> from transformers import OwlViTProcessor, OwlViTForObjectDetection
>
> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
>
> image = Image.open("./images/image.jpeg").convert("RGB")
> query = Image.open("./images/query.png").convert("RGB")
>
> inputs = processor(query_image=query, images=image, return_tensors="pt")
>
> outputs = model(**inputs)
> ```
Hi @FrancescoSaverioZuppichini, sorry for my late reply and thanks for your input! Could you be running a previous version of the code? If not, could you add your system info? I can't replicate this error on my local.<|||||>> Hi @alaradirik, I've added a few fixes, but unfortunately, I'm unable to find time to contribute more right now. I hope my current changes are clear enough that you can update with remaining changes
Hi @unography, sorry for the delay! And of course, I'd be happy to finish this up, could you add me to your transformers repo as a collaborator?<|||||>I think the PR is good to go, @NielsRogge @sgugger could you do a final review when you have the time?
I will update the model card and the OWL-ViT notebooks demo with this [PR](https://github.com/huggingface/notebooks/pull/256) after this PR is merged.<|||||>> > ```python
> > from PIL import Image
> > import torch
> >
> > from transformers import OwlViTProcessor, OwlViTForObjectDetection
> >
> > processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
> > model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
> >
> > image = Image.open("./images/image.jpeg").convert("RGB")
> > query = Image.open("./images/query.png").convert("RGB")
> >
> > inputs = processor(query_image=query, images=image, return_tensors="pt")
> >
> > outputs = model(**inputs)
> > ```
>
> Hi @FrancescoSaverioZuppichini, sorry for my late reply and thanks for your input! Could you be running a previous version of the code? If not, could you add your system info? I can't replicate this error on my local.
Yes :) That was fixed by @unography with a later commit<|||||>So I've tested the current branch using the same image and (more or less) the same query from the [official notebook (that uses `ViT-B/16`)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#inference-playground).
Params:
`min_confidence = 0.6`
`nms_threshold = 0.3`
<img width="1512" alt="Screenshot 2022-10-21 at 15 55 29" src="https://user-images.githubusercontent.com/15908060/197212918-2170543d-6cdc-41f0-a426-f80e0212e14e.png">
I am not sure how to replicate the parameters with your implementation, but I am not able to get anything close to it
```python
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch16")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")
image = Image.open("./images/image.jpeg").convert("RGB")
query = Image.open("./images/query.png").convert("RGB")
inputs = processor(query_image=query, images=image, return_tensors="pt")
outputs = model(**inputs)
w, h = image.size
outputs = processor.post_process(outputs, torch.tensor([[h, w]]))
print(outputs[0]['scores'].mean()) # 9.3531e-08
print(torch.where(outputs[0]['scores'] > .3)) # empty
```
Pasted below `image` and `query`

<img width="209" alt="query" src="https://user-images.githubusercontent.com/15908060/197228062-33044dd2-17aa-4419-aee6-a811935ea7ce.png">
Hope it helps :)
<|||||>> So I've tested the current branch using the same image and (more or less) the same query from the [official notebook (that uses `ViT-B/16`)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#inference-playground).
>
> Params: `min_confidence = 0.6` `nms_threshold = 0.3`
>
> <img alt="Screenshot 2022-10-21 at 15 55 29" width="1512" src="https://user-images.githubusercontent.com/15908060/197212918-2170543d-6cdc-41f0-a426-f80e0212e14e.png">
>
> I am not sure how to replicate the parameters with your implementation, but I am not able to get anything close to it
>
> ```python
> from PIL import Image
> import torch
>
> from transformers import OwlViTProcessor, OwlViTForObjectDetection
>
> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch16")
> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")
>
> image = Image.open("./images/image.jpeg").convert("RGB")
> query = Image.open("./images/query.png").convert("RGB")
>
> inputs = processor(query_image=query, images=image, return_tensors="pt")
> outputs = model(**inputs)
>
> w, h = image.size
> outputs = processor.post_process(outputs, torch.tensor([[h, w]]))
> print(outputs[0]['scores'].mean()) # 9.3531e-08
> print(torch.where(outputs[0]['scores'] > .3)) # empty
> ```
>
> Pasted below `image` and `query`
>
>  <img alt="query" width="209" src="https://user-images.githubusercontent.com/15908060/197228062-33044dd2-17aa-4419-aee6-a811935ea7ce.png">
>
> Hope it helps :)
Same problem. A working demo would be helpful to users.<|||||>Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this? Or shall I add image guided object detection as a method of the `OwlViTForObjectDetection` class?<|||||>> Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this?
Why? If the model is structured the same way, the weights should be loaded from the checkpoint with no issue.
<|||||>> > Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this?
>
> Why? If the model is structured the same way, the weights should be loaded from the checkpoint with no issue.
I meant creating separate repos for the same checkpoint to load the `OwlViTForImageGuidedObjectDetection` head but yes, it seems doable.
With that said, I ended up restructuring image guided detection as a method of the `OwlViTForObjectDetection` class since this is more consistent with the original work and `OwlViTForImageGuidedObjectDetection` would not be a trainable class as it just repurposes the pretrained zero-shot detection model.
I fixed the detection issues (redundant normalization of visual embeddings + lack of postprocessing) and created a separate [PR](https://github.com/huggingface/notebooks/pull/256) to update the [demo notebook](https://github.com/alaradirik/notebooks/blob/update-owlvit-demo/examples/zeroshot_object_detection_with_owlvit.ipynb).
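For readers landing on this thread later, a rough sketch of how the merged functionality is meant to be used (method and post-processor names as discussed here; exact argument names and defaults may differ from this branch):

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch16")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")

image = Image.open("image.jpeg").convert("RGB")
query = Image.open("query.png").convert("RGB")

inputs = processor(images=image, query_images=query, return_tensors="pt")
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_image_guided_detection(
    outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes
)
```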
@NielsRogge @sgugger could you re-review when you have the time?<|||||>Hi @alaradirik, sorry my notifications got messed up, I was able to go through the comments only now. Do I need to change anything for merging? Upstream url or anything else?<|||||>> Hi @alaradirik, sorry my notifications got messed up, I was able to go through the comments only now. Do I need to change anything for merging? Upstream url or anything else?
Hey @unography no problem at all! I'm about to merge a clean [PR](https://github.com/huggingface/transformers/pull/20136) with the correct upstream. Could you give me your email address so that I can add you as the co-author to my commits? <|||||>@alaradirik sure, this is my email - [email protected]<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,890 | closed | BART example does not produce expected masks | ### System Info
- `transformers` version: 4.21.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
### Who can help?
@sgugger @patil-suraj
The example was added by @duongna21 and @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a minimal example that I extracted from [the example](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py) (turned into functions, added example data).
```python
import math
from itertools import chain
from typing import Dict, List
import numpy as np
from transformers import AutoTokenizer, BatchEncoding, PreTrainedTokenizerBase
def shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
"""
Shift input ids one token to the right.
"""
shifted_input_ids = np.zeros_like(input_ids)
shifted_input_ids[:, 1:] = input_ids[:, :-1]
shifted_input_ids[:, 0] = decoder_start_token_id
shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)
return shifted_input_ids
def collate(examples: List[Dict[str, List[int]]], tokenizer: PreTrainedTokenizerBase, decoder_start_token_id=2, permute_sentence_ratio=1.0,
mask_ratio=0.3, poisson_lambda=3.5) -> BatchEncoding:
# convert list to dict and tensorize input
batch = BatchEncoding(
{k: np.array([examples[i][k] for i in range(len(examples))], dtype=int) for k, v in examples[0].items()}
)
batch["labels"] = batch["input_ids"].copy()
batch["decoder_input_ids"] = shift_tokens_right(
batch["labels"], tokenizer.pad_token_id, decoder_start_token_id
)
# permuting sentences
do_permute = False
if permute_sentence_ratio > 0.0:
batch["input_ids"] = permute_sentences(batch["input_ids"], tokenizer, permute_sentence_ratio=permute_sentence_ratio)
do_permute = True
# masking span of tokens (text infilling in the paper)
if mask_ratio:
batch["input_ids"], batch["labels"] = span_mask_tokens(
batch["input_ids"], batch["labels"], tokenizer, do_permute=do_permute, poisson_lambda=poisson_lambda, mask_ratio=mask_ratio
)
# ignore pad tokens
batch["attention_mask"] = (batch["input_ids"] != tokenizer.pad_token_id).astype(int)
batch["decoder_attention_mask"] = (batch["decoder_input_ids"] != tokenizer.pad_token_id).astype(int)
return batch
def permute_sentences(input_ids, tokenizer, permute_sentence_ratio=1.0):
"""
Shuffle sentences in each document.
"""
results = input_ids.copy()
# find end locations of sentences
end_sentence_mask = input_ids == tokenizer.pad_token_id
sentence_ends = np.argwhere(end_sentence_mask)
sentence_ends[:, 1] += 1
example_has_multiple_sentences, num_sentences = np.unique(sentence_ends[:, 0], return_counts=True)
num_sentences_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_sentences)}
num_to_permute = np.ceil(num_sentences * permute_sentence_ratio).astype(int)
num_to_permute_map = {
sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_to_permute)
}
sentence_ends = np.split(sentence_ends[:, 1], np.unique(sentence_ends[:, 0], return_index=True)[1][1:])
sentence_ends_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, sentence_ends)}
for i in range(input_ids.shape[0]):
if i not in example_has_multiple_sentences:
continue
substitutions = np.random.permutation(num_sentences_map[i])[: num_to_permute_map[i]]
ordering = np.arange(0, num_sentences_map[i])
ordering[substitutions] = substitutions[np.random.permutation(num_to_permute_map[i])]
# write shuffled sentences into results
index = 0
for j in ordering:
sentence = input_ids[i, (sentence_ends_map[i][j - 1] if j > 0 else 0) : sentence_ends_map[i][j]]
results[i, index : index + sentence.shape[0]] = sentence
index += sentence.shape[0]
return results
def span_mask_tokens(input_ids, labels, tokenizer, do_permute=True, poisson_lambda=3.5, mask_ratio=0.3):
"""
Sampling text spans with span lengths drawn from a Poisson distribution and masking them.
"""
special_tokens_mask_labels = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
special_tokens_mask_inputs = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in input_ids.tolist()
]
special_tokens_mask_labels = np.array(special_tokens_mask_labels, dtype=bool)
special_tokens_mask_inputs = np.array(special_tokens_mask_inputs, dtype=bool)
# determine how many tokens we need to mask in total
is_token_mask = ~(input_ids == tokenizer.pad_token_id) & ~special_tokens_mask_inputs
num_tokens_to_mask = int(math.ceil(is_token_mask.astype(float).sum() * mask_ratio))
if num_tokens_to_mask == 0:
return input_ids, labels
# generate a sufficient number of span lengths
span_lengths = np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))
while np.cumsum(span_lengths, 0)[-1] < num_tokens_to_mask:
span_lengths = np.concatenate(
[span_lengths, np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))]
)
# remove all spans of length 0
# note that BART inserts additional mask tokens where length == 0,
# which we do not implement for now as it adds additional complexity
span_lengths = span_lengths[span_lengths > 0]
# trim to about num_tokens_to_mask tokens
cutoff_idx = np.argmin(np.abs(np.cumsum(span_lengths, 0) - num_tokens_to_mask)) + 1
span_lengths = span_lengths[:cutoff_idx]
# randomly choose starting positions for masking
token_indices = np.argwhere(is_token_mask == 1)
span_starts = np.random.permutation(token_indices.shape[0])[: span_lengths.shape[0]]
# prepare mask
masked_indices = np.array(token_indices[span_starts])
mask = np.full_like(input_ids, fill_value=False)
# mask starting positions
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
# fill up spans
max_index = input_ids.shape[1] - 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
while np.any(remaining):
masked_indices[remaining, 1] += 1
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
# place the mask tokens
mask[np.where(special_tokens_mask_inputs)] = False
input_ids[np.where(mask)] = tokenizer.mask_token_id
if not do_permute:
labels[np.where(mask == 0)] = -100
else:
labels[np.where(special_tokens_mask_labels)] = -100
# remove mask tokens that are not starts of spans
to_remove = (mask == 1) & np.roll((mask == 1), 1, 1)
new_input_ids = np.full_like(input_ids, fill_value=tokenizer.pad_token_id)
for i, example in enumerate(input_ids):
new_example = example[~to_remove[i]]
new_input_ids[i, : new_example.shape[0]] = new_example
return new_input_ids, labels
def group_texts(examples, max_seq_length):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= max_seq_length:
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
def main():
pass
if __name__ == '__main__':
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
text = [f"I have never seen a man eating such a large cookie in one sitting.{tokenizer.pad_token}Wow!",
"In the evening he often breaks into the bakery to much on cookies and milk"]
# For the example we first need to work with a batch (because group_texts is batched)
    # and then convert it back to a list of samples (to test the collator)
encoded = tokenizer(text, padding=True, return_attention_mask=False)
max_len = max(len(e) for e in encoded["input_ids"])
encoded = group_texts(encoded, max_len//2)
print(f"input_ids after group (len: {max_len//2})", tokenizer.batch_decode(encoded["input_ids"]))
# Convert batch back to list of samples and collate
pre_collate = [{k: seq} for k, v in encoded.items() for seq in v]
expected_mask_ratio = 0.3
batch = collate(pre_collate, tokenizer, mask_ratio=expected_mask_ratio)
print("input_ids after collate", tokenizer.batch_decode(batch["input_ids"]))
# Calculate mask coverage
unique, counts = np.unique(batch["input_ids"], return_counts=True)
maskcounts = dict(zip(unique, counts))[tokenizer.mask_token_id]
# Count the number of tokens but exclude special tokens
ntokens = sum([len([t for t in e if t not in tokenizer.all_special_ids]) for e in encoded["input_ids"]])
print(f"produced masks coverage (expected {expected_mask_ratio})",
maskcounts / ntokens, f"({maskcounts}/{ntokens})")
ntokens_after_collate = sum([len([t for t in e if t not in tokenizer.all_special_ids]) for e in batch["input_ids"]])
print(f"no. tokens before {ntokens}, no. tokens after collate {ntokens_after_collate}")
```
The output will be something like this:
```
input_ids after group (len: 10) ['<s>I have never seen a man eating such a', ' large cookie in one sitting.<pad>Wow!</s>', '<s>In the evening he often breaks into the bakery', ' to much on cookies and milk</s><pad><pad><pad>']
input_ids after collate ['<s>I have never seen a<mask> eating<mask><pad>', ' large cookie<mask> one sitting.<pad>Wow!</s>', '<s>In the evening he<mask> the bakery<pad><pad>', '<pad><pad> to much on cookies and milk</s><pad>']
produced masks coverage (expected 0.3) 0.125 (4/32)
no. tokens before 32, no. tokens after collate 25
```
### Expected behavior
Am I wrong in expecting a masking coverage that is closer to 0.3 (here it is only 0.125)?
In addition, I find it very odd that the last sequence suddenly has _prepended_ padding tokens?! I found that this does not always happen (there is some random sampling happening, so results vary), but I don't think it should ever happen. | 09-04-2022 14:20:42 | 09-04-2022 14:20:42 | It seems it was a combination of two things: a too-short sequence length and a flawed way of calculating the actual coverage (it did not account for span masks). Revised example that yields the expected results:
```python
import math
from itertools import chain
from typing import Dict, List
import numpy as np
from transformers import AutoTokenizer, BatchEncoding, PreTrainedTokenizerBase
def shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
"""
Shift input ids one token to the right.
"""
shifted_input_ids = np.zeros_like(input_ids)
shifted_input_ids[:, 1:] = input_ids[:, :-1]
shifted_input_ids[:, 0] = decoder_start_token_id
shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)
return shifted_input_ids
def collate(examples: List[Dict[str, List[int]]], tokenizer: PreTrainedTokenizerBase, decoder_start_token_id=2, permute_sentence_ratio=1.0,
mask_ratio=0.3, poisson_lambda=3.5) -> BatchEncoding:
# convert list to dict and tensorize input
batch = BatchEncoding(
{k: np.array([examples[i][k] for i in range(len(examples))], dtype=int) for k, v in examples[0].items()}
)
batch["labels"] = batch["input_ids"].copy()
batch["decoder_input_ids"] = shift_tokens_right(
batch["labels"], tokenizer.pad_token_id, decoder_start_token_id
)
# permuting sentences
do_permute = False
if permute_sentence_ratio > 0.0:
batch["input_ids"] = permute_sentences(batch["input_ids"], tokenizer, permute_sentence_ratio=permute_sentence_ratio)
do_permute = True
# masking span of tokens (text infilling in the paper)
if mask_ratio:
batch["input_ids"], batch["labels"] = span_mask_tokens(
batch["input_ids"], batch["labels"], tokenizer, do_permute=do_permute, poisson_lambda=poisson_lambda, mask_ratio=mask_ratio
)
# ignore pad tokens
batch["attention_mask"] = (batch["input_ids"] != tokenizer.pad_token_id).astype(int)
batch["decoder_attention_mask"] = (batch["decoder_input_ids"] != tokenizer.pad_token_id).astype(int)
return batch
def permute_sentences(input_ids, tokenizer, permute_sentence_ratio=1.0):
"""
Shuffle sentences in each document.
"""
results = input_ids.copy()
# find end locations of sentences
end_sentence_mask = input_ids == tokenizer.pad_token_id
sentence_ends = np.argwhere(end_sentence_mask)
sentence_ends[:, 1] += 1
example_has_multiple_sentences, num_sentences = np.unique(sentence_ends[:, 0], return_counts=True)
num_sentences_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_sentences)}
num_to_permute = np.ceil(num_sentences * permute_sentence_ratio).astype(int)
num_to_permute_map = {
sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_to_permute)
}
sentence_ends = np.split(sentence_ends[:, 1], np.unique(sentence_ends[:, 0], return_index=True)[1][1:])
sentence_ends_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, sentence_ends)}
for i in range(input_ids.shape[0]):
if i not in example_has_multiple_sentences:
continue
substitutions = np.random.permutation(num_sentences_map[i])[: num_to_permute_map[i]]
ordering = np.arange(0, num_sentences_map[i])
ordering[substitutions] = substitutions[np.random.permutation(num_to_permute_map[i])]
# write shuffled sentences into results
index = 0
for j in ordering:
sentence = input_ids[i, (sentence_ends_map[i][j - 1] if j > 0 else 0) : sentence_ends_map[i][j]]
results[i, index : index + sentence.shape[0]] = sentence
index += sentence.shape[0]
return results
def span_mask_tokens(input_ids, labels, tokenizer, do_permute=True, poisson_lambda=3.5, mask_ratio=0.3):
"""
Sampling text spans with span lengths drawn from a Poisson distribution and masking them.
"""
special_tokens_mask_labels = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
special_tokens_mask_inputs = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in input_ids.tolist()
]
special_tokens_mask_labels = np.array(special_tokens_mask_labels, dtype=bool)
special_tokens_mask_inputs = np.array(special_tokens_mask_inputs, dtype=bool)
# determine how many tokens we need to mask in total
is_token_mask = ~(input_ids == tokenizer.pad_token_id) & ~special_tokens_mask_inputs
num_tokens_to_mask = int(math.ceil(is_token_mask.astype(float).sum() * mask_ratio))
if num_tokens_to_mask == 0:
return input_ids, labels
# generate a sufficient number of span lengths
span_lengths = np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))
while np.cumsum(span_lengths, 0)[-1] < num_tokens_to_mask:
span_lengths = np.concatenate(
[span_lengths, np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))]
)
# remove all spans of length 0
# note that BART inserts additional mask tokens where length == 0,
# which we do not implement for now as it adds additional complexity
span_lengths = span_lengths[span_lengths > 0]
# trim to about num_tokens_to_mask tokens
cutoff_idx = np.argmin(np.abs(np.cumsum(span_lengths, 0) - num_tokens_to_mask)) + 1
span_lengths = span_lengths[:cutoff_idx]
# randomly choose starting positions for masking
token_indices = np.argwhere(is_token_mask == 1)
span_starts = np.random.permutation(token_indices.shape[0])[: span_lengths.shape[0]]
# prepare mask
masked_indices = np.array(token_indices[span_starts])
mask = np.full_like(input_ids, fill_value=False)
# mask starting positions
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
# fill up spans
max_index = input_ids.shape[1] - 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
while np.any(remaining):
masked_indices[remaining, 1] += 1
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
# place the mask tokens
mask[np.where(special_tokens_mask_inputs)] = False
input_ids[np.where(mask)] = tokenizer.mask_token_id
if not do_permute:
labels[np.where(mask == 0)] = -100
else:
labels[np.where(special_tokens_mask_labels)] = -100
# remove mask tokens that are not starts of spans
to_remove = (mask == 1) & np.roll((mask == 1), 1, 1)
new_input_ids = np.full_like(input_ids, fill_value=tokenizer.pad_token_id)
for i, example in enumerate(input_ids):
new_example = example[~to_remove[i]]
new_input_ids[i, : new_example.shape[0]] = new_example
return new_input_ids, labels
def group_texts(examples, max_seq_length):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= max_seq_length:
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
def main():
pass
def get_n_mask_tokens(tokens, mask_token_id):
unique, counts = np.unique(tokens, return_counts=True)
counter = dict(zip(unique, counts))
return counter[mask_token_id]
def get_n_nonspecial_tokens(tokens, all_special_ids):
return len([t for t in tokens if t not in all_special_ids])
if __name__ == '__main__':
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
text = "On september 2nd the Group of Seven (G7) countries launched a new attempt to regain the advantage in the" \
" West’s energy confrontation with Russia: imposing a price cap on purchases of Russian oil and oil" \
" products, probably to take effect on December 5th. "
encoded = tokenizer(text)
input_ids = encoded["input_ids"]
n_input_toks = get_n_nonspecial_tokens(input_ids, tokenizer.all_special_ids)
print("DECODED INPUT", tokenizer.decode(input_ids))
processed = collate([encoded], tokenizer)
input_ids_out = processed["input_ids"].squeeze().tolist()
n_output_toks = get_n_nonspecial_tokens(input_ids_out, tokenizer.all_special_ids)
print("DECODED OUTPUT", tokenizer.decode(input_ids_out))
n_masks_out = get_n_mask_tokens(input_ids_out, tokenizer.mask_token_id) + (n_input_toks-n_output_toks)
print(f"MASK RATIO ({n_masks_out}/{n_input_toks})", n_masks_out/n_input_toks)
```
Output:
```
DECODED INPUT <s>On september 2nd the Group of Seven (G7) countries launched a new attempt to regain the advantage in the West’s energy confrontation with Russia: imposing a price cap on purchases of Russian oil and oil products, probably to take effect on December 5th. </s>
DECODED OUTPUT <s>On september 2nd the<mask>) countries launched a new attempt to regain the advantage in the West’s<mask> price cap on purchases of Russian oil and oil products, probably to take effect on December<mask>th. </s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
MASK RATIO (17/57) 0.2982456140350877
```<|||||>Hey @BramVanroy! Thanks for posting this issue with such clear and concise code-snippets! In your opinion, would it be worth implementing these changes in the example script? Or are they specific to your use case? Perhaps you could summarise quickly the changes you made to computing the actual coverage! Thanks!<|||||>Hey @sanchit-gandhi. The issue was with how I calculated the coverage for reporting, so there was a mistake on my end not on yours! |
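To make the corrected coverage computation explicit (counts read off the decoded strings and the printed ratio above):

```python
# Each masked span is collapsed into a single <mask> token, so the visible <mask>
# tokens undercount the masked positions; the tokens dropped from the sequence
# have to be added back in.
n_input_tokens = 57    # non-special tokens before masking
n_output_tokens = 43   # non-special tokens left after masking
visible_masks = 3      # <mask> tokens in the decoded output

coverage = (visible_masks + (n_input_tokens - n_output_tokens)) / n_input_tokens
print(coverage)  # 17/57 ≈ 0.298, close to the requested mask_ratio of 0.3
```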
transformers | 18,889 | closed | Larger Logits != Larger Probability | ### System Info
transformers version: 4.22.0.dev0
Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10
Python version: 3.8.13
Huggingface_hub version: 0.8.1
PyTorch version (GPU?): 1.12.1+cu113 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
### Who can help?
@gante @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BartTokenizer,BartForConditionalGeneration
model_path = "/data/pretrained_model/bart_base"
toker = BartTokenizer.from_pretrained(model_path)
model = BartForConditionalGeneration.from_pretrained(model_path)
input_tokens = ["what do you think it ? huggingface is a great library. And I enjoy it very much",
"transformers is so good"]
batch_size = 2
num_beams = 10
max_length = 5
num_return_sequences = 5
input_ids = toker(input_tokens,return_tensors='pt',padding=True).input_ids
output = model.generate(input_ids,max_length=max_length,num_beams=num_beams,num_return_sequences=num_return_sequences,
return_dict_in_generate=True,output_scores=True)
def get_logits_and_probs(output,num_return_sequence,batch_size,eos_token_id):
"""
using for-loop to get positional-wise logits and probability
"""
import torch
total = num_return_sequence * batch_size
token_logits = [[] for _ in range(total)]
token_probs = [[] for _ in range(total)]
continue_or_not = [True for _ in range(total)]
for time_step in range(len(output.scores)):
cur_scores = output.scores[time_step] ## num_beam,vocab_size
for idx in range(total):
cur_beam = output.beam_indices[idx][time_step]
cur_token = output.sequences[idx][time_step+1] ## decoder_start_token_id
if continue_or_not[idx]:
token_probs[idx].append(torch.softmax(cur_scores[cur_beam],dim=-1)[cur_token].item())
token_logits[idx].append(cur_scores[cur_beam][cur_token].item())
if cur_token==eos_token_id:
continue_or_not[idx]=False
return token_logits,token_probs
token_logits,token_probs = get_logits_and_probs(output,num_return_sequences,batch_size,toker.eos_token_id)
def avg(ls):
return sum(ls)/len(ls)
## check if my get_logits_and_probs function is correct by comparing it with output.sequences_scores
for idx in range(num_return_sequences*batch_size):
if idx == num_return_sequences:
print("*"*20)
print(avg(token_logits[idx]),output.sequences_scores[idx].item())
print("probability")
for idx in range(num_return_sequences*batch_size):
if idx == num_return_sequences:
print("*"*20)
print(avg(token_probs[idx]))
```


### Expected behavior
I find that the order given by beam search is determined by `sum(logits)` rather than `sum(probability)`. I am not sure whether this is correct: intuitively, the probability is a relative value that is comparable between tokens generated at different time steps, but the logits are not.
The example above shows that the 5th sequence given by beam search actually has a higher probability than the 2nd sequence. | 09-04-2022 10:25:48 | 09-04-2022 10:25:48 | Hi @Hannibal046 👋
The sentences are sorted by the sum of the scores (i.e. logits with potential modifiers on top), as you wrote. However, if you want to map it back to probabilities, the operator to apply is the product, not the sum :) The sum of logarithms corresponds to the logarithm of the product. Applying the product instead, you'll see the examples are correctly sorted.
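A quick toy illustration of that point (a standalone sketch with made-up numbers, not the model output above): ranking candidates by the sum of their per-token log-probabilities gives the same order as ranking them by the product of the probabilities.
```python
import torch

# Two hypothetical candidate sequences with made-up per-token probabilities.
probs_a = torch.tensor([0.9, 0.1, 0.8])
probs_b = torch.tensor([0.5, 0.5, 0.5])

print(probs_a.log().sum(), probs_b.log().sum())  # tensor(-2.6311), tensor(-2.0794)
print(probs_a.prod(), probs_b.prod())            # tensor(0.0720), tensor(0.1250) -- same ordering
```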
For more info, see [our blog post](https://huggingface.co/blog/how-to-generate)
___________________________
Since the original question is solved, I'm closing the issue. Feel free to reopen if you find related bugs! |
transformers | 18,888 | closed | Longformer TF int32 vs int64 | ### System Info
Transformers Version: 4.20.0.dev0
Ubuntu 20
Python 3.8
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
@ibeltagy
I am trying an example of fine-tuning longformer and got the error of
`TypeError: Input 'updates' of 'TensorScatterUpdate' Op has type int32 that does not match type int64 of argument 'tensor'.`
Not sure what's going on. Here is my code example. Any help would be great:
```python
from transformers import LongformerTokenizer, LongformerTokenizerFast, TFLongformerForSequenceClassification
from datasets import Dataset
import tensorflow as tf
import pickle
import numpy as np
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2, gradient_checkpointing=True)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
'label': np.array([0, 0, 1], dtype=np.int64)}
dataset = Dataset.from_dict(my_dict)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = small_train_dataset.to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids"],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=8,
)
model.fit(tf_train_dataset, batch_size=1)
```
### Expected behavior
Not giving the error | 09-04-2022 06:24:18 | 09-04-2022 06:24:18 | I believe you'll find the solution in [#13632](https://github.com/huggingface/transformers/issues/13632)
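(Hedged aside, not verified against the fix discussed in #13632: a generic workaround for this kind of TF integer-dtype mismatch is to cast the dataset's integer features to a single dtype before calling `fit`, e.g.:)
```python
import tensorflow as tf

# Untested sketch: cast the int64 features produced by to_tf_dataset to int32 so
# they match the int32 tensors the model builds internally.
tf_train_dataset = tf_train_dataset.map(
    lambda features, labels: ({k: tf.cast(v, tf.int32) for k, v in features.items()}, labels)
)
```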
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,887 | closed | Incorrect size of input for 1st strided window length in `Perplexity of fixed-length models` | ### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): 2.6.4 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
### Who can help?
@sgugger, @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. [The example script for finding the perplexity of fixed-length models](https://huggingface.co/docs/transformers/perplexity) using strided windows takes a shorter window size for the 1st set of inputs.
For a demo, let us take a subset of the test data in the script and print the `begin_loc`, `end_loc`, and `trg_len`. The example script picks a window of [0: 512] for the `input_ids` in the 1st pass and then picks [0: 1024] in the 2nd pass. The `input_ids` in the 1st pass are shorter than expected: the window should have been [0: 1024] in the 1st pass itself, as the model's max length is 1024.
This leads to a higher overall PPL, as the output in the 2nd pass gets a smaller context window. This can be seen in the shared notebook, which prints these stats for the **example script** and the **proposed script** for comparison: https://www.kaggle.com/code/ekagra/perplexity-contrib-small?scriptVersionId=104864785
2. The PPL for GPT2-large when running the example script comes to 16.44 instead of ~~19.64~~ 16.53. Maybe an improvement was made to the tokenizer or to the GPT2 model definition after the example script was written, which improves the PPL?
Sharing notebook for PPL computed on the entire test data for [example script](https://www.kaggle.com/code/ekagra/perplexity-contrib/notebook?scriptVersionId=104846069) and [proposed script](https://www.kaggle.com/code/ekagra/perplexity-contrib?scriptVersionId=104846058). Both approaches obtain the same PPL of 16.44 as we are taking average over a lot of numbers so the small difference in NLLs in the 1st 2 window size vanishes. The difference however can be seen in the previous notebook which finds PPL over a smaller test data.
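For concreteness, a rough sketch of the windowing I have in mind (illustrative only — it assumes `model` and `encodings` as defined on the doc page, and details may differ in the actual PR):
```python
import torch

max_length = model.config.n_positions  # 1024 for GPT-2
stride = 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)  # 1st window already spans the full context
    trg_len = end_loc - prev_end_loc                # only score the tokens new to this window
    input_ids = encodings.input_ids[:, begin_loc:end_loc]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100                 # mask out the overlapping context

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        nlls.append(outputs.loss * trg_len)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
```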
If this makes sense then I could raise a PR for this.
EDIT 1: replaced 19.64 with 16.53 which is the right metric to look for stride size of 512. | 09-04-2022 06:21:36 | 09-04-2022 06:21:36 | Yes, it looks like the PPL is not 19.64 as advertised. Would you like to make a PR with the suggested changes? It all looks good to me.<|||||>Sure @sgugger. On it. |
transformers | 18,886 | closed | While trying to train the seq2seq distillation model I am getting an error message that __init__() got an unexpected keyword argument 'weights_summary'. | ### System Info
Transformer version : 4.21.2
pytorch-lightning 1.7.4
Used colab to test
Please find below colab link
https://colab.research.google.com/drive/1INGpr5nV2qnb8wKf2FAc0VqgPQWSPFdZ#scrollTo=DC8Tj2qtlfiZ
### Who can help?
@sgugger
_No response_
### Information
- https://github.com/huggingface/transformers/blob/main/examples/research_projects/seq2seq-distillation/distillation.py
### Tasks
- XSUM dataset with seq2seq distillation
### Reproduction
Kindly try to train the below snippet.
!python /content/transformers/examples/research_projects/seq2seq-distillation/distillation.py \
--teacher facebook/bart-large-xsum \
--data_dir /content/transformers/examples/research_projects/seq2seq-distillation/xsum \
--tokenizer_name facebook/bart-large-xsum \
--student_decoder_layers 6 --student_encoder_layers 12 \
--freeze_encoder --freeze_embeds \
--learning_rate=3e-4 \
--do_train \
--do_predict \
--fp16 --fp16_opt_level=O1 \
--val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \
--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
--model_name_or_path IGNORED \
--alpha_hid=3. \
--train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 \
--sortish_sampler \
--num_train_epochs=6 \
--warmup_steps 500 \
--output_dir distilbart_xsum_12_6 \
--weights_summary None \
"$@"
### Expected behavior
Is it something related to the PyTorch Lightning version that I installed?
Error stacktrace:
Global seed set to 42
Traceback (most recent call last):
File "/content/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 454, in <module>
main(args)
File "/content/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 429, in main
logger=logger,
File "/content/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 387, in generic_train
**train_params,
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 2449, in from_argparse_args
return from_argparse_args(cls, args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/argparse.py", line 72, in from_argparse_args
return cls(**trainer_kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
| 09-04-2022 03:56:04 | 09-04-2022 03:56:04 | This example is not maintained anymore. It was written for an older version of PyTorch Lightning, so you probably need to downgrade to what was the last release at the time around its release (roughly 2 years ago).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,885 | closed | Can't find ‘romanian_postprocessing.md’ file | ### System Info
In this model card: https://huggingface.co/facebook/mbart-large-en-ro, it says ' Instructions in romanian_postprocessing.md'. But I cannot find romanian_postprocessing.md.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A | 09-03-2022 23:32:00 | 09-03-2022 23:32:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,884 | closed | Generating with Flax fails when using Causal Language models | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
- Using GPU in script?: Yes (Nvidia A100)
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten
@Narsil
@patil-suraj
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code snippet:
```python
from jax import numpy as jnp
import transformers
model = transformers.FlaxAutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
sentence = "Paris is one of the densest populated areas in Europe."
input_ids = tokenizer(sentence, return_tensors="jax")["input_ids"]
model.generate(input_ids)
```
### Expected behavior
Expected behavior is that the model generates completions for the given input id.
Observed behavior is that the following error is thrown:
```
File ~/.conda/envs/lm-extraction/lib/python3.10/site-packages/jax/_src/lax/lax.py:4577, in _check_same_dtypes(name, ignore_fp_precision, *ttypes)
4575 equiv = _JNP_FUNCTION_EQUIVALENTS[name]
4576 msg += f" (Tip: jnp.{equiv} is a similar function that does automatic type promotion on inputs)."
-> 4577 raise TypeError(msg.format(name, ", ".join(map(str, types))))
TypeError: lax.dynamic_update_slice requires arguments to have the same dtypes, got int32, float32.
```
This seems to be a type mismatch error | 09-03-2022 21:28:11 | 09-03-2022 21:28:11 | Hey, I just ran into the same issue.
For me it does not occur when I explicitly specify the `pad_token_id` as in this example:
```
model.generate(
prompt_tokenized,
params=model.params,
pad_token_id=50256,
)
```
I'm also using the `"EleutherAI/gpt-neo-1.3B"` model, one might need to adjust the `pad_token_id` for different models / tokenizers.
You can also use the workaround as in the test here: https://github.com/huggingface/transformers/blob/a541d97477a8901e37e5f850f2cd707ffc82445b/tests/generation/test_generation_flax_utils.py#L83-L85
-> just set `model.config.pad_token_id = model.config.eos_token_id` before generating.<|||||>Hey @SamKG!
As @maxidl has kindly pointed out, the `pad_token_id` needs to be specified for autoregressive generation.
You can set this to `tokenizer.pad_token_id` **if** the tokenizer has a `pad_token_id` defined:
```python
if tokenizer.pad_token_id is not None:
model.config.pad_token_id = tokenizer.pad_token_id
```
Otherwise, setting it to the `eos_token_id` is possible:
```python
model.config.pad_token_id = model.config.eos_token_id
```
The `pad_token_id` should always either be passed to the `.generate()` method or specified in the `config`.
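Putting the pieces together, a minimal end-to-end sketch (reusing `model`, `tokenizer` and `input_ids` from the snippet above; GPT-Neo has no pad token, so the EOS id is reused):
```python
generated = model.generate(
    input_ids,
    pad_token_id=model.config.eos_token_id,
    max_length=30,
)
print(tokenizer.batch_decode(generated.sequences, skip_special_tokens=True))
```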
@patrickvonplaten IMO it's worth a warning message when the `pad_token_id` is omitted! Would prevent hidden errors such as this one.
Also cc'ing @patil-suraj who I believe has come across this issue before with GPT-J models<|||||>@sanchit-gandhi added to the `generate` to do list 👍 <|||||>Thanks @gante <|||||>#21009 Fixes it -- Flax now assumes the value of `pad_token_id` when it is `None` and `eos_token_id` is not `None`, like TF and PT do. This should also be the case in the examples above.
@SamKG @maxidl I'm closing this issue as it seems to be solved, but feel free to reopen it with further queries :) |
transformers | 18,883 | closed | BART decoder output length changes | ### System Info
Hi! For BART conditional generation, why does the output sequence length change?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
m = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
t = AutoTokenizer.from_pretrained("facebook/bart-base")
b = t(["This is test"] ,
padding='longest',
truncation=True,
is_split_into_words=False,
max_length=512,
return_tensors='pt')
print("input data: ", b)
print("model logit output shape: ", m(**b.data).logits.shape)
```
```text
input data: {'input_ids': tensor([[ 0, 713, 16, 1296, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])}
model logit output shape: torch.Size([1, 5, 50265])
```
The sequence length 5 seems to vary based on the input size. Doesn't this mean that the output can never be longer than the input?
In addition, it affects how the `y` target sequence length is determined. The `y` target tokenization cannot be set to `max_length`, nor to `longest`, because then, when the loss function is computed, the predicted sequence length doesn't match the target.
```
batch_y = t(
text=label_texts,
padding='max_length',
truncation=True,
is_split_into_words=False,
max_length=self._max_length,
return_tensors='pt',
)
```
### Expected behavior
I am not sure how varying output length works with having to compute the loss function | 09-03-2022 19:09:17 | 09-03-2022 19:09:17 | Hi @elangovana ,
> The sequence length 5 seems to vary based on the input size. Doesn't this mean that the output can never be longer than the input?
The output can be longer than the input because Bart is an encoder-decoder model, so the decoder can output a variable number of tokens, independent of the length of the input. This is because the decoder works in an auto-regressive manner.
Bart is an [encoder-decoder model](https://huggingface.co/blog/encoder-decoder), so the length of 5 in your example is something which will be consumed by the encoder part of Bart. The length of the target is something which the decoder part of Bart will be concerned about. Bart, like any other model, accepts `input_ids` and `labels`. The `input_ids` are input to the encoder, whereas the `labels` are the expected output for the decoder.
Now how do we get the `input_ids` for the decoder of Bart?
* During training: you would provide the model with labels which will be shifted internally by HF and will be fed as input to the decoder (for teacher forcing style of decoding)
* During Inference/test: you would generate the decoder output **auto-regressively** wherein the output of the decoder in previous time step becomes the decoder input to the current time step
How long can the decoder output be? This is decided by your choice of stopping criteria for the auto-regressive decoding, e.g.:
* stop when the number of tokens in the decoded sequence reaches a max limit
* stop when all the decoded sequences in a batch have output a terminating token like EOS
[Check this out ](https://huggingface.co/blog/how-to-generate) for more info on `generate` function in HF and auto-regressive decoding.
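To make this concrete, here is a small illustrative sketch that reuses `m`, `t` and `b` from the snippet above (the target text is arbitrary):
```python
# Training-style call: the labels decide the decoder length (teacher forcing).
labels = t(["a short target"], return_tensors="pt").input_ids
out = m(**b, labels=labels)
print(out.logits.shape)  # (1, labels_length, vocab_size)

# Inference-style call: generate() decides the length via its stopping criteria.
generated = m.generate(b["input_ids"], max_length=20)
print(generated.shape)   # (1, <=20)
```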
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,882 | closed | Tried running Stable Diffusion GRisk GUI.exe and no go. | ### System Info
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001A24A9D15E0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001A24A9E4940>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 10 files to the new cache system
0%| | 0/10 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
Not sure how to proceed..
@LysandreJik
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Tried running Stable Diffusion GRisk GUI.exe and no go.
### Expected behavior
Expected the GUI to load.. | 09-03-2022 18:23:02 | 09-03-2022 18:23:02 | Sorry the GUI did open on another monitor I had turned off but the terminal did mention some issues. |
transformers | 18,881 | closed | Flax BART training fails when evaluating | ### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.9.1
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
### Who can help?
@sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. make a dir `./my_bart_model`
2. Train a tokenizer (let's use a small, Dutch corpus). **Note**: the repo README uses `tokenizer.save` but that only saves tokenizer.config and not the merges, so I think this is a second issue that should be fixed. Below I use `save_model` instead.
```python
from datasets import load_dataset
from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer
# load dataset
dataset = load_dataset("dbrd", "plain_text", split="train")
# Instantiate tokenizer
tokenizer = ByteLevelBPETokenizer()
def batch_iterator(batch_size=1000):
for i in range(0, len(dataset), batch_size):
yield dataset[i: i + batch_size]["text"]
# Customized training
tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
# Save files to disk
tokenizer.save_model("./my_bart_model")
```
3. Create a BART config for it
```python
from transformers import BartConfig
config = BartConfig.from_pretrained("facebook/bart-base", vocab_size=50265)
config.save_pretrained("./my_bart_model")
```
4. Train the model with a quick evaluation (command from the root of the transformers lib)
```sh
python examples/flax/language-modeling/run_bart_dlm_flax.py --output_dir ./my_bart_model --config_name ./my_bart_model --tokenizer_name ./my_bart_model --dataset_name dbrd --dataset_config_name plain_text --max_seq_length 128 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 1e-4 --warmup_steps 100 --overwrite_output_dir --logging_steps 200 --save_steps 500 --eval_steps 200
```
This leads to the following error (also note the VisibleDeprecation, although that might be unrelated to the triggered error):
```
transformers/examples/flax/language-modeling/run_bart_dlm_flax.py:288: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
{k: np.array([examples[i][k] for i in range(len(examples))]) for k, v in examples[0].items()}
Evaluating ...: 99%|██████████████████████████████████████████████████████████████████▏| 77/78 [00:08<00:00, 9.03it/s]Training...: 14%|█████████▍ | 200/1429 [00:54<05:32, 3.69it/s]Epoch ... : 0%| | 0/3 [00:54<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 964, in <module>
main()
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 896, in main
model_inputs = data_collator(samples)
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 291, in __call__
batch["decoder_input_ids"] = shift_tokens_right(
File "/home/bram/.local/share/virtualenvs/bart-tTDq1jwG/lib/python3.8/site-packages/transformers/models/bart/modeling_flax_bart.py", line 228, in shift_tokens_right
shifted_input_ids[:, 1:] = input_ids[:, :-1]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
```
### Expected behavior
No errors and preferably no deprecation warnings. | 09-03-2022 17:43:05 | 09-03-2022 17:43:05 | This seems to originate when this basic collate function is unsuccessful in creating valid numpy arrays, i.e., with one unspecified dimension, like `(2,)` instead of `(2, 16)` (bsz x seq_len).
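A tiny standalone demo of why that produces the `IndexError` above — equal-length examples stack into a 2-D array, while ragged ones collapse into a 1-D object array that cannot be indexed with `[:, :-1]`:
```python
import numpy as np

ok = np.array([[1, 2, 3], [4, 5, 6]])
print(ok[:, :-1].shape)  # (2, 2)

ragged = np.array([[1, 2, 3], [4, 5]], dtype=object)
print(ragged.shape)      # (2,) -> ragged[:, :-1] raises "too many indices for array"
```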
https://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L287-L289
I haven't dug further in why this occurs here, but presumably because the tokenizer is not explicitly padding to max length so in edge cases the sequences might not be of the same length? (The error mentioned above happened in the last batch (as you can see in the progress bar, 77/78)) so probably the last batch contained the last sequence which was not of the expected size.<|||||>Hey @BramVanroy! Thank you for posting this issue; the Flax BART training example is fresh off the press, so there might well be some small issues to fix.
Based on the traceback, I would agree that this is likely an issue related to tokenizer padding. Have you tried padding to max length? This seems like a very sensible next step!
Keep me posted with how you go on this! Happy to help if this remains a road-block.<|||||>I haven't figured this out yet. From reading the code, all blocks should be of size sequence length and the small remainder dropped:
https://github.com/huggingface/transformers/blob/cfd623a859890c6d106610d3c688064eadc7bd61/examples/flax/language-modeling/run_bart_dlm_flax.py#L657-L660
So I haven't looked further how this is being caused.
By the way, also getting these UserWarnings:
```
Some donated buffers were not usable: ShapedArray(float32[1,50265]), ShapedArray(float32[1026,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), 
ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[1026,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), 
ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[50265,768]).
See an explanation at https://jax.readthedocs.io/en/latest/faq.html#buffer-donation.
warnings.warn(f"Some donated buffers were not usable: {', '.join(unused_donations)}.\n{msg}")
```
I am working on transposing `fairseq`'s data implementation to PyTorch and adding a full training example in transformers. Would you be open to code-reviewing it when it's finished? The train script itself will probably borrow a lot from your code here if that's okay!<|||||>> From reading the code, all blocks should be of size sequence length and the small remainder dropped:
Certainly, that should ideally be the case!
> By the way, also getting these UserWarnings:
Are those UserWarnings being thrown in the parameter update step? It suggests to me a mis-match between the update and parameter dtypes!
> I am working on transposing fairseq's data implementation to PyTorch and adding a full training example in transformers.
More than happy to perform a code-review when finished!
Also cc'ing @duongna21 who must take all credit for implementing the Flax BART training example!<|||||> Hi @BramVanroy, I'm the main author of `run_bart_dlm_flax`. It's unfortunate that I cannot reproduce your bug. Ran your code and things work as expected except for the `nan` loss bug that you can handle [like this](https://github.com/huggingface/transformers/pull/18458#discussion_r976303806).

After setting `drop_last=True`, the val loss is fine:

<|||||>@duongna21 Thanks for chiming in! I am going to close this as I am not working on this directly any more. I assume that drop_last should indeed fix the issue. |
transformers | 18,880 | closed | Are there any higher version transformers compatible with transformers==3.0.2 | ### System Info
Hi. I am working with code that can only run on Transformers==3.0.2; however, there are other methods which only run on a higher version. So I want to ask if there is a higher version that is compatible with transformers==3.0.2, or compatible with only a little revision? Many thanks!
### Who can help?
@NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
Code of facebook research [FiD](https://github.com/facebookresearch/FiD)
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
NaturalQuestions | TriviaQA
### Reproduction
Run the [FiD code](https://github.com/facebookresearch/FiD) on a different version of transformers
### Expected behavior
Attribute error | 09-03-2022 16:36:25 | 09-03-2022 16:36:25 | Hey @CaffreyR! We would recommend you migrate your codebase from v3.0.2 to v4.x, which should then be compatible with all newer methods.
What is your problem when trying to upgrade?<|||||>Hi @LysandreJik , I run the [code](https://github.com/facebookresearch/FiD) in transformers 3.0.2, it works well. But in 4.21.3, it went wrong! Many thanks!
```
/tokenization_t5.py:220: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.
f"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated"
/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5.py:220: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.
f"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated"
Traceback (most recent call last):
File "train_reader.py", line 212, in <module>
checkpoint_path
File "train_reader.py", line 55, in train
labels=labels.cuda()
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/FiD/src/model.py", line 45, in forward
**kwargs
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1686, in forward
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
AttributeError: 'tuple' object has no attribute 'last_hidden_state'
```

<|||||>You can fix the code to make it work like this on the latest version:
```train_loss = model(input_ids=context_ids.cuda(), attention_mask=context_mask.cuda(), labels=labels.cuda(), return_dict=False )[0]```
As per the official docs, the forward method takes this argument which determines the format of the output:
>return_dict (bool, optional) — Whether or not to return a [ModelOutput](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.<|||||>Hi @saradhix , as we fixed this problems, a new problem occurred. Seems to be another version problem
```
Traceback (most recent call last):
File "train_reader.py", line 213, in <module>
checkpoint_path
File "train_reader.py", line 71, in train
dev_em = evaluate(model, eval_dataset, tokenizer, collator, opt)
File "train_reader.py", line 114, in evaluate
max_length=50
File "/home/user/FiD/src/model.py", line 54, in generate
max_length=max_length
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/generation_utils.py", line 1163, in generate
inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(inputs, bos_token_id, model_kwargs)
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/generation_utils.py", line 412, in _prepare_model_inputs
and self.encoder.main_input_name != self.main_input_name
File "/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1208, in __getattr__
type(self).__name__, name))
AttributeError: 'EncoderWrapper' object has no attribute 'main_input_name'
```
<img width="571" alt="image" src="https://user-images.githubusercontent.com/84232793/188636847-4d3b0b3d-538e-4d38-ba36-7b26a9fff938.png">
<|||||>The `main_input_name` attribute is something we introduced in a later version (in order to make the `generate` method work with text, vision and speech models, i.e. several modalities). It refers to the main input name of a model, like "input_ids" for text models, or "pixel_values" for vision models.
Each model defines this, see for instance [here](https://github.com/huggingface/transformers/blob/09178705101b9803e7b9ea7f79a46b4c242dd4bf/src/transformers/models/resnet/modeling_resnet.py#L252) for ResNet.<|||||>Hi @NielsRogge , so what should I do? Should I define it myself in the model?<|||||>Here's the PR that introduced it: https://github.com/huggingface/transformers/pull/14803
Yes, it's set to "input_ids" by default (and overwritten by vision and speech models)<|||||>Hi @NielsRogge @saradhix , it is very interesting that when I do this, it says warning.
```
/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5.py:174: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
FutureWarning,
```
Is there any problem with this warning? Many thanks!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi @NielsRogge , so what should I do? Should I define it myself in the model?
I meet the same problem. And I fixed it by adding main_input_name = "input_ids"
<img width="1199" alt="image" src="https://user-images.githubusercontent.com/74954034/236134400-01fd31cd-d156-4d41-865c-69dfd66b4425.png">
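For reference, a minimal sketch of what that fix might look like (a hypothetical stand-in for FiD's wrapper — the real class does more work, e.g. reshaping the passage inputs):
```python
import torch

class EncoderWrapper(torch.nn.Module):
    # Class attribute that newer versions of generate() expect on the encoder.
    main_input_name = "input_ids"

    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        return self.encoder(input_ids=input_ids, attention_mask=attention_mask, **kwargs)
```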
|
transformers | 18,879 | closed | Token type ids generation func. for Bert-like models | # What does this PR do?
For a text sequence input (separated with a comma), the Bert tokenizer nicely creates the input_ids with the [SEP] token inside and proper token_type_ids as well. However, for input_ids that already have the [SEP] token id inside, I couldn't find any function to generate proper token_type_ids.
So, I made a simple function which creates proper token_type_ids with padding.
Input : list or tensor / pad_to_multiple_of (target length) / tokenizer (to recognize sep token id and add proper pad token id)
output : tensor of token_type_ids with pad id
Hope this might be used for those who want to override DataCollator class.
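A rough sketch of the idea (illustrative only — the names, exact padding behaviour and edge cases may differ from the function in this PR):
```python
import torch

def make_token_type_ids(input_ids, tokenizer, pad_to_multiple_of=None):
    ids = input_ids.tolist() if torch.is_tensor(input_ids) else list(input_ids)
    token_type_ids, segment = [], 0
    for tok in ids:
        token_type_ids.append(segment)
        if tok == tokenizer.sep_token_id:  # tokens after a [SEP] belong to the next segment
            segment = min(segment + 1, 1)
    if pad_to_multiple_of:
        pad_len = (-len(token_type_ids)) % pad_to_multiple_of
        token_type_ids += [tokenizer.pad_token_type_id] * pad_len
    return torch.tensor(token_type_ids)
```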
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [V] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- new function for Bert-like models collator : @LysandreJik
| 09-03-2022 14:34:49 | 09-03-2022 14:34:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18879). All of your documentation changes will be reflected on that endpoint. |
transformers | 18,878 | closed | Can't disable INTEGRATION_TO_CALLBACK | ### System Info
I use transformers 4.20.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the Hugging Face Trainer, I set `report_to=['none']` in the training args to disable wandb logging, as the docs say, but a ValueError is raised. I notice it's because of the problem in the following code in `transformers/integrations.py`:
```python
def get_reporting_integration_callbacks(report_to):
for integration in report_to:
if integration not in INTEGRATION_TO_CALLBACK:
raise ValueError(
f"{integration} is not supported, only {', '.join(INTEGRATION_TO_CALLBACK.keys())} are supported."
)
return [INTEGRATION_TO_CALLBACK[integration] for integration in report_to]
```
No 'none' logic is dealt with, nor any other disable method?
### Expected behavior
I don't know if it's expected, but it's confusing me and causing inconvenience. So i would like to get the answer :) | 09-03-2022 05:11:38 | 09-03-2022 05:11:38 | cc @sgugger<|||||>No, this code is not called when you set `report_to="none"` or `report_to=["none"]` since it is filtered out [here](https://github.com/huggingface/transformers/blob/6678350c01629b848aa9c41e169da5d6b8d9e7e9/src/transformers/training_args.py#L1155).
Setting `report_to=["none"]` in `TrainingArguments` works perfectly on my end, please make sure you are using the latest Transformers version and if the issue persists, please give us a code reproducer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,877 | closed | Update no trainer scripts to include gather for metrics | # What does this PR do?
Update the run_wav2vec_pretraining_no_trainer example to include `accelerator.gather_for_metrics`
Related to #18437
I ran the tests for 'wav2vec_pretraining_no_trainer' in the 'test_pytorch_examples.py' file locally.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr , @sgugger , @pacman100
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 09-03-2022 03:16:25 | 09-03-2022 03:16:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,876 | closed | BigBirdTokenizer Serialization Error on Spark | ### System Info
Hello! Running `transformers-cli env` throws the following error for me on the databricks cluster that I'm using currently (but it works locally).
```python
/databricks/conda/envs/default/lib/python3.8/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.10) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
File "/databricks/conda/envs/prism_ai/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 26, in <module>
from .user import UserCommands
File "/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/transformers/commands/user.py", line 20, in <module>
from huggingface_hub.hf_api import HfFolder, create_repo, login, logout, whoami
ImportError: cannot import name 'login' from 'huggingface_hub.hf_api' (/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/huggingface_hub/hf_api.py)
```
I can provide some of the info that I know (we maintain the container that we use):
```
transformers: 4.17.0
python: 3.8.12
pt version: 1.11.0+cu113
tf version: 2.8.0
flax: N/A
jax: N/A
jaxlib: N/A
using gpu: nope
using distributed: yes, via pyspark udf
```
The reprex below this produces the following error:
```bash
---------------------------------------------------------------------------
...
/databricks/spark/python/pyspark/sql/dataframe.py in count(self)
668 2
669 """
--> 670 return int(self._jdf.count())
671
672 def collect(self):
/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
121 # Hide where the exception came from that shows a non-Pythonic
122 # JVM exception message.
--> 123 raise converted from None
124 else:
125 raise
PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__
self.sp_model.Load(self.vocab_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2'. Full traceback below:
Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__
self.sp_model.Load(self.vocab_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2
During handling of the above exception, another exception occurred:
pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads
return pickle.loads(obj, encoding=encoding)
File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__
self.sp_model.Load(self.vocab_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2
```
I believe this is related to #15982. If this is related, it seems we need fast tokenizers for `Marian` and now also `BigBird`. Is anyone working on that currently? If not I could take a stab at `BigBird` (or maybe we just need to add `BigBird` functionality to `PreTrainedTokenizerFast`?)
### Who can help?
BigBird: @ydshieh
Marian: @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is a (simplified) reprex of what my function(s) look like and I'm running this on a multi-node databricks cluster on azure.
```python
import functools
from pyspark.sql import functions as F, types as T
from transformers import BigBirdTokenizer
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
return_type = T.ArrayType(T.IntegerType())
def create_tokens(tokenizer, text):
return tokenizer.encode_plus(text)["input_ids"]
create_tokens_partial = functools.partial(create_tokens, tokenizer=tokenizer)
create_tokens_udf = F.udf(lambda text: create_tokens_partial(text=text), returnType=return_type)
df = df.withColumn("InputIds", create_tokens_udf(F.column("Text")))
df.cache().count()
```
### Expected behavior
Serialization of the `BigBirdTokenizer` that can be shipped off to worker nodes. | 09-03-2022 02:16:40 | 09-03-2022 02:16:40 | Hey @jmwoloso Thank you for reporting the issue. I have a few questions though: I see there is [fast big_bird tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/tokenization_big_bird_fast.py) in `transformers`. Have you tried it?<|||||>Hi @ydshieh. No I havne't tried that directly. I tried the `AutoTokenizer` which uses the Fast tokenizers by default (so I assume the behavior will be the same), but the problem with the fast tokenizers is that you cannot feed them pre-tokenized text like you can the vanilla tokenizers, and I need to do that for my use case.<|||||>So it seems we don't need a fast tokenizer implementation, but the fast tokenizers actually need feature parity with the vanilla tokenizers.<|||||>Unless using the `BigBirdTokenizerFast` directly is the solution, rather than `AutoTokenizer`?<|||||>Possibly related [#1045](https://github.com/huggingface/tokenizers/issues/1045)<|||||>I was able to come up with a workaround @ydshieh, but it still feels like the fast tokenizers should behave the same as the standard tokenizers (i.e. accept tokenized inputs)
The solution I came up with tries to load the tokenizer from the local node where the UDF is being run. If it doesn't exist, we download it and save it for future calls to make use of.
expanding upon `create_tokens` in my reprex above:
```python
def create_tokens(text):
from transformers import BigBirdTokenizer
model = "google/bigbird-roberta-base"
try:
# try to access it locally first
tokenizer = BigBirdTokenizer.from_pretrained(f"/{model}")
except Exception:
tokenizer = BigBirdTokenizer.from_pretrained(model)
tokenizer.save_pretrained(f"/{model}")
return tokenizer.encode_plus(text)["input_ids"]
```<|||||>@jmwoloso I am glad you found a workaround!
But for me to understand the problem a bit better, on the remote nodes (hopefully this term makes sense - I am not really familiar with Spark ecosystem), they are not able to download/load the tokenizer from `google/bigbird-roberta-base`?
And if we download and save it (from the local node where the udf is being run), it could be loaded (through a local directory?) when the code is running on the remote nodes?
----
Regarding the pre-tokenization in fast tokenizers, probably there are some limitations in `tokenizers` (which is written in `rust`) and is designed to be so. I am not very familiar with that libraries. If you would like to, maybe you can open a feature request in [tokenizers](https://github.com/huggingface/tokenizers), with a description of the current limitation you encounter.<|||||>Hi @ydshieh thank you for the quick reply. The issue I linked to above in the tokenizers library captures the limitation I believe, so I can follow up on that thread to see about what is possible there.
And yes "remote nodes" is a fine term to use, we typically think about Spark clusters in terms of the driver node (the node where our notebooks run) and the worker nodes (which take a serialized form of the commands we specify on the driver node and use those to do work on their slice of the data).
Spark also has user-defined functions (UDFs) which are python functions that can be serialized and sent to the worker nodes so they have what they need to operate on data. I have tried instantiating the tokenizer within the udf, but a spark cluster has 200 partitions (jobs) by default and you can adjust that to suit your workload. The problem is that each job will call it's version of the udf which results (in this case) in 200 calls to download the tokenizer and we get an error that we've made too many requests to the site.
The alternative to this is to use partial application with the udf. So create the tokenizer a single time on the driver node and then we use partial application to apply the tokenizer to the udf so that the tokenizer gets shipped along with the udf to each of the workers.
This works just fine for (as far as I can tell) the non-sentencepiece based tokenizers. The spiece tokenizers however, seem to try and reload the model after they've been sent to the workers. The problem is that the cache exists on the driver node, but not on the worker nodes, so the tokenizer tries to load its config again and the cache doesn't exist so the job fails.
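(For completeness, another common way to ship a heavy object to the executors is a Spark broadcast variable; a rough, untested sketch is below, reusing `spark`, `tokenizer`, `return_type` and `df` from my reprex. I'd expect the spiece tokenizers to hit the same reload behavior there, since the object still gets pickled out to the workers either way.)
```python
# sketch only: broadcast the tokenizer once from the driver
bc_tokenizer = spark.sparkContext.broadcast(tokenizer)

@F.udf(returnType=return_type)
def create_tokens_udf(text):
    # each task reads the broadcast copy instead of re-downloading
    return bc_tokenizer.value.encode_plus(text)["input_ids"]

df = df.withColumn("InputIds", create_tokens_udf(F.column("Text")))
```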
My solution above does exactly what you suggest, I try to access the tokenizer from the local file system of the worker node first and if that fails, I download it and save the config on the worker node so that subsequent calls to the udf use the locally saved tokenizer instead of making the api call to download it.<|||||>Thank you @jmwoloso for the patience and effort to answer my question!
Actually, I was not giving a suggestion in my last reply, just trying to understand your workaround/solution better :-)
> The problem is that each job will call it's version of the udf which results (in this case) in 200 calls to download the tokenizer and we get an error that we've made too many requests to the site.
> I download it and save the config on the worker node so that subsequent calls to the udf use the locally saved tokenizer instead of making the api call to download it.
If you still have a bit of time, one last question (as I am a bit confused here):
all those ~200 jobs can access the locally saved tokenizer, downloaded only once on a particular worker? Or each worker still has to download it (so totally ~200 downloads) separately, but subsequent calls on each node won't download it again?
From the description, I believe you mean the former. But I am a bit surprised that all workers can access the same file system.
Finally, I am going to close the issue as you already provide a working solution (and the issue is not really a bug in the library). Hope this is OK for you.
<|||||>> all those ~200 jobs can access the locally saved tokenizer, downloaded only once on a particular worker?
Hi @ydshieh Yes, this. One download per worker that any jobs on that worker can then access if my solution above is used.
> Finally, I am going to close the issue as you already provide a working solution (and the issue is not really a bug in the library). Hope this is OK for you.
I'm torn on this because I would expect that all tokenizers behave the same and could be interoperable in a distributed setting without having to make the fix above, but the `spiece` ones behave differently. But having said that, the solution above would work for all tokenizers `spiece` and otherwise. It doesn't feel so much like a bug, just unexpected/inconsistent behavior among tokenizers within the library. Is this something I could add to like a "recipes" section in the documentation as a pattern to be used in distributed settings? I'd be happy to work on that if you think it would be useful, but otherwise, I'm fine with closing this issue. I appreciate your help!
<|||||>Thanks again, @jmwoloso . Let's keep this issue open for now, and I will discuss with 2 colleagues regarding the documentation.<|||||>Sounds great @ydshieh, thank you!<|||||>Hi, @jmwoloso
After a discussion with one of my colleagues @Narsil, we think this scenario is too specific to be documented in the doc. I think, for now, having your workaround on this GitHub issue page is already very good and helpful for others having the same issue :-) Thanks again.
Regarding slow/fast tokenizer, I left [a comment](https://github.com/huggingface/tokenizers/issues/1045#issuecomment-1257802082) :-)
<|||||>Thank you @ydshieh! |
transformers | 18,875 | closed | Disable model checkpoint sharding of large models for SageMaker Model Parallel | Disable model checkpoint sharding of large models for SageMaker Model Parallel
* SageMaker Model Parallel does not support loading these
# What does this PR do?
This PR disables the automatic model checkpoint sharding done in `PreTrainedModel` for SageMaker Model Parallel as SMP does not support loading these.
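(Not necessarily how this PR implements it, just for illustration: sharding can effectively be turned off by raising `max_shard_size` when saving, for example:)
```python
# sketch: a very large max_shard_size keeps the checkpoint in a single file
model.save_pretrained("output_dir", max_shard_size="2000GB")
```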
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-03-2022 02:06:30 | 09-03-2022 02:06:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @philschmid <|||||>Ok, sounds good! I opened a new PR with changes in Trainer instead. #18928 |
transformers | 18,874 | closed | Remove unused `cur_len` in generation_utils.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Removes unused `cur_len` in `generation_utils.py`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-02-2022 19:54:31 | 09-02-2022 19:54:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @patrickvonplaten - please review this one when you get some time.<|||||>Nice! |
transformers | 18,873 | closed | LayoutLMV3 Tokenizer Inserts Odd Characters | ### System Info
(This is on a fresh google colab instance)
- `transformers` version: 4.21.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import LayoutLMv3TokenizerFast
tok = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base')
tok.tokenize('Hello world')
>>> ['ĠHello', 'Ġworld']
```
### Expected behavior
The special characters are not expected when calling tokenize(). This doesn't happen when using the LayoutLMV2 tokenizer. | 09-02-2022 16:39:09 | 09-02-2022 16:39:09 | @logan-markewich -- have a look at this [entry in our forum](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=joaogante)<|||||>@gante Yea ok, that gives some context for what is actually going on
I guess I will just need to remove those characters myself? A little annoying, but oh well<|||||>Hi,
No that's just the way tokenization happens. The tokenization is based on Byte Pair Encoding (BPE) algorithm. This is also used by RoBERTa for instance.
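As a quick illustration (using the same checkpoint as in the snippet above), the Ġ is just the byte-level BPE marker for a preceding space and disappears once the tokens are converted back into a string:
```python
from transformers import LayoutLMv3TokenizerFast

tok = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")
tokens = tok.tokenize("Hello world")
print(tokens)                                 # ['ĠHello', 'Ġworld']
print(tok.convert_tokens_to_string(tokens))   # ' Hello world' (the space comes back)
```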
You can check out the Huggingface course if you want to learn about this: https://huggingface.co/course/chapter6/5?fw=pt |
transformers | 18,872 | closed | we want to submit a PR about Source-Free compression training on huggingface NLP model,Where would you suggest submitting it? | ### Feature request
Hello, we have implemented a Source-Free compression training function, and the benefits on bert-base-cased are shown below. We want to submit a PR. Is that OK, and which repo, huggingface/transformers or huggingface/optimum, is more suitable?


### Motivation
Improve the inference speed of NLP models
### Your contribution
we want to submit a pr for huggingface~ | 09-02-2022 16:10:54 | 09-02-2022 16:10:54 | Hey @leiqing1 ! Thanks a lot for being motivated to contribute 🤗 ! You mention this is about training, right? In this case, I would say `huggingface/transformers` is the place to go. But let's wait the opinions from my colleagues @sgugger @LysandreJik and @michaelbenayoun
As the training (for PyTorch models) is done by using the [Trainer class](https://github.com/huggingface/transformers/blob/ecdf9b06bc03af272ceb8d6951e30e677fdfd35c/src/transformers/trainer.py#L223), the PR is likely to involve the integration your compression method into that class.
<|||||>Let's maybe start with a simple training script that you could add as a new research project in the repo?<|||||>Thanks for the reply. @ydshieh Yes, the compression process is about training. You mean we can prepare a simple training script to submit to https://github.com/huggingface/transformers/tree/main/examples/research_projects this directory?
@sgugger <|||||>Yes, alongside with the extra modules you might need, all in the same folder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,871 | closed | Further reduce the number of alls to head for cached objects | # What does this PR do?
This PR completes #18534 and leverages the cache system of files that do not exist at a given commit in a repo introduced in the last release of `huggingface_hub` (by this [PR](https://github.com/huggingface/huggingface_hub/pull/986)) to further reduce the numbers of calls to the API when trying to load configurations/models/tokenizers/pipelines to just 1 call **every time** the object is cached and the current commit is the same one as the distant repo for the given revision.
cc @Narsil
| 09-02-2022 15:10:25 | 09-02-2022 15:10:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>and also cc @Wauplin :)<|||||>Yes, my plan was to port this to `hugginface_hub` next, along with the commi_hash argument (which does not exist there yet), to then be able to use the function of `huggingface_hub` after the next release!
Thanks for the reviews, will address comments later this morning. |
transformers | 18,870 | closed | mlflow can log a maximum of 100 parameters on Azure ML | When trying to use the Trainer's mlflow integration within Azure ML, it will fail at the `on_train_begin` callback because it tries to log all of the TrainingArguments and the model config which will total more than 100 parameters.
> RestException: INVALID_PARAMETER_VALUE: Response: {'Error': {'Code': 'ValidationError', 'Severity': None, 'Message': 'A field of the entity is over the size limit. FieldName=Parameters, Limit=100, Size=175. See https://aka.ms/azure-machine-learning-limits for service limits documentation.', 'MessageFormat': None, 'MessageParameters': None, 'ReferenceCode': None, 'DetailsUri': None, 'Target': None, 'Details': [], 'InnerError': None, 'DebugInfo': None, 'AdditionalInfo': None}, 'Correlation': {'operation': '9816ae760b843120b907ea5121aeb911', 'request': '4ac415478d4966af'}, 'Environment': 'eastus2', 'Location': 'eastus2', 'Time': '2022-09-02T14:33:48.3242911+00:00', 'ComponentName': 'mlflow', 'error_code': 'INVALID_PARAMETER_VALUE'}
Doing this outside of Azure ML does not produce this error.
I know there is a separate AzureML callback, but I believe this is for the older version of Azure ML and the newer version just uses mlflow. I could not get it to work using only the Azure ML callback.
There are many values in TrainingArguments that are not super important and are typically never set, so the easy workaround is to limit the number of TrainingArguments that get logged. After going through all arguments, I identified 40 or so that are the most important for logging. The rest of the arguments can still be saved to the output directory by saving all arguments as a json file.
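A minimal sketch of that idea (assuming `training_args` is the `TrainingArguments` instance; the allowlist below is just an illustrative guess, not the exact set I ended up with):
```python
import json
import mlflow

KEEP = {"learning_rate", "num_train_epochs", "per_device_train_batch_size", "weight_decay", "seed"}

all_args = training_args.to_dict()
mlflow.log_params({k: v for k, v in all_args.items() if k in KEEP})

# the full argument set still gets saved to the output directory
with open("output/training_args.json", "w") as f:
    json.dump(all_args, f, indent=2, default=str)
```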
If you want to see this error for yourself, run the following code inside Azure ML:
```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
import mlflow
from transformers import AutoModel, TrainingArguments
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri)
with mlflow.start_run():
targs = TrainingArguments("output")
model = AutoModel.from_pretrained("albert-base-v2")
mlflow.log_params(targs.to_dict())
mlflow.log_params(model.config.to_dict())
```
Questions:
1. Should a change be made to the mlflow callback?
2. Alternatively, should Azure ML + mlflow users not use the mlflow callback and instead do their own custom logging?
3. Should an updated Azure ML callback be created?
@sgugger | 09-02-2022 14:47:40 | 09-02-2022 14:47:40 | Note that we are not maintaining the external logging platforms ourselves, their creators are supposed to do this work. Since there is no one in Microsoft that is pushing this forward, some community contributed callbacks have been created, but it's not guaranteed that they work. I would recommend using other trackers that are better maintained until some folks at MlFlow/Azure put the work for a nice integration :-)<|||||>I'll just make my own callback. Thanks!<|||||>Hi nbroad1881,
I am facing the same problem that you have mentioned at the beginning, however, I am not sure about the workaround and its bit urgent. Can you please help me how were you able to have your own callback? This will be of great help.
Regards,
Dev<|||||>@dks198 Something you might be able to do for a custom AzureMLCallback once you've defined your trainers:
```python
from transformers import TrainerCallback
import importlib.util


def is_azureml_available():
    if importlib.util.find_spec("azureml") is None:
        return False
    if importlib.util.find_spec("azureml.core") is None:
        return False
    return importlib.util.find_spec("azureml.core.run") is not None


class AzureMLCallback(TrainerCallback):
    """
    A [`TrainerCallback`] that sends the logs to [AzureML](https://pypi.org/project/azureml-sdk/).
    """

    def __init__(self, azureml_run=None):
        if not is_azureml_available():
            raise RuntimeError("AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`.")
        self.azureml_run = azureml_run

    def on_init_end(self, args, state, control, **kwargs):
        from azureml.core.run import Run

        if self.azureml_run is None and state.is_world_process_zero:
            self.azureml_run = Run.get_context()

    def on_log(self, args, state, control, logs=None, **kwargs):
        if self.azureml_run and state.is_world_process_zero:
            for k, v in logs.items():
                if isinstance(v, (int, float)):
                    self.azureml_run.log(k, v, description=k)
```

```python
trainer.add_callback(AzureMLCallback)
trainer.train()
```<|||||>I believe AzureML logging is being deprecated for mlflow.
@dks198, you can use the [answer](https://github.com/huggingface/accelerate/pull/675#discussion_r974482489) I gave in the other thread. You can just limit the number of parameters logged.<|||||>Hi,
I tried implementing the code you have shared for my Bert model training; however, I still get the same error message. I am trying to figure out how to implement this so that I can get through this error. I have attached the error message for your reference.
Regards,
Dev
<|||||>AzureML recently raised the limit to the number of parameters that can be logged per mlflow run to 200. This should unblock using HF autolog in the issue raised initially. That change has been rolled out earlier in October. By now, you should be able to drop the workaround and just use HF autolog with mlflow in AzureML. |
transformers | 18,869 | closed | pin Slack SDK to 3.18.1 to avoid failing issue | # What does this PR do?
Currently CI Slack reports failed to be sent due to an error
```bash
The server responded with: {'ok': False, 'error': 'invalid_blocks_format'}
```
It happens since `slack-sdk-3.18.2`. This PR pin `slack-sdk-3.18.1` so we can receive the reports. | 09-02-2022 14:31:30 | 09-02-2022 14:31:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,868 | closed | Remove cached torch_extensions on CI runners | # What does this PR do?
- The test
```
tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_scheduler_ds_optimizer
```
failed since 2 weeks due to some cache issue. The error message is
```bash
E ImportError: /github/home/.cache/torch_extensions/py38_cu113/fused_adam/fused_adam.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE`
```
- After I remove the cache (on the host runners, not inside the running docker) by
```bash
sudo rm -rf /home/github_actions/actions-runner/_work_temp/_github_home/.cache/torch_extensions/py38_cu113/
```
the test passes.
- This PR add the following in the workflow file
```bash
rm -rf /github/home/.cache/torch_extensions/
```
to avoid the same problem occurring in the future.
Remark: Notice the host directory
```
/home/github_actions/actions-runner/_work_temp/_github_home/
```
is mapped to
```
/github/home/
```
inside the running docker (we can see this in the job run page). | 09-02-2022 13:44:17 | 09-02-2022 13:44:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,867 | closed | [OWL-ViT] Add model to the appropriate section | # What does this PR do?
This PR moves OWL-ViT to the "multimodal" section in the docs, as the model isn't vision-only.
cc @stevhliu | 09-02-2022 13:31:41 | 09-02-2022 13:31:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,866 | closed | Added AnyPrecisionAdamW as an optimizer | Added AnyPrecisionAdamW as an optimizer
- Related Issue
#18827
@stas00
| 09-02-2022 10:37:35 | 09-02-2022 10:37:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for trying, @Zeesky-code - but that won't do anything ;)
I guess my instructions weren't very instructive I've just pointed to a few places where one would start looking at mimicking the integration of other optimizers, my apologies if it wasn't obvious.
So in this case it'd follow the path of `adamw_torch` , as it's the nearest similar optimizer.
and as I said the key to the PR is tests and documentation. Again checking the existing tests and working from there is what's needed.
If it's too much and you're no longer interested please don't hesitate to comment in the feature request issue that it's open for grabs again. If you want to continue, that is great too - please don't hesitate to ask questions if any.
p.s. it might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from. And you can see the scope of work that needs to be done.
<|||||>
Ohh, I've taken a look at the PR that added adamw_bnb_8bit and I'm afraid I don't think I'll be able to work on this.
I'll close this PR and let others know the issue is still available to work on.
Thank you😅
> Thank you for trying, @Zeesky-code - but that won't do anything ;)
>
> I guess my instructions weren't very instructive I've just pointed to a few places where one would start looking at mimicking the integration of other optimizers, my apologies if it wasn't obvious.
>
> So in this case it'd follow the path of `adamw_torch` , as it's the nearest similar optimizer.
>
> and as I said the key to the PR is tests and documentation. Again checking the existing tests and working from there is what's needed.
>
> If it's too much and you're no longer interested please don't hesitate to comment in the feature request issue that it's open for grabs again. If you want to continue, that is great too - please don't hesitate to ask questions if any.
>
> p.s. it might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from. And you can see the scope of work that needs to be done.
<|||||>Thank you for an honest evaluation, @Zeesky-code - and much appreciated for trying! |
transformers | 18,865 | closed | A script to download artifacts and perform CI error statistics | # What does this PR do?
This script is helpful for the past CI project. It downloads all artifacts from a workflow run, and get the error statistics + the corresponding failing tests.
`errors.json`: the places where error occur + what those errors are
`failed_tests.json`: which test methods failed (can be used to determine which models will be supported in a specific backend version)
** We might adjust this script a bit once we start to perform the automation of the past CI project. **
Currently, it prints something as the following (but we save the full information in 2 json files)
```bash
('RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.', 66)
("AttributeError: module 'torch.jit' has no attribute '_state'", 51)
("AttributeError: module 'torch' has no attribute 'minimum'", 45)
("AttributeError: module 'torch' has no attribute 'multiply'", 25)
('AttributeError: Caught AttributeError in replica 0 on device 0.', 3)
('RuntimeError: "normal_kernel_cpu" not implemented for \'BFloat16\'', 3)
("AssertionError: Couldn't trace module.", 3)
("AttributeError: module 'torch' has no attribute 'isneginf'", 2)
("TypeError: where(): argument 'input' (position 2) must be Tensor, not int", 2)
("AttributeError: 'Tensor' object has no attribute 'nansum'", 1)
('RuntimeError: Caught RuntimeError in replica 0 on device 0.', 1)
```
| 09-02-2022 10:02:46 | 09-02-2022 10:02:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,864 | closed | Fix naming issue with Image2TextGenerationPipeline | # What does this PR do?
Fixes naming issue with the Image2TextGenerationPipeline: naming was not consistent with other libraries.
See [this message](https://huggingface.slack.com/archives/C014N4749J9/p1662091141349169?thread_ts=1662048034.910319&cid=C014N4749J9).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Narsil, @sgugger
| 09-02-2022 08:15:24 | 09-02-2022 08:15:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 18,863 | closed | alter retrived to retrieved | # What does this PR do?
alter 'retrived' to 'retrieved'
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-02-2022 04:27:19 | 09-02-2022 04:27:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,862 | closed | How can I control data feeding order to model using Huggingface Trainer? | ### Feature request
I want to train the model on the data in exactly the order in which it is stored.
For example, if there are 100 examples, I want to feed the 1st and 2nd examples together (because I set batch_size=2 in the code), then the 3rd and 4th, then the 5th and 6th, and so on....
But the Hugging Face Trainer shuffles the training data (via its sampler, seeded by the `data_seed` parameter) before the data collator batches it.
**How can I train the model while feeding the data in the order in which it is stored?**
```
# load tokenizer
model_checkpoint = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# load model
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
# make batch
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
batch_size = 2
epochs = 3
args = Seq2SeqTrainingArguments(
    output_dir="saved_model",
    overwrite_output_dir=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    num_train_epochs=epochs,
    predict_with_generate=True,
    fp16=False,
    dataloader_num_workers=8,
)
trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```
### Motivation
I want to control data feeding order to the model.
### Your contribution
I want to control data feeding order to the model. | 09-02-2022 01:29:14 | 09-02-2022 01:29:14 | You can subclass the Seq2SeqTrainer and override the [_get_train_sampler](https://github.com/huggingface/transformers/blob/8d59385f124dd1b330cac7eaa7162799870793ec/src/transformers/trainer.py#L759) method. Instead of creating a RandomSampler object, create a [SequentialSampler](https://pytorch.org/docs/stable/data.html#torch.utils.data.SequentialSampler).
```python3
from transformers.trainer_seq2seq import Seq2SeqTrainer
from torch.utils.data import SequentialSampler
class SequentialSeq2SeqTrainer(Seq2SeqTrainer):
    def _get_train_sampler(self) -> SequentialSampler:
        return SequentialSampler(self.train_dataset)
```<|||||>Thank you!! I'll try as you mentioned.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 18,861 | closed | Convert logged learning rate from tensor to float via `.item()` so it can be JSON serialized. | Fixes #18860
@sgugger
| 09-02-2022 00:42:14 | 09-02-2022 00:42:14 | _The documentation is not available anymore as the PR was closed or merged._ |