repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 17,254 | closed | Add fast tokenizer for BARTpho | This PR is to add a "fast" BARTpho tokenizer (backed by HuggingFace's *tokenizers* library).
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-14-2022 15:11:15 | 05-14-2022 15:11:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17254). All of your documentation changes will be reflected on that endpoint.<|||||>Following: [https://github.com/huggingface/transformers/pull/13788](https://github.com/huggingface/transformers/pull/13788)
I now add a "fast" version of the BartphoTokenizer.
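For reviewers, here is a quick usage sketch of what this enables (a sketch only, assuming the fast class is wired into `AutoTokenizer` as in this PR; the example sentence is arbitrary):
```python
from transformers import AutoTokenizer

# With this PR, use_fast=True should load BartphoTokenizerFast (sketch)
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable", use_fast=True)
encoding = tokenizer("Chúng tôi là những nghiên cứu viên.", return_tensors="pt")
print(type(tokenizer).__name__, encoding.input_ids.shape)
```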
@sgugger , @LysandreJik, @patil-suraj , @SaulLu and @patrickvonplaten Please could you have a look and provide your feedback? Thanks.<|||||>Hi @patil-suraj and @sgugger I revised the slow and fast BartphoTokenizer variants to satisfy your requirements.
Please have a look and give feedback. Thanks.
cc: @SaulLu @LysandreJik <|||||>Please note that the unsuccessful checks are due to the failed `test_modeling_wav2vec2_conformer.py`, not related to our BartphoTokenizer. @SaulLu
<|||||>> Please note that the unsuccessful checks are due to the failed `test_modeling_wav2vec2_conformer.py`, not related to our BartphoTokenizer. @SaulLu
@SaulLu fixed the wav2vec2_conformer tests on master<|||||>@datquocnguyen We can't merge anything that has any breaking change on the existing tokenizer, as I said before.<|||||>@sgugger Ah, I now see your point. I initially thought the code would be much nicer if I also push a new version of the slow tokenizer. But then it breaks the existing code.
Anyway, the fast tokenizer would totally work without changing the original code of the slow tokenizer (as I already developed the fast_tokenizer_file), I think. I would need a bit of time to roll back the slow tokenizer to its original version.
(cc @SaulLu , @LysandreJik , @patil-suraj and @patrickvonplaten )
<|||||>Hi @SaulLu , @sgugger , @patil-suraj @LysandreJik and @patrickvonplaten
In addition to a fast BARTpho tokenizer, I also revised my code to add fast tokenizers for BERTweet and PhoBERT. Here, changes now do not break existing slow tokenizers. My hacking trick to have the same tokenization strategy for both slow and fast variants is already mentioned [here](https://github.com/huggingface/transformers/pull/17254#discussion_r878687089).
Please have a look and provide feedback. Thanks!
Note that I have no idea how to fix the failed `check_code_quality` test w.r.t. `black`:
```
#!/bin/bash -eo pipefail
black --check --preview examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
would reformat src/transformers/models/bartpho/tokenization_bartpho_fast.py
Oh no! 💥 💔 💥
1 file would be reformatted, 1594 files would be left unchanged.
Exited with code exit status 1
```
However, the target file "tokenization_bartpho_fast.py" is left unchanged in my local machine:
<img width="938" alt="Screen Shot 2022-05-22 at 11 58 19 pm" src="https://user-images.githubusercontent.com/2412555/169706748-f34f1034-93f3-48c4-937d-4126bb119d7c.png">
I think there might be an inconsistency between the `black` version used on my local machine and the one in your CI, so I could not fix it from my side. It would be great if you guys could help fix it. Thanks a lot.
<|||||>@SaulLu Thank you very much for your detailed feedback and suggestion. Before moving forward to revise the code w.r.t. the `add_tokens` feature, it would be great if you could provide some more context/clarification on the intention of using `add_tokens`.
Vietnamese can be considered an isolating language, where the (monolingual) Vietnamese lexicon of syllables contains about 8K syllable types. Using a monolingual vocab of 40K types in `vinai/bartpho-syllable` is far more than enough to cover all possible Vietnamese syllables. I am currently not sure whether the `add_tokens` feature is needed when using our tokenizer/model on Vietnamese data. <|||||>@SaulLu Similarly, for the monolingual models PhoBERT (Vietnamese) and BERTweet (English), vocabularies of 64K subword types should be more than enough, so we might not need the `add_tokens` feature, right? <|||||>Hi @datquocnguyen. It's amazing that you added those two new fast tokenizers. However, we need PRs to be focused on one thing. Would you terribly mind splitting it into three (one for BARTpho, one for PhoBERT and one for BERTweet)?
Thanks a lot!<|||||>> @SaulLu Thank you very much for your detailed feedback and suggestion. Before moving forward to revise the code w.r.t. the add_tokens feature, it would be great if you could provide some more context/clarification on the intention of using add_tokens.
@datquocnguyen I think there are many, many use cases for `add_tokens`. For example, we can imagine a user who would like to fine-tune the model on a task that needs to identify specific tokens, such as `"<QUESTION>"` and `"<ANSWER>"`. This method is convenient because it is unified across all tokenizers. <|||||>@SaulLu Thank you very much for your feedback.
I improved the hacking strategy to handle the issue with newly added tokens.
Assume that the sizes of the multilingual and monolingual vocabularies are X and Y, respectively (here, X > Y, X is the `base_vocab_size` and Y is set at `mask_token_id` in our hacking strategy). Added tokens A1, A2, A3, ... would have original ids of X, X+1, X+2,... that will be mapped into new ids Y, Y+1, Y+2,..., respectively.
I extended the original function `get_added_vocab` into `get_added_vocab_hacking` to extract a dictionary `added_vocab ` {A1: Y, A2: Y+1, A3: Y+2, ...} and another dictionary `id_mapping` of id mapping {X: Y, X+1: Y+1, X+2: Y+2, ...}
```python
def get_added_vocab_hacking(self):
    """
    Returns the added tokens in the vocabulary as a dictionary of token to index.

    Returns:
        `Dict[str, int], Dict[int, int]`: The added tokens, and their original and new ids
    """
    base_vocab_size = self._tokenizer.get_vocab_size(with_added_tokens=False)
    full_vocab_size = self._tokenizer.get_vocab_size(with_added_tokens=True)
    if full_vocab_size == base_vocab_size:
        return {}, {}

    # Tokens in added_vocab should have ids that are equal to or larger than the size of base_vocab
    added_vocab = dict(
        (self._tokenizer.id_to_token(index), index + 1 - base_vocab_size + self.mask_token_id)
        for index in range(base_vocab_size, full_vocab_size)
    )
    id_mapping = dict((index, self._tokenizer.token_to_id(tok)) for tok, index in added_vocab.items())
    return added_vocab, id_mapping
```
So in tokenization, the previous strategy, which maps all ids larger than `mask_token_id` to `unk_token_id`, is now revised to also handle added tokens [as follows](https://github.com/datquocnguyen/transformers/blob/f59b4afeb1af6551feac5d3214bbdf582ebbb098/src/transformers/models/bartpho/tokenization_bartpho_fast.py#L234-L242):
```python
ids = []
for (id, token) in zip(e.ids, e.tokens):
    if id <= self.mask_token_id:
        ids.append(id)
    else:
        if token.strip() in added_vocab:  # handle added tokens
            ids.append(added_vocab[token.strip()])
        else:
            ids.append(self.unk_token_id)
```
In addition, [a preprocessing step that maps ids](https://github.com/datquocnguyen/transformers/blob/f59b4afeb1af6551feac5d3214bbdf582ebbb098/src/transformers/models/bartpho/tokenization_bartpho_fast.py#L174-L197) Y, Y+1, Y+2, ... back into X, X+1, X+2, ... is applied before decoding:
```python
def _decode(
    self,
    token_ids: Union[int, List[int]],
    skip_special_tokens: bool = False,
    clean_up_tokenization_spaces: bool = True,
    **kwargs
) -> str:
    self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False)

    if isinstance(token_ids, int):
        token_ids = [token_ids]

    # Mapping added tokens' ids into their original values
    _, id_mapping = self.get_added_vocab_hacking()
    if len(id_mapping) > 0:
        token_ids = [id_mapping[id] if id in id_mapping else id for id in token_ids]

    text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)

    if clean_up_tokenization_spaces:
        clean_text = self.clean_up_tokenization(text)
        return clean_text
    else:
        return text
```
With this improved strategy, there are two tests that need to be overridden:
```python
def test_tokenizer_fast_store_full_signature(self):
    """
    Override the original test as BartphoTokenizer requires a monolingual_vocab_file rather than a merges_file
    """
```
```python
def test_add_tokens_tokenizer(self):
    """
    Override the original test as in the fast tokenizer, the actual vocab_size is in fact mask_token_id + 1
    """
```
<|||||>> Hi @datquocnguyen. It's amazing that you added those two new fast tokenizers. However we need PRs to be focused on one thing. Would you terribly mind splitting it in three (one for BARTpho, one for PhoBERT and one for BERTweet)?
>
> Thanks a lot!
@sgugger I changed the code, so that this PR is only for BARTpho. cc: @SaulLu <|||||>@SaulLu please help to review [the improved strategy](https://github.com/huggingface/transformers/pull/17254#issuecomment-1139492485) and give feedback. Thank you very much.
Please note that the failed checks are not related to my bartpho tokenizer, except for one check using `black`; however, `black` was successful on my local computer, as detailed at https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067. Could you provide information about the `black` version used in your CI, so that I can replicate the issue on my local computer and fix it? Thanks.
cc: @sgugger
```
#!/bin/bash -eo pipefail
black --check --preview examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
would reformat src/transformers/models/bartpho/tokenization_bartpho_fast.py
Oh no! 💥 💔 💥
1 file would be reformatted, 1594 files would be left unchanged.
Exited with code exit status 1
```
<|||||>You need to install `black==22.3` to have the same results as the CI.<|||||>> You need to install `black==22.3` to have the same results as the CI.
@sgugger You might have missed my https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067 - I already had `black` version 22.3, as detailed in https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067.<|||||>@sgugger Following https://github.com/huggingface/transformers/pull/17254#issuecomment-1133932067, I was not aware that I had to include `--preview` in my command `black -l 119 <py_file_path>`. The code quality check now passes.
There are now 4 failed checks not caused by BartphoTokenizerFast, I believe:
- FAILED tests/models/layoutlmv2/test_tokenization_layoutlmv2.py::LayoutLMv2TokenizationTest::test_saving_tokenizer_trainer
====== 1 failed, 135 passed, 32 skipped, 20 warnings in 142.09s (0:02:22) ======
- FAILED tests/pipelines/test_pipelines_summarization.py::SummarizationPipelineTests::test_small_model_pt
- `run_tests_flax` = 804 failed, 5364 passed, 11260 skipped, 7960 warnings in 1306.16s (0:21:46) ==
- `run_tests_torch` = 823 failed, 10738 passed, 6752 skipped, 5425 warnings in 1554.96s (0:25:54) ==
cc: @SaulLu , @LysandreJik, @patil-suraj and @patrickvonplaten It would be great if you guys can also help review this PR. Thanks a lot.<|||||>You will need to rebase on the main branch to fix the test failures. It's due to the botched release of Protobuf that breaks everything (the main branch has it pinned).<|||||>@sgugger I rebased the main branch with the latest commits from `transformers`.
There are 3 failed checks not relevant to the BartphoTokenizer:
- `Build PR Documentation / build / build_pr_documentation`: urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
- `run_tests_tf`: FAILED tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings
- `run_tests_torch`: FAILED tests/models/glpn/test_feature_extraction_glpn.py::GLPNFeatureExtractionTest::test_call_pytorch
fyi, @SaulLu , @LysandreJik, @patil-suraj and @patrickvonplaten <|||||>Hey @datquocnguyen, thanks a lot for your PR and for working hard on this! I think this is one situation where the code on the hub (detailed below) would fit really well, for the following reasons:
- The tokenizer code that is defined seems to work with the vocabulary you have worked with so far, but is unlikely to work with other vocabularies. At least it won't be the case unless the approach you have taken to generate that vocabulary is very documented, as it is very complex when compared to other tokenizers.
- If I understand correctly, the approach taken could have been handled by a single vocabulary rather than 2. I definitely understand why doing it like this for BARTpho makes sense, but this is unlikely to be the case for other checkpoints leveraging this architecture.
- The code in `transformers` has to be maintained for years, so we want to optimize for heavily tested code; the methods that you add, while definitely useful in order to get the BARTpho tokenization right, are not tested and have `hacking` in their name, which shows that they're targeting something a bit different than what we aim to solve with `transformers`' internal code (but that definitely has its place on the hub!).
Finally, you're moving very fast with your implementations, which is great. However, given the backwards-compatibility approach we have chosen and the fact that we want production-ready code, we'll be slowing things down in this case, unfortunately.
---
The code on the hub is explained [here](https://huggingface.co/docs/transformers/custom_models). It's a way to share models and configurations by sharing their modeling code directly on the hub. When doing `from_pretrained`, you can then fetch the code on the hub. BARTpho is exactly the kind of use-cases we had in mind when working on this feature - we just didn't get to implementing the tokenizer code yet! I think we should work to enable this ASAP and have BARTpho be a first trial.
This would enable you to move as fast as you need, while providing the same functionality to downstream `transformers` users, and will allow you to manage your repositories as you see fit. Would that work for you?<|||||>[@LysandreJik](https://github.com/LysandreJik) Thanks for your detailed feedback.
Before I answer whether the code on the hub would work for me, I am just concerned about your first comment:
> The tokenizer code that is defined seems to work with the vocabulary you have worked with so far, but is unlikely to work with other vocabularies. At least it won't be the case unless the approach you have taken to generate that vocabulary is very documented, as it is very complex when compared to other tokenizers.
So I would try to respond to this first comment. As detailed in [#13788 (comment)](https://github.com/huggingface/transformers/pull/13788#issuecomment-931908671), regarding the use case of BartphoTokenizer: Other languages can thus simply reuse BartphoTokenizer with their `monolingual_vocab_file`. The goal is to reduce the model sizes of existing pre-trained XLM-RoBERTa/mBART models when applying to a smaller set of languages instead of the whole 50/100 languages. Here, you would trim XLM-RoBERTa/mBART to just dealing with subwords in the `monolingual_vocab_file` while not requiring retraining the corresponding multilingual sentencepiece model.
The generation process of the BARTpho vocabulary is not that complicated, as detailed in [#13788 (comment)](https://github.com/huggingface/transformers/pull/13788#issuecomment-931908671). In particular, I apply a pre-trained/existing sentencepiece tokenization model from a pre-trained language model (e.g., XLM-RoBERTa/mBART/...) to segment sentences in a language/task-specific corpus, and then select just the top X (e.g. 40K) subwords to be included in a specific vocabulary for my downstream language/task (here, I named this specific vocabulary the `monolingual_vocab_file`). The existing sentencepiece model as well as the specific vocabulary are both required for a proper tokenization.
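A rough sketch of that selection step, using `sentencepiece` directly (the file names below are placeholders, not the actual files):
```python
from collections import Counter
import sentencepiece as spm

# Existing multilingual sentencepiece model from XLM-RoBERTa/mBART (placeholder path)
sp = spm.SentencePieceProcessor(model_file="sentencepiece.bpe.model")

counter = Counter()
with open("vietnamese_corpus.txt", encoding="utf-8") as f:
    for line in f:
        counter.update(sp.encode(line.strip(), out_type=str))

# Keep the top 40K subword types as the monolingual vocabulary
with open("monolingual_vocab.txt", "w", encoding="utf-8") as f:
    for subword, _ in counter.most_common(40000):
        f.write(subword + "\n")
```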
Regarding BartphoTokenizerFast, the process of generating the `tokenizer_file` is that: (1) I load the slow BartphoTokenizer, (2) call the function `convert_slow_tokenizer` to convert it into a fast variant, and (3) then save the fast one. This might be a bit complicated for others as it is not well-documented, but I could simply abandon the use of `tokenizer_file` in BartphoTokenizerFast. Thus BartphoTokenizerFast would just create and convert a slow tokenizer BartphoTokenizer to build the backend.
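i.e., roughly the following (a sketch, assuming the converter added in this PR is registered for BartphoTokenizer):
```python
from transformers import BartphoTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

slow_tokenizer = BartphoTokenizer.from_pretrained("vinai/bartpho-syllable")  # (1) load the slow tokenizer
fast_backend = convert_slow_tokenizer(slow_tokenizer)                        # (2) convert it to a tokenizers.Tokenizer
fast_backend.save("tokenizer.json")                                          # (3) save it as the tokenizer_file
```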
I believe there are many use cases in which BartphoTokenizer/BartphoTokenizerFast would fit.
[@SaulLu](https://github.com/SaulLu) As you have been playing around with BartphoTokenizer, is there any comment from your side regarding [@LysandreJik](https://github.com/LysandreJik)'s first point? Thank you both.<|||||>@LysandreJik
> If I understand correctly, the approach taken could have been handled by a single vocabulary rather than 2.
I am not sure this is the case.
The pre-trained (multilingual) sentencepiece model and the specific monolingual_vocab_file are both required for proper tokenization: the multilingual sentencepiece model is used for subword tokenization while all subwords that do not appear in the monolingual_vocab_file are converted into an unknown token. <|||||>@LysandreJik I did dig into the code on the hub, and am wondering whether I understand your approach correctly:
- Instead of merging `tokenization_bartpho_fast.py` into the main `transformers` branch, we now just need to upload/push it to `https://huggingface.co/vinai/bartpho-syllable/tree/main`.
- There would be an upcoming feature for `sharing a custom tokenizer`, with which I should register BartphoTokenizerFast from `vinai/bartpho-syllable` or `https://huggingface.co/vinai/bartpho-syllable/blob/main/tokenization_bartpho_fast.py`. It would then allow users to automatically download or import `tokenization_bartpho_fast.py` and use BartphoTokenizerFast via AutoTokenizer with the existing features in the main `transformers` branch.
So what I should do is to wait until you guys complete that `sharing a custom tokenizer` feature and then I would just need to have some piece of code for registering BartphoTokenizerFast with `register_for_auto_class('AutoTokenizer')` and it would run as the same as merged into the main `transformers` branch, wouldn't it?
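i.e., something along these lines, if I read the guide correctly (a sketch only; it assumes the tokenizer side will mirror the existing custom-model API):
```python
from tokenization_bartpho_fast import BartphoTokenizerFast  # local file living inside the model repo

BartphoTokenizerFast.register_for_auto_class("AutoTokenizer")
tokenizer = BartphoTokenizerFast.from_pretrained("vinai/bartpho-syllable")
tokenizer.push_to_hub("vinai/bartpho-syllable")  # uploads tokenization_bartpho_fast.py and the auto_map entry

# downstream users would then only need:
# AutoTokenizer.from_pretrained("vinai/bartpho-syllable", trust_remote_code=True)
```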
Thanks.
cc: @SaulLu <|||||>For a wider context where many subwords appearing in the "merges" file do not appear in the "vocab" file as in CTRL, FlauBERT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), it is likely impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy.
Thus, the trick used in BartphoTokenizerFast would come into play, and help solve this issue. If merged, it is then straightforward to develop similar fast tokenizers for CTRL, FlauBERT, PhoBERT and BERTweet.
It would be great if @LysandreJik @SaulLu @patrickvonplaten or @sgugger could provide concrete feedback on whether this PR will have a chance to be merged. If this PR could not be merged, then what is the status of the "sharing a custom tokenizer on the hub" feature (e.g. tentative date for releasing this feature) ?
Thank you very much.<|||||>Hi @datquocnguyen ,
I echo [Lysandre's answer](https://github.com/huggingface/transformers/pull/17254#issuecomment-1143221043): I thank you for working very hard for this PR :hugs: and I also think it would be a very good fit for the feature on the hub. And this addition will be really useful for the community!
> It would run as the same as merged into the main transformers branch, wouldn't it?
Yes, the idea is that it would be (almost) identical to what you have with transformers! I don't know when it will be released (as I'm not directly working on it), but it seems to be a high-priority feature!
> For a wider context where many subwords appearing in the "merges" file do not appear in the "vocab" file as in CTRL, FlauBERT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), it is likely impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy.
Indeed, you raise a very good point. I have also observed that there are tokens listed in the merge rules that do not appear in the vocabulary for `FlauBERT` - and I believe you that this is also the case for `CTRL`, `PhoBERT` and `BERTweet`. Nevertheless, from my point of view, looking at `FlauBERT`'s code, the fix that seems to me the most suitable for our API (tokenizer slow → converter → tokenizer fast ) would be to clean up the merge file during the conversion step. This technique would indeed avoid having to modify the tokenizer fast core method(s). I've attached a snippet below that illustrates this idea. Am I missing something by thinking that this would achieve the desired final behaviour?
------------
_Snippet to illustrate the merges file "clean up" that I have in mind_
To test this snippet, we need to retrieve locally the `vocab.json` and `merges.txt` files of `FlauBERT`, for example by doing `git clone https://huggingface.co/flaubert/flaubert_base_cased`.
Then, if we try to initialize a fast tokenizer (pure, without the transformers tokenizer wrapper for the moment), we observe that it raises an error
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
import json

file_vocab = "flaubert_base_cased/vocab.json"
file_merges = "flaubert_base_cased/merges.txt"

with open(file_vocab) as f:
    vocab = json.load(f)

with open(file_merges) as f:
    merges = f.readlines()

merges = [merge.split(" ") for merge in merges]
merges = [(merge[0], merge[1]) for merge in merges if len(merge) == 3]

tokenizer = Tokenizer(
    BPE(
        vocab,
        merges,
        unk_token="<unk>",
        end_of_word_suffix="</w>",
        fuse_unk=True,
    )
)
```
Error message:
```bash
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
[<ipython-input-26-b70d9b50bd17>](https://localhost:8080/#) in <module>()
5 unk_token="<unk>",
6 end_of_word_suffix="</w>",
----> 7 fuse_unk=True,
8 )
9 )
Exception: Error while initializing BPE: Token `trouvécap` out of vocabulary
```
But by cleaning the merges file we can initialize the tokenizer without errors
```python
# ------ Clean up step ------
new_merges = []
for token_1, token_2 in merges:
    if token_1 not in vocab or token_2 not in vocab or f"{token_1}{token_2}" not in vocab:
        print(token_1, token_2)
        continue
    new_merges.append((token_1, token_2))
# ---------------------------

tokenizer = Tokenizer(
    BPE(
        vocab,
        new_merges,
        unk_token="<unk>",
        end_of_word_suffix="</w>",
        fuse_unk=True,
    )
)
```
<|||||>@SaulLu Thanks for your response.
> Am I missing something by thinking that this would achieve the desired final behaviour?
Cleaning the "merges" file will definitely result in different encoding outputs from the slow and fast tokenizers. For example, in the case of FlauBERT, the slow and fast tokenizers will encode/tokenize any word containing the sub-string `trouvécap` differently.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still looking forward to using the upcoming "sharing a custom tokenizer" feature =) <|||||>> Cleaning the "merges" file will definitely result in different encoding outputs from the slow and fast tokenizers. For example, in the case of FlauBERT, the slow and fast tokenizers will encode/tokenize any word containing the sub-string trouvécap differently.
I'm sorry, I didn't react to your message! You are right, my proposal will not be exactly the same as the current slow version.
One specific thing to know about this particular case of FlauBERT is that currently the slow tokenizer doesn't behave exactly like FlauBERT's original tokenizer which used [FastBPE](https://github.com/glample/fastBPE).
For example, `trouvécaptivantes` is not tokenized in the same way:
```
Transformers version: ['<s>', '<unk>', 'tiv', 'antes</w>', '</s>']
FastBPE version: ['<s>', 'trouv', 'écap', 'tiv', 'antes</w>', '</s>']
```
Ideally, we would like to have an exact match, but in this case I think the changes that would have to be made to achieve this would be very cumbersome compared to the difference observed (`trouvécaptivantes` is not a word in French but the concatenation of 2 words, without typography we should have had `trouvé captivantes`). All that to say, it's very very complicated to have perfect matching between different tokenization libraries and maintaining long-term hacks is not easy and that's why I think the sharing feature is really a perfect use case for your proposal! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> I am still looking forward to using the upcoming "sharing a custom tokenizer" feature =)
https://github.com/datquocnguyen/transformers/tree/fast_tokenizers_BARTpho_PhoBERT_BERTweet also contains fast tokenizer implementations of PhoBERT and BERTweet, which others might find useful for developing similar fastBPE-based tokenizers for other models such as CTRL & FlauBERT.<|||||>Hi @SaulLu @LysandreJik , I am wondering about the status/progress of the "sharing a custom tokenizer" feature on the hub. Is there anything I can help with? This feature would make it easier for BERTweet, PhoBERT, BARTpho and the like to be used with their custom fast tokenizers. Thank you.<|||||>The custom tokenizer should now work correctly! @ArthurZucker, if you have a spare cycle, could you look into supporting the tokenizers added here by @datquocnguyen with code on the hub using the custom tokenizers?
A guide showing how to is [available here](https://huggingface.co/docs/transformers/custom_models#sending-the-code-to-the-hub). Thanks!<|||||>Hi @LysandreJik @ArthurZucker @SaulLu , I followed the guide, and can confirm that it works. For example, the following piece of code results in a correct fast tokenizer BertTweetTokenizerFast:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased", trust_remote_code=True, revision="ddfcf0409600519d6f8907531a65151f39be5c01")
print(tokenizer.__class__.__name__)
```
The current issue is that the [examples](https://github.com/huggingface/transformers/tree/main/examples) have not yet included the option `trust_remote_code`, so they produce errors. E.g:
```
Traceback (most recent call last):
File "run_ner.py", line 630, in <module>
main()
File "run_ner.py", line 358, in main
add_prefix_space=True,
File "/home/sonla/workspace/transformers/src/transformers/models/auto/tokenization_auto.py", line 587, in from_pretrained
f"Loading {pretrained_model_name_or_path} requires you to execute the tokenizer file in that"
ValueError: Loading /home/sonla/workspace/BERTweet/bertweet-covid19-base-uncased requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
```
To handle this error/issue, I have to modify the `run_ner.py` to include the option `trust_remote_code` and add this option to the tokenizer loading part. And the modified `run_ner.py` file now runs properly as before.
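Concretely, the local change is roughly the following (a sketch only; the `trust_remote_code` field name is my own choice and does not exist in the example script yet):
```python
from dataclasses import dataclass, field
from transformers import AutoTokenizer

@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="vinai/bertweet-covid19-base-uncased")
    trust_remote_code: bool = field(
        default=False,
        metadata={"help": "Allow executing custom tokenizer/model code hosted on the Hub."},
    )

model_args = ModelArguments(trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_args.model_name_or_path,
    use_fast=True,
    trust_remote_code=model_args.trust_remote_code,
)
```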
I am wondering whether there is any faster approach to handle this issue without modifying each of the examples? Thanks.<|||||>Oh great question @datquocnguyen, and thanks for taking care of the implementation! Really cool to see it works well.
@sgugger, what do you think regarding the examples? Should we add a `TrainingArgument` to enable specifying models with remote code? WDYT?<|||||>It should be one of the `ModelArguments` defined in the example (where the rest of the args, like revision etc. lie) but yes, I don't see why not!<|||||>The `ModelArguments` should have `trust_remote_code_model` and `trust_remote_code_tokenizer` separately for the model and tokenizer loading, respectively, shouldn't it? For example:
```
tokenizer_name_or_path = model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path
if config.model_type in {"bloom", "gpt2", "roberta"}:
    tokenizer = AutoTokenizer.from_pretrained(
        tokenizer_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=True,
        revision=model_args.model_revision,
        trust_remote_code=trust_remote_code_tokenizer,  # For tokenizer
        use_auth_token=True if model_args.use_auth_token else None,
        add_prefix_space=True,
    )
else:
    tokenizer = AutoTokenizer.from_pretrained(
        tokenizer_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=True,
        revision=model_args.model_revision,
        trust_remote_code=trust_remote_code_tokenizer,  # For tokenizer
        use_auth_token=True if model_args.use_auth_token else None,
    )

model = AutoModelForTokenClassification.from_pretrained(
    model_args.model_name_or_path,
    from_tf=bool(".ckpt" in model_args.model_name_or_path),
    config=config,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    trust_remote_code=trust_remote_code_model,  # For model
    use_auth_token=True if model_args.use_auth_token else None,
    ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,
)
```<|||||>No, one is enough. Users that want more finegrained control can just modify the examples to suit their needs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,253 | closed | Adding CVT Model | # What does this PR do?
Add CvT Model for Vision Classification
Fixes #13158
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @LysandreJik | 05-14-2022 13:31:53 | 05-14-2022 13:31:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge can you review it and suggest any further changes.
I'm not sure how you want to modify the `cls_token` part, since modifying it in the `CvtStage` section (stopping the split) will change the shape of the hidden states (they are stored in all hidden states, and a 4D shape is passed to the CNN in the next layer).
I leave it to you how you want to change it and pass it through the different classes.
I have run `make fix-copies` and done the docstrings part. I think everything is done apart from the change you wanted to make for `cls_token`.
|
transformers | 17,252 | closed | torch.cuda.amp.autocast not working in huggingface nlp models. | ### System Info
```shell
Hi, I am trying to fine-tune my roberta-base model on the RTE dataset using fp16.
But it seems `torch.cuda.amp.autocast` does not work with Hugging Face NLP models. The output of the model is `torch.float32` and there is no memory saving.
My code is below.
Also, is there any way of showing an example of training NLP models using fp16 without the Trainer?
```
### Who can help?
@LysandreJik @JetRunner
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch.nn as nn
import time
import torch
from datasets import load_dataset
from transformers import get_scheduler
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_metric
import torch.multiprocessing as mp
import torch.distributed as dist
import argparse
import os
from torch.optim import AdamW
from torch.cuda.amp import GradScaler, autocast

parser = argparse.ArgumentParser(description="PyTorch nlp Training")
parser.add_argument("--log", default="./test.txt", type=str)
parser.add_argument("--dataset", default="rte", type=str)
parser.add_argument("--lr", default=2e-5, type=float)
parser.add_argument("--epochs", default=20, type=int)
parser.add_argument("--task", default="rte", type=str)
parser.add_argument("--batches", default=8, type=int)
parser.add_argument("--workers", default=4, type=int)

task_to_keys = {
    "cola": ("sentence", None),
    "mnli": ("premise", "hypothesis"),
    "mnli-mm": ("premise", "hypothesis"),
    "mrpc": ("sentence1", "sentence2"),
    "qnli": ("question", "sentence"),
    "qqp": ("question1", "question2"),
    "rte": ("sentence1", "sentence2"),
    "sst2": ("sentence", None),
    "stsb": ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}


def main():
    args = parser.parse_args()
    mp.spawn(main_worker, nprocs=args.workers, args=(args.workers, args))


def main_worker(rank, process_num, args):
    dist.init_process_group(
        backend="nccl", init_method="tcp://127.0.0.1:1237", world_size=4, rank=rank
    )
    # dataset / dataloader
    os.environ["TOKENIZERS_PARALLELISM"] = "true"
    train_dataset = load_dataset("glue", args.task, split="train")
    val_dataset = load_dataset("glue", args.task, split="validation")
    sentence1_key, sentence2_key = task_to_keys[args.task]
    tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)

    # sentence1_key, sentence2_key = task_to_keys["cola"]
    def encode(examples):
        if sentence2_key is not None:
            return tokenizer(
                examples[sentence1_key],
                examples[sentence2_key],
                truncation=True,
                padding="max_length",
                max_length=128,
            )
        return tokenizer(
            examples[sentence1_key],
            truncation=True,
            padding="max_length",
            max_length=128,
        )

    train_dataset = train_dataset.map(encode, batched=True)
    val_dataset = val_dataset.map(encode, batched=True)
    val_dataset = val_dataset.map(
        lambda examples: {"labels": examples["label"]}, batched=True
    )
    train_dataset = train_dataset.map(
        lambda examples: {"labels": examples["label"]}, batched=True
    )
    train_dataset.set_format(
        type="torch", columns=["input_ids", "labels", "attention_mask"]
    )
    val_dataset.set_format(
        type="torch", columns=["input_ids", "labels", "attention_mask"]
    )
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    train_dataloader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=8,
        num_workers=12,
        pin_memory=True,
        drop_last=True,
        shuffle=False,
        sampler=train_sampler,
    )
    val_dataloader = torch.utils.data.DataLoader(
        val_dataset,
        batch_size=8,
        num_workers=12,
        pin_memory=True,
        drop_last=True,
        shuffle=False,
    )
    # metric
    metric_mat = load_metric("glue", args.task)
    metric_acc = load_metric("accuracy")
    # model
    epochs = args.epochs
    model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
    model = model.to(rank)
    optimizer = AdamW(
        [{"params": model.parameters()}],
        lr=args.lr,
    )
    model = torch.nn.parallel.DistributedDataParallel(model)
    lr_scheduler = get_scheduler(
        name="polynomial",
        optimizer=optimizer,
        num_warmup_steps=500,
        num_training_steps=epochs * len(train_dataloader),
    )
    criterion = nn.CrossEntropyLoss().to(rank)
    scaler = GradScaler()
    for epoch in range(epochs):
        model.train()
        train_loss = 0.0
        train_acc1 = 0.0
        time_avg = 0.0
        train_sampler.set_epoch(epoch)
        for i, batch in enumerate(train_dataloader):
            optimizer.zero_grad()
            start = time.time()
            batch = {k: v.to(rank) for k, v in batch.items()}
            with autocast():
                outputs = model(batch["input_ids"], batch["attention_mask"])
                logits = outputs.logits
                # batch["labels"] = batch["labels"].type(torch.float16)
                loss = criterion(logits, batch["labels"])
            pred = torch.argmax(logits, dim=1)
            acc = metric_acc.compute(predictions=pred, references=batch["labels"])
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()


if __name__ == "__main__":
    main()  # entry point (needed to actually launch the workers)
```
Run the code with 4 GPUs. Actually, even with one GPU it shows no difference.
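For reference, this is the quick check I run inside the training loop to see whether autocast is actually active (it reuses `model` and `batch` from the script above):
```python
with autocast():
    outputs = model(batch["input_ids"], batch["attention_mask"])
    print(torch.is_autocast_enabled())  # expected: True
    print(outputs.logits.dtype)         # expected: torch.float16 when autocast applies to the final Linear
```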
### Expected behavior
```shell
Using PyTorch amp.autocast should save memory and gain efficiency, but it seems it does not.
```
| 05-14-2022 10:42:16 | 05-14-2022 10:42:16 | |
transformers | 17,251 | closed | Support MobileBert model in transformers.onnx package | ### Feature request
Just wondering, would it be possible to support MobileBERT in the transformers.onnx package? Or is there any quick hack that we can try to export the MobileBERT model from Hugging Face to ONNX?
Thanks.
The model I am trying is: `google/mobilebert-uncased`
And the command : `python -m transformers.onnx --model=google/mobilebert-uncased onnx/`
`raise KeyError(
KeyError: "mobilebert is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'flaubert', 'marian', 'm2m-100', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-j', 'gpt-neo', 'layoutlm', 'electra', 'vit', 'beit', 'blenderbot', 'blenderbot-small'] are supported. If you want to support mobilebert please propose a PR or open up an issue."`
### Motivation
Trying to get the MobileBERT model exported to ONNX format for further investigation and for use in some ORT mobile scenarios.
### Your contribution
A PR is not available for now. | 05-13-2022 22:30:09 | 05-13-2022 22:30:09 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It seems it is supported now: https://github.com/huggingface/transformers/pull/17029<|||||>Thanks for the pointer to this pr! |
transformers | 17,250 | closed | Automatically sort auto mappings | # What does this PR do?
This PR introduces a new script to automatically sort all the mappings in the auto modules alphabetically. It fixes/checks it with the usual `make style`/`make quality`/`make fixup` and a new step in the check code quality job of the CI enforces it has properly been applied. | 05-13-2022 19:36:45 | 05-13-2022 19:36:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,249 | closed | Fix test_model_parallelization | # What does this PR do?
When the number of GPUs is greater than ```len(model.device_map.keys())```, an exceptional case happens.
Fixes #17248
## Who can review?
@patrickvonplaten
| 05-13-2022 18:57:02 | 05-13-2022 18:57:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Change is fine by me! @sgugger @stas00 what do you think? |
transformers | 17,248 | closed | gpt2 model parallelization test failed | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-1015-gcp-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
### Who can help?
In main branch, ``` pytest tests/models/gpt2/test_modeling_gpt2.py ``` failed at ```test_model_parallelization```.
```
def test_model_parallelization(self):
    ...
    # Assert that the memory use on all devices is higher than it was when loaded only on CPU
    for n in range(torch.cuda.device_count()):
>       self.assertGreater(memory_after_parallelization[n], memory_at_start[n])
E       AssertionError: 0 not greater than 0

tests/test_modeling_common.py:2069: AssertionError
```
I'm implementing model parallelization on OPT, but there is the same problem. (https://github.com/huggingface/transformers/pull/17245)
However, it works when using the Trainer.
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
``` pytest tests/models/gpt2/test_modeling_gpt2.py ```
### Expected behavior
```shell
memory_at_start : [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
memory_after_parallelization : [179, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 0, 0, 0, 0]
The number of my GPU devices is 16, but len(self.h) in gpt2 is 12.
```
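For reference, a small sketch that reproduces the uneven device map (assuming the helper keeps its current signature):
```python
from transformers.utils.model_parallel_utils import get_device_map

# 12 GPT-2 blocks spread over 16 GPUs: ceil(12 / 16) == 1 block per device,
# so only devices 0-11 receive layers and devices 12-15 never get any weights.
device_map = get_device_map(12, list(range(16)))
print(len(device_map))  # 12, i.e. smaller than torch.cuda.device_count()
```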
| 05-13-2022 18:33:04 | 05-13-2022 18:33:04 | |
transformers | 17,247 | closed | Add support for pretraining recurring span selection to Splinter | This pull request aims to add support for recurring span selection pretraining as proposed by the authors of [Splinter](https://arxiv.org/abs/2101.00438). The pretraining objective differs from the question answering task in a couple of ways:
- There is not a question, but a number of question tokens replacing recurring spans.
- The shape of `start_positions` and `end_positions` is `(batch_size, num_questions)` instead of `(batch_size, )`.
- The shape of `start_logits` and `end_logits` is `(batch_size, num_questions, sequence_length)` instead of `(batch_size, sequence_length)`.
- The loss should ignore zero positions, i.e. `ignore_index=0`. Zeros are used in the original code to denote padded question tokens and their start and end positions.
To this end, we added `SplinterForPreTraining`.
Minimal training example:
```python
import torch
from torch.utils.data import IterableDataset
from transformers import SplinterConfig
from transformers import SplinterForPreTraining
from transformers import Trainer
from transformers import TrainingArguments


class QuestionAnsweringDataset(IterableDataset):
    def __iter__(self):
        yield {
            "input_ids": torch.tensor([101, 104, 123, 456, 104, 234, 567, 102]),
            "attention_mask": torch.tensor([1, 1, 1, 1, 1, 1, 1, 1]),
            "token_type_ids": torch.tensor([0, 0, 0, 0, 0, 0, 0, 0]),
            "question_positions": torch.tensor([1, 4]),
            "start_positions": torch.tensor([2, 5]),
            "end_positions": torch.tensor([3, 6]),
        }


config = SplinterConfig()
model = SplinterForPreTraining(config)
dataset = QuestionAnsweringDataset()

trainer = Trainer(
    model=model,
    args=TrainingArguments(max_steps=3, output_dir="/tmp"),
    train_dataset=dataset,
)
trainer.train()
```
CC @tobigue
@patil-suraj @LysandreJik @patrickvonplaten @oriram
| 05-13-2022 18:25:20 | 05-13-2022 18:25:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for reviewing the PR! :) I added the suggested changes.
@jvcop will add an answer to this https://github.com/huggingface/transformers/pull/17247#discussion_r873691809<|||||>As far as I can see all comments have been addressed - merging! Thanks a lot for your work here @jvcop !<|||||>Thanks a lot for the fast review! And to @tobigue who was an integral part of this :tada: |
transformers | 17,246 | closed | Add PR title to push CI report | # What does this PR do?
As title. The current effect looks like
<img width="512" alt="Screenshot 2022-05-13 191853" src="https://user-images.githubusercontent.com/2521628/168335363-ddb06fb3-4c3e-40ea-8b3a-2a82e9402c38.png">
I need to figure out a way to add a link. | 05-13-2022 17:19:49 | 05-13-2022 17:19:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,245 | closed | Add OPT model parallelize | # Model Parallelism for OPT
Added ```parallelize``` and ```deparallelize``` methods on ```OPTDecoder```, ```OPTModel``` and ```OPTForCausalLM```.
Referred to ```gpt2``` model parallelize (https://github.com/huggingface/transformers/pull/8696).
Fixes #17240
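Usage sketch (mirroring the existing GPT-2 API; the checkpoint and device map below are only an illustration):
```python
from transformers import OPTForCausalLM

model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

# Illustrative device map: GPU id -> decoder layer indices (opt-350m has 24 decoder layers)
device_map = {0: list(range(0, 12)), 1: list(range(12, 24))}
model.parallelize(device_map)  # spread the decoder layers over GPUs 0 and 1
# ... run generation / fine-tuning ...
model.deparallelize()          # move everything back to CPU and free GPU memory
```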
## Who can review?
Let me know if you need any modifications, @patrickvonplaten | 05-13-2022 16:42:26 | 05-13-2022 16:42:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17245). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @lkm2835,
Thanks for your PR - however this way of parallelizing the model is a bit outdated. The recommended way of using the model in parallel is to use `accelerate` see: https://twitter.com/huggingface/status/1524783489593360385
We'll soon have this natively supported in `transformers` as well cc @sgugger <|||||>Then, is it better to close this PR?<|||||>> natively
I'm afraid so! There are lots of other "Good first issues" or "Good second issues" though if you'd like to give it a try :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,244 | closed | Error in Loading the Feature extractor | ### System Info
```shell
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-05-13 16:11:52.801587: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.18.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
```
### Who can help?
@sgugger @LysandreJik

I am facing this error and am not able to figure out how to get past it.
Until yesterday I was running the same code without any errors, but today when I ran it to reproduce the results, this happened. Can you please help?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
from hugsvision.nnet.VisionClassifierTrainer import VisionClassifierTrainer
from transformers import AutoFeatureExtractor, SwinForImageClassification
from transformers import ViTFeatureExtractor, ViTForImageClassification
from transformers import BeitFeatureExtractor, BeitForImageClassification
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned

trainer = VisionClassifierTrainer(
    model_name = "ViT-Model",
    train = train,
    test = test,
    output_dir = "./out/",
    max_epochs = 5,
    batch_size = 16,  # On RTX 2080 Ti
    lr = 0.0003,
    fp16 = True,
    model = ConvNextForImageClassification.from_pretrained(
        huggingface_model,
        num_labels = 5,
        label2id = label2id,
        id2label = id2label,
        use_auth_token=True,
        ignore_mismatched_sizes=True,
    ),
    feature_extractor = ConvNextFeatureExtractor.from_pretrained(
        huggingface_model,
    ),
)
```
### Expected behavior
```shell
I want to know why this error arose, which until yesterday did not even exist.
Just to note, HugsVision did not update their code base in the past day, hence I reached out to you all for help.
Thanks in advance
```
| 05-13-2022 16:18:34 | 05-13-2022 16:18:34 | This is fixed by #17239
Will make a patch for PyPi.<|||||>Patched on pypi! |
transformers | 17,243 | closed | install dev. version of accelerate in docker file | # What does this PR do?
Following an offline discussion with @muellerzr regarding this CI failure
```
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_auto_batch_size_finder
(line 776) ImportError:
```
I updated the Docker file in this PR and will try to build the new Docker image once this PR is merged.
| 05-13-2022 16:10:58 | 05-13-2022 16:10:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,242 | closed | Quick fix for push CI report channel | # What does this PR do?
As title.
I can definitely use
```
if ci_event == "scheduled":
...
else:
...
```
Let me know if you prefer that approach, @LysandreJik .
(I updated the secret, even though the channel ID might just be the same as before.) | 05-13-2022 15:58:16 | 05-13-2022 15:58:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,241 | closed | Question answering pipeline: error for long text sequences when `max_seq_len` is specified | ### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
**code:**
```python
#!pip install transformers==4.16.0
!pip install transformers==4.17.0
from transformers import pipeline
context = 100 * "The quick brown fox jumps over the lazy dog. "
qa_pipeline = pipeline("question-answering", max_seq_len=2000)
qa_pipeline(question="what does the fox do?", context=context)
```
**exception traceback:**
```
No model was supplied, defaulted to distilbert-base-cased-distilled-squad (https://huggingface.co/distilbert-base-cased-distilled-squad)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-4-d1e4a860038f>](https://localhost:8080/#) in <module>()
1 qa_pipeline = pipeline("question-answering", max_seq_len=2000)
----> 2 qa_pipeline(question="what does the fox do?", context=context)
10 frames
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/question_answering.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)
249 examples = self._args_parser(*args, **kwargs)
250 if len(examples) == 1:
--> 251 return super().__call__(examples[0], **kwargs)
252 return super().__call__(examples, **kwargs)
253
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1025 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1026 else:
-> 1027 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
1028
1029 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1047 all_outputs = []
1048 for model_inputs in self.preprocess(inputs, **preprocess_params):
-> 1049 model_outputs = self.forward(model_inputs, **forward_params)
1050 all_outputs.append(model_outputs)
1051 outputs = self.postprocess(all_outputs, **postprocess_params)
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in forward(self, model_inputs, **forward_params)
942 with inference_context():
943 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 944 model_outputs = self._forward(model_inputs, **forward_params)
945 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
946 else:
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/question_answering.py](https://localhost:8080/#) in _forward(self, inputs)
369 example = inputs["example"]
370 model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
--> 371 start, end = self.model(**model_inputs)[:2]
372 return {"start": start, "end": end, "example": example, **inputs}
373
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, start_positions, end_positions, output_attentions, output_hidden_states, return_dict)
853 output_attentions=output_attentions,
854 output_hidden_states=output_hidden_states,
--> 855 return_dict=return_dict,
856 )
857 hidden_states = distilbert_output[0] # (bs, max_query_len, dim)
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
546
547 if inputs_embeds is None:
--> 548 inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
549 return self.transformer(
550 x=inputs_embeds,
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/distilbert/modeling_distilbert.py](https://localhost:8080/#) in forward(self, input_ids)
131 position_embeddings = self.position_embeddings(position_ids) # (bs, max_seq_length, dim)
132
--> 133 embeddings = word_embeddings + position_embeddings # (bs, max_seq_length, dim)
134 embeddings = self.LayerNorm(embeddings) # (bs, max_seq_length, dim)
135 embeddings = self.dropout(embeddings) # (bs, max_seq_length, dim)
RuntimeError: The size of tensor a (1009) must match the size of tensor b (512) at non-singleton dimension 1
```
### Expected behavior
```shell
Run through and produce a result similar to the following, like with transformers 4.16.0
{'answer': 'The quick brown fox jumps over the lazy dog',
'end': 3418,
'score': 0.017251048237085342,
'start': 3375}
```
| 05-13-2022 15:45:23 | 05-13-2022 15:45:23 | I think this is expected, not a bug. The `max_seq_length` of `distilbert` is 512. Setting `max_seq_length` to be larger than 512 essentially disables the truncation. When you feed a text longer than 512 tokens, it will raise this error<|||||>No, according to the documentation of `transformers.QuestionAnsweringPipeline.__call__`, the parameter `max_seq_len` is the maximum length of the total sentence (context + question) *after tokenization*.
The context will be split in several chunks if needed, i.e., if it is longer than the maximum sequence length of the model.
And this is the way the pipeline behaved up to `transformers==4.17.0`.
This feature is useful to process long sequences (longer than model length).<|||||>@ATroxler
I am not sure this was changed since 4.17.0, since the diff doesn't concern this parameter
```diff
diff --git a/src/transformers/pipelines/question_answering.py b/src/transformers/pipelines/question_answering.py
index efab83b92..bbffa3471 100644
--- a/src/transformers/pipelines/question_answering.py
+++ b/src/transformers/pipelines/question_answering.py
@@ -5,10 +5,9 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union
import numpy as np
from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features
-from ..file_utils import PaddingStrategy, add_end_docstrings, is_tf_available, is_torch_available
from ..modelcard import ModelCard
from ..tokenization_utils import PreTrainedTokenizer
-from ..utils import logging
+from ..utils import PaddingStrategy, add_end_docstrings, is_tf_available, is_torch_available, logging
from .base import PIPELINE_INIT_ARGS, ArgumentHandler, ChunkPipeline
@@ -302,11 +301,6 @@ class QuestionAnsweringPipeline(ChunkPipeline):
]
)
- # keep the cls_token unmasked (some models use it to indicate unanswerable questions)
- if self.tokenizer.cls_token_id is not None:
- cls_index = np.nonzero(encoded_inputs["input_ids"] == self.tokenizer.cls_token_id)
- p_mask[cls_index] = 0
-
features = []
for span_idx in range(num_spans):
input_ids_span_idx = encoded_inputs["input_ids"][span_idx]
@@ -316,6 +310,11 @@ class QuestionAnsweringPipeline(ChunkPipeline):
token_type_ids_span_idx = (
encoded_inputs["token_type_ids"][span_idx] if "token_type_ids" in encoded_inputs else None
)
+ # keep the cls_token unmasked (some models use it to indicate unanswerable questions)
+ if self.tokenizer.cls_token_id is not None:
+ cls_indices = np.nonzero(np.array(input_ids_span_idx) == self.tokenizer.cls_token_id)[0]
+ for cls_index in cls_indices:
+ p_mask[span_idx][cls_index] = 0
submask = p_mask[span_idx]
if isinstance(submask, np.ndarray):
submask = submask.tolist()
@@ -399,8 +398,11 @@ class QuestionAnsweringPipeline(ChunkPipeline):
end_ = np.where(undesired_tokens_mask, -10000.0, end_)
# Normalize logits and spans to retrieve the answer
- start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
- end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
+ start_ = np.exp(start_ - start_.max(axis=-1, keepdims=True))
+ start_ = start_ / start_.sum()
+
+ end_ = np.exp(end_ - end_.max(axis=-1, keepdims=True))
+ end_ = end_ / end_.sum()
if handle_impossible_answer:
min_null_score = min(min_null_score, (start_[0, 0] * end_[0, 0]).item())
```
However there was indeed a bugfix in 4.17.0 (from 4.16.0) where `max_seq_len` was not passed (so it was basically ignored).
When ignored (or not passed) `max_seq_len == min(self.tokenizer.max_seq_len, 384)` even before that (not sure which version bug much earlier) it was always 384.
`max_seq_len` corresponds to the maximum length of a single chunk (question + context chunk), and the pipeline will indeed chunk the context if the full context is too long.
So @sijunhe is indeed correct here.
Rereading the documentation of this parameter I can understand the confusion.
```
The maximum length of the total sentence (context + question) after tokenization. The context will be
split in several chunks (using `doc_stride`) if needed.
```
Would something like :
```
The maximum length of the total sentence (context + question) of each chunk passed to the model. If the context is too large, it will be split in several chunks (using `doc_stride` as overlap length) if needed.
```
Be more understandable ?
The `(context + question)` wants to convey that if the question is taking too much space, then there's less room for context so you need to be careful about extremely long questions.
Is the problem clearer to you ? Do you have any suggestions to improve even further the docs ?
Also if I understand correctly you want to limit the amount of context fed to your model right (not just chunking but really ignoring part of the text you send, which usually we try to avoid since you are sending it :) ) ? May I ask why you want to do so ? Are you using documents of arbitrary length and know that the answer should be in the beginning for instance ?
The idea is just to figure out how we could maybe cook up a nice option for this use case (while keeping the others understandable too)
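To make that concrete, here is a minimal sketch of how the two chunking-related arguments are passed (the values below are purely illustrative):
```python
from transformers import pipeline

qa_pipeline = pipeline("question-answering")
long_context = 100 * "The quick brown fox jumps over the lazy dog. "
qa_pipeline(
    question="what does the fox do?",
    context=long_context,
    max_seq_len=384,   # maximum length of each (question + context chunk) fed to the model
    doc_stride=128,    # overlap between consecutive context chunks
)
```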
<|||||>Many thanks, @Narsil , for clarification.
Indeed, changing the documentation in the suggested manner would avoid the confusion.
My use case is to use the QA pipeline as a pre-processor for a sequence classification task:
* STEP 1 - apply QA pipeline to extract from long input sequences the parts relevant to the task.
* STEP 2 - apply sequence classification to the extract
With my interpretation of the existing documentation, I had been under the impression that the sequences are truncated by default to a length of 384, which I wanted to override by specifying `max_seq_len=2000`.
Thanks to your explanation I understand now that I can simply omit the parameter `max_seq_len`, and the pipeline behaves exactly the way I want, i.e. it breaks the long context into chunks.
**Example:**
```python
!pip install transformers==4.18.0
from transformers import pipeline
context = 100 * "This part of the text is totally useless. " + "The quick brown fox jumps over the lazy dog."
qa_pipeline = pipeline("question-answering")
qa_pipeline(question="what does the fox do?", context=context)
```
**Result:**
```
{'answer': 'jumps over the lazy dog',
'end': 4643,
'score': 0.639270007610321,
'start': 4620}
```
:-)<|||||>@ATroxler I opened an issue to clarify this documentation ! Thanks for raising the issue, and glad it works as intended ! |
transformers | 17,240 | closed | Distributed Support for OPT models in transformers | ### Feature request
Hi
Thanks a lot for adding OPT models to transformers. As of now the 55GB 30B parameter model needs to be loaded into a single GPU, otherwise it [throws a CUDA memory error](https://discuss.huggingface.co/t/running-inference-on-opt-30m-on-gpu/17895/2).
It would be great if we could have distributed support for these [models similar to gpt2](https://github.com/huggingface/transformers/pull/7772) so we can leverage multiple GPUs to run them.
### Motivation
Make it easier to run OPT models on available infra
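For illustration, the kind of usage this request is after might look roughly like this (a sketch assuming the Accelerate-backed `device_map="auto"` loading, which needs a recent `accelerate` install; the argument values are illustrative, not a committed interface):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-30b",
    device_map="auto",          # spread the weights over all visible GPUs (and CPU if needed)
    torch_dtype=torch.float16,  # halve the memory footprint of the 30B checkpoint
)
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```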
### Your contribution
NA,
can test changes on my system | 05-13-2022 15:40:13 | 05-13-2022 15:40:13 | @patrickvonplaten rasising a feature request for model parallelism for the newly added OPT models. Please triage/comment as you see fit<|||||>Same answer as in https://github.com/huggingface/transformers/pull/17245#issuecomment-1128064880 here. Think we'll soon have something that works out of the box cc @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,239 | closed | Fix Trainer for Datasets that don't have dict items | # What does this PR do?
This PR fixes a break in `Trainer` when the dataset items are not dictionaries. | 05-13-2022 15:35:28 | 05-13-2022 15:35:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>You're welcome |
transformers | 17,238 | closed | [WIP] Use word_ids to determine if a pre-entity is a subword in TokenClassificationPipeline | # What does this PR do?
Currently, `TokenClassificationPipeline` checks an attribute called `continuing_subword_prefix` in order to determine whether it can use the "correct" token aggregation strategy. Otherwise, it uses a backup heuristic which doesn't work well usually. This check works for BERT, but not for XLNet and RoBERTa:
```
>>> from transformers import AutoTokenizer
>>> bert_tk = AutoTokenizer.from_pretrained("bert-base-cased")
>>> print(getattr(tk._tokenizer.model, "continuing_subword_prefix", None))
##
>>> xlnet_tk = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> print(getattr(xlnet_tk._tokenizer.model, "continuing_subword_prefix", None))
None
```
However, there is a better way. The fast tokenizers for XLNet and RoBERTa are word-aware and provide a `word_ids` method. This PR updates the pipeline to pass around the `word_ids` list until it is used in `gather_pre_entities`.
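For context, a small sketch of what `word_ids` exposes for a fast tokenizer (the printed values are indicative only, not exact):
```python
from transformers import AutoTokenizer

xlnet_tk = AutoTokenizer.from_pretrained("xlnet-base-cased")
enc = xlnet_tk("huggingface transformers", add_special_tokens=False)
print(enc.tokens())    # the sub-word pieces
print(enc.word_ids())  # e.g. [0, 0, 0, 1, 1]: a repeated index marks a continuation of the same word
```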
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @Narsil I'd appreciate a draft review and then I'll update tests if this looks good.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-13-2022 15:18:52 | 05-13-2022 15:18:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17238). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,237 | closed | Align logits and labels in OPT | # What does this PR do?
For other decoder models, the labels are shifted and the last logit of each sequence is removed so they align when computing the loss. This isn't done for OPT. This PR adds this feature.
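Concretely, this is the usual GPT-2-style alignment, sketched below from the common pattern in the library (not copied verbatim from this PR):
```python
import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(2, 10, 50272)          # (batch, seq_len, vocab_size), stand-in for the model output
labels = torch.randint(0, 50272, (2, 10))   # (batch, seq_len)

# shift so that tokens < n are used to predict token n
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss = CrossEntropyLoss()(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```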
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 05-13-2022 15:00:46 | 05-13-2022 15:00:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM - this is because we took the model from BART which is an encoder decoder
wdyt @ArthurZucker @patrickvonplaten ?<|||||>Oh, good find @MichelBartels! We weren't careful enough when reviewing the PR here - we should have aligned this with GPT2 right away.
The problem is that, it's not really wrong to **not** shift the labels and people could have already written their training pipelines with OPT where the labels are shifted before being passed to the model. So this could be backwards breaking here. I however do think it's important to align OPT as much as possible with GPT2.
@LysandreJik @sgugger - do you think we could fix this in a patch release? <|||||>Yes it needs to be addressed ASAP to avoid breaking changes, a patch release today is fine by me.<|||||>Yes, agreed! Let's merge this PR as-is, check if there are any other issues and do a patch release in a couple of hours. |
transformers | 17,236 | closed | [Longformer] Issues with "is_index_masked" when using single encoder layer | ### System Info
```shell
- `transformers` version: 4.19.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Just run the code attached below
### Sample script:
```python
import torch
from transformers import LongformerModel
model = LongformerModel.from_pretrained(
"allenai/longformer-base-4096", torchscript=True
)
submodel = model.encoder.layer[0]
input_shape = (1, 512, 768)
activations = torch.rand(input_shape)
attention_mask = torch.ones((1, 512), dtype=torch.long)
results = submodel(activations, attention_mask=attention_mask)
```
### Traceback
```
Traceback (most recent call last):
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
run()
File "/Users/nvukobrat/.vscode/extensions/ms-python.python-2022.6.2/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 268, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/nvukobrat/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/nvukobrat/Desktop/Python/pytorch_longformer_huggingface_bug.py", line 15, in <module>
results = submodel(activations, attention_mask=attention_mask)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1209, in forward
self_attn_outputs = self.attention(
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1145, in forward
self_outputs = self.self(
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/nvukobrat/miniconda3/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 642, in forward
attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0)
TypeError: 'NoneType' object is not subscriptable
```
### Proposed solution
The issue here is that the `is_index_masked` isn't populated when a single encoder layer is extracted (problem occurs in the Longformer self-attention layer). The proposed solution could be to check and populate `is_index_masked` dynamically.
File: `transformers/models/longformer/modeling_longformer.py`
Class : `LongformerSelfAttention`
Function: `forward`
Code:
```python
# rest of the code...
if layer_head_mask is not None:
assert layer_head_mask.size() == (
self.num_heads,
), f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs
# softmax sometimes inserts NaN if all positions are masked, replace them with 0
# Proposed fix
if is_index_masked is None:
is_index_masked = attention_mask < 0
attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :, None, None], 0.0)
attn_probs = attn_probs.type_as(attn_scores)
# rest of the code...
```
### Expected behavior
```shell
I would expect to be able to get single encoder layer outputs once I run the provided script.
Let me know if this fix is valid. If yes, I can open a pull request if needed.
```
| 05-13-2022 14:57:19 | 05-13-2022 14:57:19 | Hey @ydshieh, do you have any news/comments on this issue?<|||||>Hi @NVukobrat
After looking more closely, here is my thought:
What you suggest could be achieved by adding
```
is_index_masked = attention_mask < 0
is_index_global_attn = attention_mask > 0
is_global_attn = is_index_global_attn.flatten().any().item()
```
to `LongformerSelfAttention`. However, the input `attention_mask` is not as simple as `1 or 0` anymore, and the way you prepare it (`attention_mask = torch.ones`) as input to `LongformerLayer` is incorrect. See the details below.
I have to discuss with the team members about the design, but so far my personal understanding is that we encourage the users to interact with the models at the `Model` level (for example `LongformerModel`) instead of the intermediate `layer`s - to avoid these kinds of incorrect inputs.
(Of course, the code is in open source, and the users could customize it if there is a real necessity - but they should be careful and responsible for the inputs)
Again, **let me have a discussion with the team members** first and come back to this thread.
### More details here:
- The (base model) `LongformerModel` receives `attention_mask` which is what we are familiar with:
- It is usually prepared by a tokenizer
- If not provided, we use `attention_mask = torch.ones(...)`
- However, `attention_mask` will be processed in `LongformerModel`
- dealing with global attention:
https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1692-L1694
- padding to window size:
https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1696-L1703
- change to additive attention mask (changing shape and using `-10000.0`):
https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1705-L1709
- In `LongformerEncoder`, the following are computed from the processed `attention_mask`
https://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/models/longformer/modeling_longformer.py#L1263-L1265
- The inputs to `LongformerLayer`, `LongformerAttention` and `LongformerSelfAttention` are the processed version shown above<|||||>Hey @NVukobrat,
Couldn't you just prepare the following args:
`is_index_masked=None`
`is_index_global_attn=None`
`is_global_attn=None`
before you pass your inputs to the LongformerSelfAttentionLayer.
Note that `LongformerSelfAttentionLayer` is a non-public class which is subject to breaking changes,
so we don't recommend directly importing it. If you do however, I think it also shouldn't be too difficult to create the necessary inputs before calling it no? <|||||>Hey @ydshieh @patrickvonplaten, thanks a lot for providing the details! Very informative and helpful for our use case!
Selecting and setting mentioned attributes (`attention_mask`, `layer_head_mask`, `is_index_masked`, `is_index_global_attn`, and `is_global_attn`) before passing activations to the `LongformerSelfAttentionLayer` works for us.
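For anyone landing here later, roughly what that preparation looks like (a sketch based on the mask processing described above; `LongformerLayer` is a non-public class, so argument names may change between versions):
```python
import torch
from transformers import LongformerModel

model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
layer = model.encoder.layer[0]

hidden_states = torch.rand(1, 512, 768)
raw_mask = torch.ones(1, 512)                 # 1 = local attention, 0 = padding, 2 = global attention
# roughly what LongformerModel does before the encoder: additive mask squeezed back to (batch, seq_len)
attention_mask = (raw_mask - 1.0) * 10000.0   # 0 for local tokens, -10000 for padding, +10000 for global

is_index_masked = attention_mask < 0
is_index_global_attn = attention_mask > 0
is_global_attn = is_index_global_attn.flatten().any().item()

outputs = layer(
    hidden_states,
    attention_mask=attention_mask,
    layer_head_mask=None,
    is_index_masked=is_index_masked,
    is_index_global_attn=is_index_global_attn,
    is_global_attn=is_global_attn,
)
```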
Thanks once again for your help! |
transformers | 17,235 | closed | fix --gpus option for docker | # What does this PR do?
Some multi-GPU tests in the scheduled CI workflow file use `--gpus 0`. I think this is an error, and we might be testing with only 1 GPU for those multi-GPU tests.
This PR fixes it.
**Remark**: It's quite strange that, from the setup job log, we can see 2 GPUs in `nvidia-smi` while we have `options: --gpus 0`. I am not 100% sure if this PR has real value, but at least it avoids some confusion, maybe. | 05-13-2022 14:42:28 | 05-13-2022 14:42:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,234 | open | add FAN model (vision) | ### Model description
Fully attentional networks (FAN) is a family of general-purpose Vision Transformer backbones that are highly robust to unseen natural corruptions in various visual recognition tasks.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
* https://github.com/NVlabs/FAN
* https://arxiv.org/abs/2204.12451 | 05-13-2022 13:48:49 | 05-13-2022 13:48:49 | Hi there, I would like to implement this model @NielsRogge <|||||>Hi @NielsRogge.
To my knowledge there would still be two pending tasks
- [X] Update README.md to include FAN model
- [ ] Migrate files and weights to the NVIDIA organization space
Please let me know what additional tasks you think might be pending |
transformers | 17,233 | closed | bug in modeling_tf_wav2vec2 | ### System Info
```shell
- `transformers` version: 4.19.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU? Yes): 1.11.0+cu113 (True)
- Tensorflow version (GPU? Yes): 2.8.0 (True)
- Flax version (GPU:Yes): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: I am running on colab therefore I think it's parallel
```
### Who can help?
@patrickvonplaten
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import os
from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC
import tensorflow as tf
import numpy as np
import torch
import json
from datasets import load_dataset
import soundfile as sf
import torch
Wav2vec2Model = "facebook/wav2vec2-base-960h"
Wav2vec2_EXPORT_PATH = f"/content/export_wav2vec2-base-960h"
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="tf",
padding="longest",return_attention_mask=True).input_values # Batch size 1
class MyWav2vec2(TFWav2Vec2ForCTC):
@tf.function(
input_signature=[
{
"input_ids": tf.TensorSpec((None, None), tf.float32, name="serving1_input_ids"),
}
]
)
def serving1(self, inputs):
outputs = self.call(input_values=inputs["input_ids"])
return self.serving_output(outputs)
mywav2vec2 = MyWav2vec2.from_pretrained(Wav2vec2Model)
tf.saved_model.save(mywav2vec2, Wav2vec2_EXPORT_PATH, signatures={
"serving1": mywav2vec2.serving1,
})
```
### Error
```
TypeError Traceback (most recent call last)
<ipython-input-13-06d8d6c67672> in <module>()
1 jslwav2vec2 = JslWav2vec2.from_pretrained(Wav2vec2Model)
2 tf.saved_model.save(jslwav2vec2, Wav2vec2_EXPORT_PATH, signatures={
----> 3 "serving1": jslwav2vec2.serving1,
4 # "serving2": mygpt2.serving2
5 })
43 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
1332 # pylint: enable=line-too-long
1333 metrics.IncrementWriteApi(_SAVE_V2_LABEL)
-> 1334 save_and_return_nodes(obj, export_dir, signatures, options)
1335 metrics.IncrementWrite(write_version="2")
1336
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save_and_return_nodes(obj, export_dir, signatures, options, experimental_skip_checkpoint)
1367
1368 _, exported_graph, object_saver, asset_info, saved_nodes, node_paths = (
-> 1369 _build_meta_graph(obj, signatures, options, meta_graph_def))
1370 saved_model.saved_model_schema_version = (
1371 constants.SAVED_MODEL_SCHEMA_VERSION)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
1534
1535 with save_context.save_context(options):
-> 1536 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
1480 signatures, wrapped_functions = (
1481 signature_serialization.canonicalize_signatures(signatures))
-> 1482 signature_serialization.validate_saveable_view(checkpoint_graph_view)
1483 signature_map = signature_serialization.create_signature_map(signatures)
1484 checkpoint_graph_view.set_signature(signature_map)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/signature_serialization.py in validate_saveable_view(saveable_view)
299 def validate_saveable_view(saveable_view):
300 """Performs signature-related sanity checks on `saveable_view`."""
--> 301 for name, dep in saveable_view.list_children(saveable_view.root):
302 if name == SIGNATURE_ATTRIBUTE_NAME:
303 if not isinstance(dep, _SignatureMap):
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in list_children(self, obj)
134 obj,
135 save_type=base.SaveType.SAVEDMODEL,
--> 136 cache=self._serialization_cache))
137 for name, child in self._children_cache[obj].items():
138 yield base.TrackableReference(name, child)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/graph_view.py in list_children(self, obj, save_type, **kwargs)
254 obj._maybe_initialize_trackable()
255 children = [base.TrackableReference(name, ref) for name, ref
--> 256 in obj._trackable_children(save_type, **kwargs).items()]
257 # pylint: enable=protected-access
258
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _trackable_children(self, save_type, **kwargs)
1477 elif save_type == SaveType.SAVEDMODEL:
1478 cache = kwargs["cache"]
-> 1479 return self._get_legacy_saved_model_children(cache)
1480 else:
1481 raise ValueError("Unexpected format passed to `_trackable_children`. "
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _get_legacy_saved_model_children(self, serialization_cache)
1488
1489 # Retrieve functions attached to the object.
-> 1490 functions = self._list_functions_for_serialization(serialization_cache)
1491
1492 # Trace concrete functions to force side-effects:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in _list_functions_for_serialization(self, serialization_cache)
3080 self.train_tf_function = None
3081 functions = super(
-> 3082 Model, self)._list_functions_for_serialization(serialization_cache)
3083 self.train_function = train_function
3084 self.test_function = test_function
/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
3167 def _list_functions_for_serialization(self, serialization_cache):
3168 return (self._trackable_saved_model_saver
-> 3169 .list_functions_for_serialization(serialization_cache))
3170
3171 @property
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
91 return {}
92
---> 93 fns = self.functions_to_serialize(serialization_cache)
94
95 # The parent AutoTrackable class saves all user-defined tf.functions, and
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
71 def functions_to_serialize(self, serialization_cache):
72 return (self._get_serialized_attributes(
---> 73 serialization_cache).functions_to_serialize)
74
75 def _get_serialized_attributes(self, serialization_cache):
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
87
88 object_dict, function_dict = self._get_serialized_attributes_internal(
---> 89 serialization_cache)
90
91 serialized_attr.set_and_validate_objects(object_dict)
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
55 objects, functions = (
56 super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
---> 57 serialization_cache))
58 functions['_default_save_signature'] = default_signature
59 return objects, functions
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
96 """Returns dictionary of serialized attributes."""
97 objects = save_impl.wrap_layer_objects(self.obj, serialization_cache)
---> 98 functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
99 # Attribute validator requires that the default save signature is added to
100 # function dict, even if the value is None.
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrap_layer_functions(layer, serialization_cache)
195 for fn in fns.values():
196 if fn is not None and not isinstance(fn, LayerCall):
--> 197 fn.get_concrete_function()
198
199 # Restore overwritten functions and losses
/usr/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
117 if type is None:
118 try:
--> 119 next(self.gen)
120 except StopIteration:
121 return False
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in tracing_scope()
357 if training is not None:
358 with backend.deprecated_internal_learning_phase_scope(training):
--> 359 fn.get_concrete_function(*args, **kwargs)
360 else:
361 fn.get_concrete_function(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1262 def get_concrete_function(self, *args, **kwargs):
1263 # Implements GenericFunction.get_concrete_function.
-> 1264 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1265 concrete._garbage_collector.release() # pylint: disable=protected-access
1266 return concrete
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1254 # run the first trace but we should fail if variables are created.
1255 concrete = self._stateful_fn._get_concrete_function_garbage_collected( # pylint: disable=protected-access
-> 1256 *args, **kwargs)
1257 if self._created_variables:
1258 raise ValueError("Creating variables on a non-first call to a function"
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
3034 args, kwargs = None, None
3035 with self._lock:
-> 3036 graph_function, _ = self._maybe_define_function(args, kwargs)
3037 seen_names = set()
3038 captured = object_identity.ObjectIdentitySet(
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3290
3291 self._function_cache.add_call_context(cache_key.call_context)
-> 3292 graph_function = self._create_graph_function(args, kwargs)
3293 self._function_cache.add(cache_key, cache_key_deletion_observer,
3294 graph_function)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3138 arg_names=arg_names,
3139 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3140 capture_by_value=self._capture_by_value),
3141 self._function_attributes,
3142 function_spec=self.function_spec,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1159 _, original_func = tf_decorator.unwrap(python_func)
1160
-> 1161 func_outputs = python_func(*func_args, **func_kwargs)
1162
1163 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
675 # the function a weak reference to itself to avoid a reference cycle.
676 with OptionalXlaContext(compile_with_xla):
--> 677 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
678 return out
679
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
168 return control_flow_util.smart_cond(
169 training, lambda: replace_training_and_call(True),
--> 170 lambda: replace_training_and_call(False))
171
172 # Create arg spec for decorated function. If 'training' is not defined in the
/usr/local/lib/python3.7/dist-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
105 return tf.__internal__.smart_cond.smart_cond(
--> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
108
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
51 if pred_value is not None:
52 if pred_value:
---> 53 return true_fn()
54 else:
55 return false_fn()
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in <lambda>()
167
168 return control_flow_util.smart_cond(
--> 169 training, lambda: replace_training_and_call(True),
170 lambda: replace_training_and_call(False))
171
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
164 def replace_training_and_call(training):
165 set_training_arg(training, training_arg_index, args, kwargs)
--> 166 return wrapped_call(*args, **kwargs)
167
168 return control_flow_util.smart_cond(
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in call(inputs, *args, **kwargs)
650 return layer.keras_api.__call__ # pylint: disable=protected-access
651 def call(inputs, *args, **kwargs):
--> 652 return call_and_return_conditional_losses(inputs, *args, **kwargs)[0]
653 return _create_call_fn_decorator(layer, call)
654
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
608 def __call__(self, *args, **kwargs):
609 self._maybe_trace(args, kwargs)
--> 610 return self.wrapped_call(*args, **kwargs)
611
612 def get_concrete_function(self, *args, **kwargs):
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
570 with autocast_variable.enable_auto_cast_variables(
571 layer._compute_dtype_object): # pylint: disable=protected-access
--> 572 ret = method(*args, **kwargs)
573 _restore_layer_losses(original_losses)
574 return ret
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
168 return control_flow_util.smart_cond(
169 training, lambda: replace_training_and_call(True),
--> 170 lambda: replace_training_and_call(False))
171
172 # Create arg spec for decorated function. If 'training' is not defined in the
/usr/local/lib/python3.7/dist-packages/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
104 pred, true_fn=true_fn, false_fn=false_fn, name=name)
105 return tf.__internal__.smart_cond.smart_cond(
--> 106 pred, true_fn=true_fn, false_fn=false_fn, name=name)
107
108
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in <lambda>()
167
168 return control_flow_util.smart_cond(
--> 169 training, lambda: replace_training_and_call(True),
170 lambda: replace_training_and_call(False))
171
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/utils.py in replace_training_and_call(training)
164 def replace_training_and_call(training):
165 set_training_arg(training, training_arg_index, args, kwargs)
--> 166 return wrapped_call(*args, **kwargs)
167
168 return control_flow_util.smart_cond(
/usr/local/lib/python3.7/dist-packages/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(*args, **kwargs)
632 def call_and_return_conditional_losses(*args, **kwargs):
633 """Returns layer (call_output, conditional losses) tuple."""
--> 634 call_output = layer_call(*args, **kwargs)
635 if version_utils.is_v1_layer_or_model(layer):
636 conditional_losses = layer.get_losses_for(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1278 mask_time_indices = kwargs.get("mask_time_indices", None)
1279 if inputs["training"]:
-> 1280 hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
1281
1282 encoder_outputs = self.encoder(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in _mask_hidden_states(self, hidden_states, mask_time_indices)
1212 mask_prob=self.config.mask_time_prob,
1213 mask_length=self.config.mask_time_length,
-> 1214 min_masks=2,
1215 )
1216 hidden_states = tf.where(
/usr/local/lib/python3.7/dist-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in _compute_mask_indices(shape, mask_prob, mask_length, min_masks)
264 print(tf.random.uniform((1,)))
265 print((mask_prob * sequence_length / mask_length + tf.random.uniform((1,)) )[0] )
--> 266 num_masked_spans = int(mask_prob * sequence_length / mask_length + tf.random.uniform((1,)))
267 num_masked_spans = max(num_masked_spans, min_masks)
268
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Tensor'
```
### Expected behavior
```shell
I want to be able to export it to use it in tensorflow-serving
```
| 05-13-2022 13:44:14 | 05-13-2022 13:44:14 | Hi @ahmedlone127 👋 The error appears because that line does not run without Eager Execution (see below), which is the case for your script. This is a problem on our side, and we will be fixing it 👍

*(screenshot of the offending line in `_compute_mask_indices` omitted)*
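For context, a graph-compatible variant of that computation could look roughly like this (an illustrative sketch only; the actual fix is the PR mentioned at the end of this thread):
```python
import tensorflow as tf

# stand-ins for the values available inside `_compute_mask_indices`
mask_prob, sequence_length, mask_length, min_masks = 0.05, tf.constant(499), 10, 2

# avoid Python int() on a symbolic tensor so the line also works in graph mode
num_masked_spans = tf.cast(
    mask_prob * tf.cast(sequence_length, tf.float32) / mask_length + tf.random.uniform(()), tf.int32
)
num_masked_spans = tf.maximum(num_masked_spans, min_masks)
```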
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Following merging of #18153 the reproduction snippet runs on main without error. |
transformers | 17,232 | closed | Fix Flava FlavaForPreTrainingIntegrationTest test | # What does this PR do?
Fix Flava CI failure
```
> self.assertAlmostEqual(outputs.loss_info.mmm_text.item(), 1.75533199)
E AssertionError: 1.7553329467773438 != 1.75533199 within 7 places (9.56777343796844e-07 difference)
```
Just change the argument `places` to `4`.
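`assertAlmostEqual` rounds the difference to `places` decimal places and checks that it is zero, so the observed ~1e-6 gap passes at 4 places but not at the default 7. A quick sketch:
```python
import unittest

tc = unittest.TestCase()
# fails with the default places=7 (difference ~9.6e-7), passes once the difference is rounded to 4 places
tc.assertAlmostEqual(1.7553329467773438, 1.75533199, places=4)
```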
[Job run log](https://github.com/huggingface/transformers/runs/6416748323?check_suite_focus=true) | 05-13-2022 12:25:59 | 05-13-2022 12:25:59 | cc @apsdehal for information :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,231 | closed | fix retribert's `test_torch_encode_plus_sent_to_model` | # What does this PR do?
This PR proposes a fix for the slow test `test_torch_encode_plus_sent_to_model` for RetriBert that failed in the CI daily that ran yesterday.
The error is due to the fact that RetriBert is not a classical model, in the sense that the input can be fed to one or two encoders of the model. As a result, `get_input_embeddings` is an unimplemented method and the forward method expects more arguments than what is output by the tokenizer.
I have therefore overridden the common test with a specific test for RetriBert that reflects these two specificities, which I will highlight in a comment below.
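For reference, the overridden test roughly follows this pattern (a sketch, not the test code itself, assuming `RetriBertModel`'s `embed_questions` API rather than a plain `forward` call):
```python
import torch
from transformers import RetriBertModel, RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

encoded = tokenizer("a sequence to embed", return_tensors="pt")
with torch.no_grad():
    # the tokenizer output cannot simply be unpacked into `model(**encoded)` here
    embeddings = model.embed_questions(encoded["input_ids"], attention_mask=encoded["attention_mask"])
```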
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed
## Local test
I've tested it by running:
```bash
RUN_SLOW=yes pytest tests/models/retribert/ -k test_torch_encode_plus_sent_to_model
```
output:
```
=========================================================================================== test session starts ============================================================================================
platform linux -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
rootdir: /home/lucile_huggingface_co/repos/transformers, configfile: setup.cfg
plugins: dash-2.3.1, hypothesis-6.41.0, timeout-2.1.0, forked-1.4.0, xdist-2.5.0
collected 98 items / 97 deselected / 1 selected
tests/models/retribert/test_tokenization_retribert.py . [100%]
============================================================================== 1 passed, 97 deselected, 10 warnings in 5.38s ===============================================================================
```
| 05-13-2022 12:13:52 | 05-13-2022 12:13:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh , this is a good question (which I also asked myself, I should have shared my thoughts on it).
Actually `self.bert_doc` is set to None in the model that is loaded. So, we can't test anything on `self.bert_doc` and `embed_answers` will be identical to `embed_questions` (and I remembered that we're testing the tokenizer not the model). But I follow what you think is best, i.e. if you think that it would be better to test the other 2 cases as a precaution if the checkpoint ever changes :hugs: <|||||>> `self.bert_doc` is set to None
In this case, I think we can just keep what you have done so far in this PR, no need to change 😄 . Thanks for the info.<|||||>@SaulLu Guess we can merge this PR now :-) ? |
transformers | 17,230 | closed | Add RWKV2 (fast) | ### Model description
I would like to implement a new model architecture.
## Short description
RWKV v2 is an "RNN with transformer-level performance, without using attention. Similar to Apple's Attention Free Transformer. All trained models open-source. Inference is very fast (even on CPUs) and might work on cell phones. There's also a GPT-type implementation." -- ([Hochreiter's description](https://twitter.com/HochreiterSepp/status/1524270961314484227))
RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable). For example, in usual RNN you can adjust the time-decay of a channel from say 0.8 to 0.5 (these are called "gates"), while in RWKV v2 you simply move the information from a W-0.8-channel to a W-0.5-channel to achieve the same effect. RWKV can leverage GPUs, but doesn't need to.
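As a toy illustration of the data-independent per-channel time-decay described above (not the actual RWKV code):
```python
import torch

d = 8                                # number of channels
w = torch.sigmoid(torch.randn(d))    # per-channel decay in (0, 1): trainable, but independent of the input
state = torch.zeros(d)
for x_t in torch.randn(16, d):       # a toy sequence of 16 steps
    state = w * state + x_t          # each channel forgets at its own fixed rate, so time can be parallelized
```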
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
## Implementation and weights
There's an implementation at [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) which also gives a detailed description of the model internals and some performance benchmarks. Model weights currently are being trained for a few datasets, including the Pile (see e.g. [BlinkDL/RWKV-v2-RNN-Pile](https://github.com/BlinkDL/RWKV-v2-RNN-Pile/)) and [Danish Gigaword](https://gigaword.dk) by me. Both will be openly available - some checkpoints for the Pile already are, even though it's an ongoing process.
## Status
The model seems quite exciting and I'm able to replicate preliminary results. I'm already talking with @BlinkDL about the implementation. I'm happy to implement/port the model architecture (for both RNN and GPT variants), tokenizer, and tests myself (and have already started) and would appreciate help and advice. | 05-13-2022 10:48:16 | 05-13-2022 10:48:16 | -- on second thoughts: it's not immediately clear to me how many people will use this particular model, or how it will perform. What I'd really like to do is implement and develop it on Hub, and see if it's useful/popular there. I spent an amount of time with the docs, and the route to adding new model architectures seems to preferentially support adding _directly_ to `transformers`. Tooling for new model architectures that worked on Hub (e.g. cookiecutter, class organisation, and tests) would be super neat. Is that something there's any interest in?<|||||>> -- on second thoughts: it's not immediately clear to me how many people will use this particular model, or how it will perform.
To answer your question: If it performs better than the other CausalLM models out there, it will most likely get used. Make a PR, build an initial version that can be run on HF, and see if any of the HF devs are willing to chime in. I am interested in this work, particularly because it solves a problem I haven't seen before: Be able to run CasualLM models on CPU. And my work stretches beyond the KoboldAI team, I know there are more out there that seem to benefit from the usage of CPU models because of the high prices that GPU models currently have.<|||||>Work is going OK. We're porting the GPT-like part to Transformers first, for training and induction, and will work out the fast RNN induction-only part after the GPT part passes tests. <|||||>Where is your work at? I have worked on this model and would like to contribute. I'm also experienced now at troubleshooting the parts of this model (mostly inference accuracy though), and have spent time understanding the cuda kernels. I have some experience with adjusting new codebases to unexpected featureset combinations.<|||||>I'm also curious how this one is coming along. (I just saw the original paper today. Not sure how I missed it...)<|||||>@leondz are you guys still working on this? I am looking to get into this if this can work on edge devices<|||||>Some time ago I looked a little into continuing this, but other things came up.
After that experience, I would recommend that future implementers start a new fork, rather than working off the existing one, because very little has been done, so it can take extra effort to learn the existing situation without much return.
For the record:
leondz's branch is at https://github.com/leondz/transformers/tree/rwkv-v2 .
I added smidges to it at https://github.com/xloem/transformers/tree/rwkv-v2 and https://github.com/xloem/transformers/tree/rwkv-v2-disable_non_clm_for_now .
Since that work, RWKV is on version 4 now (although the changes between versions are not generally complex): https://github.com/BlinkDL/RWKV-LM<|||||>I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?<|||||>You could ask the same about any model or technology near the top of a leaderboard. Things happen because people do the work or make the business decisions behind them happening. There are scads and scads of things better than the original transformer paper, but they're not normative yet.<|||||>> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?
This is better but GPT is good enough for most applications.
I will just keep training larger models. RWKV 14B release soon. <|||||>> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?
It's not presented well and clearly, I am working on a fork or huggingface integration that answers questions, this is pretty much a breakthrough model imo, I am just making sure the runtimes are true. It still in R and D phase adoption phase comes soon after<|||||>I spent about a month working on this but the code wasn't stable and wasn't version controlled in the normal way, which made refactoring really tricky. Then time ran out. I think if the engineering side of things is fixed, and there's a stable release, it's a great model - definitely more data-efficient than competitors, which is really the core factor now.<|||||>> I can't understand why this hasn't seen wider adoption. It makes me a bit skeptical. If it's better in all ways compared to the original transformer paper why wouldn't we see adoption from Meta, OpenAI, DeepMind etc?
For our own project we have kind of basic support for it workarounded in with the original base, but the reason we don't finetune it or don't support it properly is because Huggingface support is missing and we are tightly integrated with huggingface. I assume other providers / projects have the same issue. For adoption I'd love to see RWKV land in huggingface so we can begin to offer it to our users the proper way, without them relying on manual steps, and without missing features for this model.<|||||>Yeah but why doesn't OpenAI literally just spend one month on this with 10
guys and use this? I think this has some drawback but no one can tell me what it is... It feels reasonable that all new papers from Google and OpenAI should use this.
<|||||>> Yeah but why doesn't OpenAI literally just spend one month on this with 10 guys and use this? It think this has some drawback but no one can tell me what it is... It's feel reasonable that all new papers from Google, OpenAI should use this.
There are a number of papers with similar "exponential moving average" design now.
For example, S4D is using slightly fancier kernels: https://github.com/HazyResearch/state-spaces (while I find simple kernels are enough).
RWKV is weaker at LAMBADA (comparing with GPT) when the model is small (< 3B), but I find adding one single tiny QKV attention is enough to solve it (helps a small model to copy words in prompt).
Moreover, it's reasonable to expect a competitive linear-time attention model, because when human novelists write very long stories the speed is consistent (except GRRM lol).<|||||>>
I don't think this project is well known; there's a huge ecosystem based on just what works right now, i.e. T5 and GPT-x. For example, Perceiver IO and Perceiver AR by DeepMind seem to do something similar to get linear attention. To get this project to that level of popularity we have to build various production-level proofs; most people already understand the challenges of the T5 and GPT-x series. Second, the models themselves aren't as important from a product perspective; it's the data that is important. People are making the bet that it's smarter to deploy a product with shitty AI and wait for the improvement before investing in the R&D. They build the product and make it easy to replace the AI portion of it in 10 minutes. These factors make it difficult for projects and independent researchers to get the spotlight they need.<|||||>I understand. But this is the only architecture that has infinite context length.
<|||||>"...this is the only architecture that has infinite context length."
Wait, really?... How did I miss that? I thought it was just a faster, more efficient approach.<|||||>"So it's combining the best of RNN and transformer - great performance,
fast inference, saves VRAM, fast training, "infinite" ctx_len, and free
sentence embedding."
> https://www.reddit.com/r/MachineLearning/comments/umq908/_/
<|||||>The context length is presently limited by the accuracy of the floating point representation, due to the heavily simplified and unified architecture. RWKV is a strong combination of speed and long-context.<|||||>Right, okay. Well, that's pretty compelling, for sure...<|||||>> The context length is presently limited by the accuracy of the floating point representation, due to the heavily simplified and unified architecture. RWKV is a strong combination of speed and long-context.
I think its also limited by the memory as well<|||||>There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.<|||||>> There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.
So you are telling me that the `context` is effectively encoded into the state? I am referring to the context length the model consumes. I guess what you are trying to say is that because we have a state, the model can look into that state for any context size, and as a result it has an infinite context length? I looked into the code and it says
```
T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]
```
so it appears to have a limit based off memory @BlinkDL can you clearify ? <|||||>I should let Blink clarify, but regarding T_MAX: https://github.com/BlinkDL/RWKV-LM/blob/a268cd2e40351ee31c30c5f8a5d1266d35b41829/RWKV-v4neo/src/model.py#L34
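A minimal sketch of the idea being debated here, in case it helps readers follow the thread (this is not RWKV's actual API; `step_fn` is a hypothetical forward function that maps a token chunk plus a carried state to logits plus a new state):

```python
import torch

def consume_long_context(step_fn, token_ids, init_state, chunk_len=1024):
    # step_fn is assumed to map (token_chunk, state) -> (logits, new_state).
    # Memory use is bounded by chunk_len, while `state` summarizes everything seen so far.
    state = init_state
    logits = None
    for start in range(0, token_ids.shape[-1], chunk_len):
        chunk = token_ids[..., start:start + chunk_len]
        logits, state = step_fn(chunk, state)
    return logits, state  # ready to keep generating one token at a time from `state`
```

If T_MAX only bounds the parallel CUDA kernel, a chunked loop like this sidesteps it at inference time, since only a fixed-size state is kept between chunks.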
<|||||>Since the model support for this stalled, perhaps someone on HF's side such as @younesbelkada can help get this model supported?<|||||>> > There is no memory limit associated with context length that I am aware of with these models. State can be retained in a recurrent manner, providing for using only however much memory is available for accelerated parallel operation.
>
> So you are telling me, that the `context` is effectively encoded into the state. I am reffering to the context length of the model consumes. I guess what you are trying to say is that because we have a state, the model can look into that state for any context size? as a result it has an infinite context length? I looked into the code and it says
>
> ```
> T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]
> ```
>
> so it appears to have a limit based off memory @BlinkDL can you clearify ?
I am not using the correct method to train it because I am lazy. But you can always finetune the model to support longer ctxlen. For example, fine-tuned to 4096 here:
https://huggingface.co/BlinkDL/rwkv-4-pile-3b
With the correct training method, I estimate the effective ctx_len can at least be 100K.<|||||>So it doesn't have "infinite" ctx_len.
>
<|||||>I suspect technically if you used a rational number representation rather than floating point it would have infinite context length.
Aside: I’m not an ML researcher, but I don’t know why downscaling like this doesn’t get more attention. It seems context length could be fully infinite by re-encoding past information for what is helpful for future states, and a network wired to discover its own architecture would quickly find this.<|||||>> So it doesn't have "infinite" ctx_len. Den lör 3 dec. 2022 06:26PENG Bo ***@***.***> skrev:
RNN has infinite ctx_len if you use correct training & inference method.
I am just being lazy because when the model is small it can't even generate perfect result for 1024 ctxlen.
So I will improve it only after the 50B params model.<|||||>> I suspect technically if you used a rational number representation rather than floating point it would have infinite context length.
Correct. And you can use FP64 to make it practically infinite.<|||||>So then I ask again. Why hasn't this architecture shown wider adoption?
>
<|||||>> So then I ask again. Why hasn't this architecture shown wider adoption?
Why not try the model first lol. I believe 99.9+% researchers haven't even tried it.
Some results, and user feedback:

<|||||>> So then I ask again. Why hasn't this architecture shown wider adoption?
go and try and use the model and you might see why or not depending on who you are<|||||>> So then I ask again. Why hasn't this architecture shown wider adoption?
Really simple: it's not on HF. I am waiting for this to be implemented in HF, so I can use my trainer on it.
<|||||>Hey guys I am working on this https://github.com/ArEnSc/Production-RWKV, it uses a hugging face like interface and setup RWKV quickly, right now only supports greedy decoding, it's very early days I have not published the package just yet, I am gonna support a few more samplers later, and it only loads the 1.5 B model right now because I need to write some configs for json. The project aims to stay close to the research while providing an avenue for production. We will get some optimizations in and as well a tables showing the results.<|||||>Hi everyone!
Happy to see that the community is very excited about this addition, and thanks @ArEnSc for the great repo which will make the integration process definitely smoother.
Would someone mind opening a Pull Request to add this model (even if it's still on draft), and we'll be more than happy to help you with @ArthurZucker on the conversion process - seems that all the building blocks (architecture, tokenizer + model weights) are here so the conversion should be quite easy.<|||||>> Hi everyone! Happy too see that the community is very excited about this addition, and thanks @ArEnSc for the great repo which will make the integration process definitely smoother. Would someone mind opening a Pull Request to add this model (even if it's still on draft), and we'll be more than happy to help you with @ArthurZucker on the conversion process - seems that all the building blocks (architecture, tokenizer + model weights) are here so the conversion should be quite easy.
Great :) For optimal inference, we need to support both the RNN mode and the GPT mode of RWKV.
The idea is, we can use the GPT mode to process a very long prompt and generate the correct hidden state for the RNN mode, such that the RNN mode can efficiently continue from it.<|||||>I would personally propose a hybrid mode that can do GPT-style extended contexts in an RNN way. This provides for training on very long contextual data if float64 is used, by processing the parts in sequence.<|||||>> I would personally propose a hybrid mode that can do GPT-style extended contexts in an RNN way. This provides for training on very long contextual data if float64 is used, by processing the parts in sequence.
Yeah I will begin with "x% probability to extend the last sample" when training :)
So:
```
1-x prob. of chunkLen
(1-x)^2 prob. of chunkLen*2
(1-x)^3 prob. of chunkLen*3
...
```<|||||>It seems that @leondz has already a working branch and @ArEnSc has the code refactored to make it easy to use.
@ArEnSc can you maybe open a PR and tag @leondz and add him as a co-author (together maybe with all co-authors involved in the integration) so that we can move the discussion there? This plan seems to be the most efficient path towards faster integration except if I am missing something here (I did not followed the integration issues faced by @leondz ). Let me know if you need any help here<|||||>It looked to me like the leondz work did not get past stubbing.
I ran into a similar issue trying to integrate; I'm simply not familiar with all the repository's in-depth norms for naming and testing.<|||||>Feel free to still open a draft PR and I can review the early stage to give you pointers!
The `transformers-cli add-model-like` is very good if a model similar exist but I am not sure that is our case 😅 So just push anything and we'll help on the missing files, naming etc! 🤗 <|||||>@younesbelkada @ArthurZucker ok ill make a new PR sometime today.
I am going to pivot my repo to be a light weight version with little dependencies featuring optimizations from the community sorta like a FastT5 implementation variant, considering we have hugging face dev support for this. The only issues I had prior doing a transformers integration was running tests.
<|||||>Super cool! We'll be more than happy to help you regarding tests 💪 <|||||>Hey,
This integration went fine, until two snags were hit:
1. the code for reading input couldn't be reproduced
2. the code for training couldn't be reproduced
I would love to see these stable & independent in their own branch. There was no hope of getting RWKV2 to pass the HF model implementation requirements (esp. the model weights precisely matching!) without these being established, but maybe things are better now.
Re: uptake - this model kicks ass, imo the restrictions have only been the difficulty of re-using/reproducing the codebase while it was under development, and that the paper hadn't been written. The math all checks out (I even wrote some tutorial slides for teaching the model) and the implementations have been elegant, it's just engineering issues in the way. Once a reproducible training codebase & paper are out, it's 🚀 time!
-- also would be super cool to have integrated the fast RNN inference if that's still working, but again the implementation and interface was fluid last time I tried to integrate this, and you can't integrate a moving implementation.<|||||>Wow very cool @leondz !
Would be also very keen to have a look at the tutorial you made, we can also ultimately have them on the HF blog to announce the release of this architecture (ofc once we figure out everything about the integration & happy to help you on the post too), how does that sound? <|||||>It's absolutely BlinkDL's project, so up to them and they get the headline credit, but that sounds lovely - I'm down :)<|||||>> It's absolutely BlinkDL's project, so up to them and they get the headline credit, but that sounds lovely - I'm down :)
Can you share your slides? :)
Consider this a community project, and we can build an ecosystem on top of RWKV, like what happens to Stable Diffusion.
I will focus on improving the algorithm & model - now training RWKV-4a with one single tiny extra attention (just a few extra lines comparing with RWKV-4) to further improve some difficult zeroshot tasks (such as LAMBADA) for smaller models.<|||||>> Hey,
>
> This integration went fine, until two snags wre hit:
>
> 1. the code for reading input couldn't be reproduced
> 2. the code for training couldn't be reproduced
>
> I would love to see these stable & independent in their own branch. There was no hope of getting RWKV2 to pass the HF model implementation requirements (esp. the model weights precisely matching!) without these being established, but maybe things are better now.
>
> Re: uptake - this model kicks ass, imo the restrictions have only been the difficulty of re-using/reproducing the codebase while it was under development, and that the paper hadn't been written. The math all checks out (I even wrote some tutorial slides for teaching the model) and the implementations have been elegant, it's just engineering issues in the way. Once a reproducible training codebase & paper are out, it's 🚀 time!
>
> -- also would be super cool to have integrated the fast RNN inference if that's still working, but again the implementation and interface was fluid last time I tried to integrate this, and you can't integrate a moving implementation.
Can I also get the slides perhaps a google docs link for them would be the quickest there are a few parts of this architecture that are still fuzzy to me<|||||>
>
> 1. the code for reading input couldn't be reproduced
> 2. the code for training couldn't be reproduced
I wasn’t aware. It’s too bad we didn’t take these things farther; I was having the opposite issue. @ArEnSc , please let us know if there are any snags preventing opening a PR so somebody else can step in too.<|||||>> > 1. the code for reading input couldn't be reproduced
> > 2. the code for training couldn't be reproduced
>
> I wasn’t aware. It’s too bad we didn’t take these things farther; I was having the opposite issue. @ArEnSc , please let us know if there are any snags preventing opening a PR so somebody else can step in too.
It's important to say that this was due to the pace and mode of development, *not* the model's quality!<|||||>Might not be fully helpful, but I have a repository with a bunch of different variations on inference
https://github.com/harrisonvanderbyl/rwkv_chatbot/blob/main/src/model_run_onnx.py for example is a file where I have made the code compatible with onnx, tensorflow, and Iree inference converters (with only some minor tweaking)<|||||>@ArthurZucker
Hey I am getting issues setting up the dev environment.
I am on python 3.8.10, updated to the latest pip3. I create a venv using 3.8.10 and then run this command
I am on OSX Monterey, M1 Pro.
Which version of python should I be developing on ?
```
pip3 install -e ".[dev]"
ERROR: Could not find a version that satisfies the requirement tensorflow-text; extra == "dev" (from transformers[dev]) (from versions: none)
ERROR: No matching distribution found for tensorflow-text; extra == "dev
```<|||||>Hi @ArEnSc
Indeed it's a bit tricky to install dev environment on a MAC M1.
Could you please replace your `setup.py` by this one: https://gist.github.com/younesbelkada/ce24f0b517db46502792c4b638d4f5b9 and run your command again
After that, you need to run `pip3 install numpy --upgrade` and everything should work fine<|||||>@younesbelkada
```
(.env) michaelchung@michaels-mbp transformers % pip install -e ".[dev]"
Obtaining file:///Users/michaelchung/Code/transformers
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Collecting packaging>=20.0
Using cached packaging-22.0-py3-none-any.whl (42 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
Using cached tokenizers-0.13.2.tar.gz (359 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting requests
Using cached requests-2.28.1-py3-none-any.whl (62 kB)
Collecting numpy>=1.17
Using cached numpy-1.23.5-cp38-cp38-macosx_11_0_arm64.whl (13.3 MB)
Collecting tqdm>=4.27
Using cached tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
Collecting regex!=2019.12.17
Using cached regex-2022.10.31-cp38-cp38-macosx_11_0_arm64.whl (287 kB)
Collecting filelock
Using cached filelock-3.8.2-py3-none-any.whl (10 kB)
Collecting huggingface-hub<1.0,>=0.10.0
Using cached huggingface_hub-0.11.1-py3-none-any.whl (182 kB)
Collecting pyyaml>=5.1
Using cached PyYAML-6.0-cp38-cp38-macosx_12_0_arm64.whl
Collecting pytest-xdist
Using cached pytest_xdist-3.1.0-py3-none-any.whl (36 kB)
Collecting rjieba
Using cached rjieba-0.1.11-cp36-abi3-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl (5.7 MB)
Collecting unidic>=1.0.2
Using cached unidic-1.1.0.tar.gz (7.7 kB)
Preparing metadata (setup.py) ... done
Collecting phonemizer
Using cached phonemizer-3.2.1-py3-none-any.whl (90 kB)
Collecting jaxlib<=0.3.6,>=0.1.65
Using cached jaxlib-0.3.5-cp38-none-macosx_11_0_arm64.whl (61.3 MB)
Collecting codecarbon==1.2.0
Using cached codecarbon-1.2.0-py3-none-any.whl (135 kB)
Collecting pyctcdecode>=0.4.0
Using cached pyctcdecode-0.4.0-py2.py3-none-any.whl (45 kB)
Collecting flake8>=3.8.3
Using cached flake8-6.0.0-py2.py3-none-any.whl (57 kB)
Collecting sacremoses
Using cached sacremoses-0.0.53.tar.gz (880 kB)
Preparing metadata (setup.py) ... done
Collecting tensorflow-metal
Using cached tensorflow_metal-0.7.0-cp38-cp38-macosx_12_0_arm64.whl (1.4 MB)
Collecting GitPython<3.1.19
Using cached GitPython-3.1.18-py3-none-any.whl (170 kB)
Collecting datasets!=2.5.0
Using cached datasets-2.7.1-py3-none-any.whl (451 kB)
Collecting scikit-learn
Using cached scikit_learn-1.2.0-cp38-cp38-macosx_12_0_arm64.whl (8.2 MB)
Collecting sudachidict-core>=20220729
Using cached SudachiDict-core-20221021.tar.gz (9.0 kB)
Preparing metadata (setup.py) ... done
Collecting sacrebleu<2.0.0,>=1.4.12
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Collecting Pillow
Using cached Pillow-9.3.0-cp38-cp38-macosx_11_0_arm64.whl (2.9 MB)
Collecting tf2onnx
Using cached tf2onnx-1.13.0-py3-none-any.whl (442 kB)
Collecting sentencepiece!=0.1.92,>=0.1.91
Using cached sentencepiece-0.1.97-cp38-cp38-macosx_11_0_arm64.whl (1.1 MB)
Collecting evaluate>=0.2.0
Using cached evaluate-0.3.0-py3-none-any.whl (72 kB)
Collecting fugashi>=1.0
Using cached fugashi-1.2.1.tar.gz (337 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-xf18599w/fugashi_18a210c9f68f4c1fb6ece4f85f9f7479/setup.py", line 15, in <module>
output, data_files = check_libmecab()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-xf18599w/fugashi_18a210c9f68f4c1fb6ece4f85f9f7479/fugashi_util.py", line 58, in check_libmecab
raise RuntimeError("Could not configure working env. Have you installed MeCab?")
RuntimeError: Could not configure working env. Have you installed MeCab?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
(.env) michaelchung@michaels-mbp transformers %
```
closer! but still problems<|||||>I think here you need to install Mecab through `brew` - can you try to run:
```
brew install mecab
brew install mecab-ipadic
```
and re-run `pip install -e "[dev]"` again?<|||||>I had the same issue when installing, you should make sure to install `fugashi==1.1.2a6` ( ignore the `mecab` part).
You can also follow the short guide from #18355 <|||||>Is a full dev environment needed to start with? Personally it would be quite inspiring to see a PR even if it didn't pass tests.<|||||>@ArEnSc did you managed to open a PR? I think it's ok to leave it as a draft even if the test does not even pass (i.e. eventually no need to install the dev env, at least for the beginning, we can in the worst case take over the PR if any). Let us know what do you think!<|||||>Yeah hey sorry guys! probably sometime this week or today, my day job is iOS Development, it isn't in MLE. I just a moon light side job in NLP and Speech Synthesis in the media creation domain. Looking to transition eventually, hopefully this PR will proof of my capabilities so I won't abandon it =) <|||||>https://github.com/huggingface/transformers/issues/20737 here is the draft, probably generating all the scaffolding soon<|||||>There is recent active work for interfacing multiple backends to rwkv at https://github.com/harrisonvanderbyl/rwkv_chatbot/blob/main/src/rwkvops.py#L914 (list down at end of file)
EDIT: dev discussion happens in the rwkv discord, where unfortunately I am not active<|||||>>
yeah we will be looking into that as soon as I figure out how the architecture works from a high level I might have some questions but Iam tracing the model now<|||||>I have made a very simple and dumb wrapper for RWKV including `RWKVModel.from_pretrained` and `RWKVModel.generate` functions that could maybe serve as inspiration: [RWKV.py](https://github.com/oobabooga/text-generation-webui/blob/a54b91af778ffb89193874a11ede74a0b1b0cd41/modules/RWKV.py)
This depends on the rwkv library: `pip install rwkv==0.0.6`
I'd like to tag @zphang. He recently implemented LLaMA support in transformers. Maybe adding RWKV would interest him as well.<|||||>this is by far some of the best models right now, the performance of 7B is outstanding.
How come the best model is not supported by HF ?<|||||>Because nobody tried implementing it?<|||||>```
We want to have a positive impact on the AI field. We think the direction of more responsible AI is through openly sharing models, datasets, training procedures, evaluation metrics and working together to solve issues. We believe open source and open science bring trust, robustness, reproducibility, and continuous innovation. With this in mind, we are leading [BigScience](https://bigscience.huggingface.co/), a collaborative workshop around the study and creation of very large language models gathering more than 1,000 researchers of all backgrounds and disciplines.
```
Thats HF mission, so I was wondering how come HF has missed the best model in the industry. Making me think about bias behind what this "Open" platform says vs what they do.
And because of that, i was wondering how come HF teams are not giving a hand to port this in.
I saw LlaMA integration going in at flash speed with HF coverage.. and why this hasnt ??<|||||>There is already an open PR by @ArEnSc<|||||>Two things:
If there are open PR, mention their number so we can keep track of what is stale, duplicate etc.
Llama was so fast because people actively wanted to use it. Meta releases something, HF jumps in line and puts a PR together to support it. Since RWKV is not that big, no support. I am waiting eagerly for support...
<|||||>Hi there,
I am also super excited about this model, I think that PR will go on stale as there has been no activity since a while. If someone wants to take the lead on it, I would be happy to assist with @ArthurZucker !<|||||>Well, I won't go into politics wether big or not big company should get community support or not.. having in mind their resources and manpower.
Projects like this, which are *highly* relevant, gets unsupported. Its the trending of github.. what else are we looking for ?
https://github.com/huggingface/transformers/issues/17230
https://github.com/huggingface/transformers/pull/20809
https://github.com/huggingface/transformers/issues/21875
https://github.com/huggingface/transformers/issues/20737 |
transformers | 17,229 | closed | OPT-fix | # What does this PR do?
Quickly fixing 3 testing issues!
cc @patrickvonplaten @ydshieh
| 05-13-2022 10:39:34 | 05-13-2022 10:39:34 | Ok let's merge this one first and then I'll rebase mine here: https://github.com/huggingface/transformers/pull/17228<|||||>@younesbelkada can you also replace:
```
tokenizer = GPT2Tokenizer.from_pretrained("patrickvonplaten/opt_gpt2_tokenizer")
```
in the embedding test with
```
tokenizer = GPT2Tokenizer.from_pretrained(self.path_model)
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, just a nit:
This line under `OPTGenerationTest`
```
model = OPTForCausalLM.from_pretrained(self.path_model)
```
has `self.path_model` undefined.<|||||>And there are still 2 `patrickvonplaten/opt_gpt2_tokenizer` in the current version, but probably these are intended? I will leave this part for you and Patrick. |
transformers | 17,228 | closed | OPT - fix docstring and improve tests slightly | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Some final fixes for OPT that we forgot yesterday
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-13-2022 10:11:00 | 05-13-2022 10:11:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,227 | closed | Adds support for OPT in Flax and TF. | # What does this PR do?
Adds support for OPT in Flax and TF.
Also clean Pytorch code a bit.
## Who can review?
@LysandreJik, @patrickvonplaten, @patil-suraj, @sgugger
Sorry for the two pull requests in a row, pulled from main instead of rebasing and had the entire commit history. Created a new branch to clean a bit. | 05-13-2022 07:54:25 | 05-13-2022 07:54:25 | > Thanks for adding those! Without the `TFOPTForCausalLM`, I don't see the point of adding the TF version of OPT since it can't really be used, so would either not add TF yet or make sure this model is added before merging the PR.
Yes I am not done yet! Sorry if I pinged you a bit early<|||||>@ArthurZucker let me know if the PR is ready for a review or you need help with the tests :-) <|||||>> @ArthurZucker let me know if the PR is ready for a review or you need help with the tests :-)
I just have 1 last test that behaves strangely (it's more about padding tokens and positional embeddings) but the jax code will be ready for review tomorrow 12am. Then I will work quickly on the tf code and the PR should be ready by the end of the week! <|||||>FLAX code is pretty much done. The only test that I can't solve is the difference in output for the jitted model generation!
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17227). All of your documentation changes will be reflected on that endpoint.<|||||>@ArthurZucker could we add a test similar to this one: https://github.com/huggingface/transformers/pull/17359 to both Flax and TF?
@Rocketknight1 @gante could you check the TF version here as well? <|||||>@ArthurZucker,
Do you think we could fix the PR (I think the PR history is a bit messed up). Also totally fine to close this PR and just open a new PR (move all the relevant files to a new PR) if the git correction is too difficult<|||||>> @ArthurZucker,
>
>
>
> Do you think we could fix the PR (I think the PR history is a bit messed up). Also totally fine to close this PR and just open a new PR (move all the relevant files to a new PR) if the git correction is too difficult
Hey, I think we can close it.
Will create a new clean branch |
transformers | 17,226 | closed | Add support for Opt in tf and flax | # What does this PR do?
Adds support for OPT in Flax and TF.
Also clean Pytorch code a bit.
## Who can review?
@LysandreJik, @patrickvonplaten, @patil-suraj, @sgugger
| 05-13-2022 06:48:30 | 05-13-2022 06:48:30 | Closing for a new pull request where history is fixed |
transformers | 17,225 | closed | OPTForCausalLM lm_head input size should be config.word_embed_proj_dim | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The input size of `lm_head` in `OPTForCausalLM` should be `config.word_embed_proj_dim`, not `config.hidden_size`. This is because, like the comment above the changed line says, `lm_head.weight` is tied to `model.decoder.embed_tokens.weight`, so the input size of lm_head should be the output size of embed_tokens (which is `config.word_embed_proj_dim`) and vice-versa.
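A tiny shape check illustrating why the weight tie forces this (the dimensions below are made up for the example and not taken from a real OPT checkpoint):

```python
import torch.nn as nn

vocab_size, hidden_size, word_embed_proj_dim = 50272, 1024, 512   # example values only

embed_tokens = nn.Embedding(vocab_size, word_embed_proj_dim)       # weight: (vocab, proj_dim)
lm_head = nn.Linear(word_embed_proj_dim, vocab_size, bias=False)   # weight: (vocab, proj_dim)
lm_head.weight = embed_tokens.weight                               # tying works: shapes match

# With nn.Linear(hidden_size, vocab_size) instead, the weight would be (vocab, hidden_size)
# and could not be tied to embed_tokens.weight whenever proj_dim != hidden_size.
```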
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-13-2022 06:07:04 | 05-13-2022 06:07:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi!
Thanks for pointing out the issue :)
@patrickvonplaten @ArthurZucker Do we have tests with models that doesn't have the same `hidden_dim` and `word_embed_proj_dim` ? Wondering why the tests are still passing<|||||>For extra information, we noticed this on KoboldAI and our software automatically saves the model the first time it is downloaded to the cache. We do that to help our users store the model in the most suitable format for them for later offline use. So if all tests pass also check the model after it has been saved using huggingface transformers rather than the converters.<|||||>Regarding tests, we should probably add a fast test that randomly initializes a model with `word_embed_proj_dim` != `hidden_size`. Essentially, we could add a test like the following: https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/models/opt/test_modeling_opt.py#L235
Only that we overwrite the `word_embed_proj_dim` variable with something != hidden_size before initializing a random model.
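A rough sketch of what such a fast test could look like, written as a standalone check rather than wired into the existing model tester (the config numbers are arbitrary small values chosen only for speed):

```python
import torch
from transformers import OPTConfig, OPTForCausalLM

def test_opt_with_word_embed_proj_dim_not_equal_hidden_size():
    config = OPTConfig(
        vocab_size=99,
        hidden_size=32,
        word_embed_proj_dim=16,  # deliberately different from hidden_size
        num_hidden_layers=2,
        num_attention_heads=4,
        ffn_dim=64,
        max_position_embeddings=64,
    )
    model = OPTForCausalLM(config).eval()
    input_ids = torch.randint(0, config.vocab_size, (1, 10))
    with torch.no_grad():
        logits = model(input_ids).logits
    assert logits.shape == (1, 10, config.vocab_size)
```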
@vfbd would cool if you could add a test for this - if you have no time that's totally fine as well and we could add a test afterwards<|||||>> Regarding tests, we should probably add a fast test that randomly initializes a model with `word_embed_proj_dim` != `hidden_size`. Essentially, we could add a test like the following:
>
> https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/models/opt/test_modeling_opt.py#L235
>
>
> Only that we overwrite the `word_embed_proj_dim` variable with something != hidden_size before initializing a random model.
> @vfbd would cool if you could add a test for this - if you have no time that's totally fine as well and we could add a test afterwards
I think I can take care of that in my FLAX PR, or should I rather create a new PR? <|||||>> in
A new PR would be great :-)<|||||>Any updates on this?<|||||>Thanks for the ping @mrseeker - this is good for merge IMO :-) |
transformers | 17,224 | open | ALBEF: Align Before Fuse | ### Model description
Align Before Fuse (ALBEF) is a vision-language (VL) model that showed competitive results in numerous VL tasks such as image-text retrieval, visual question answering, visual entailment, and visual grounding.
The authors propose to use text encoder (BERT's first half layers) and image encoder (ViT) to create an aligned representation for respective modality before fusing them together with a multi-modal encoder (BERT's second half layers). The model is trained on multi-modal representation tasks and momentum distillation to achieve state-of-the-art results in VL tasks.
As multi-modal models are gaining more attention in academia/industry, I think ALBEF could be a nice addition to the transformers library.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- There are an official implementation and pre-trained/fine-tuned weights by the authors at this [repo](https://github.com/salesforce/ALBEF)
- Link to the [paper](https://arxiv.org/abs/2107.07651) | 05-13-2022 01:51:08 | 05-13-2022 01:51:08 | @jkgrad What is the state of this issue? If no one is working on this, I would like to implement it.<|||||>Hey @DanielFLevine, we'd love for you to try and contribute that model!
cc @NielsRogge who can help out once he's back from leave :)<|||||>@LysandreJik @NielsRogge Great! I've already started looking over the authors' code. Will reach out with any questions. |
transformers | 17,223 | closed | Add type hints for ProphetNet (Pytorch) | Adding type hints for forward methods in the user-facing classes of the ProphetNet model (PyTorch), as mentioned in #16059
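For readers unfamiliar with the linked issue, the change boils down to annotating signatures in this style (a simplified, illustrative stand-in, not the actual ProphetNet signature):

```python
from typing import Optional, Tuple
import torch

def forward(
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    labels: Optional[torch.LongTensor] = None,
    return_dict: Optional[bool] = None,
) -> Tuple[torch.Tensor, ...]:
    ...
```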
@Rocketknight1
| 05-13-2022 01:41:10 | 05-13-2022 01:41:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> This looks good to me now, thank you! Let me know when you're ready and I'll merge it.
Go ahead! Thanks! |
transformers | 17,222 | closed | (T5) tf.function wrapped model.generate() does not produce the same result as non-wrapped model.generate() | ### System Info
```shell
- `transformers` version: 4.19.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrik @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run the code below:
```
from transformers import TFT5ForConditionalGeneration, AutoTokenizer
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: I need to convert this into a savedmodel.", padding='max_length', return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(model.generate)
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
assert outputs_0 == outputs_1, "Results are not equal."
```
2. Observe results. This is what i get on my machine:
```
tf.Tensor(
[[ 0 1674 2171 67 7 16 236 20819 15 7 8731 561
18980 29 5 1]], shape=(1, 16), dtype=int32)
['<pad> Ich muss dies in ein gespeichertes Modell umwandeln.</s>']
tf.Tensor(
[[ 0 1674 2171 67 7 16 236 20819 15 7 7 7
7 7 7 7 7 7 7 7]], shape=(1, 20), dtype=int32)
['<pad> Ich muss dies in ein gespeichertesssssssssss']
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
[<ipython-input-74-27b3c9e72d47>](https://localhost:8080/#) in <module>()
18 print(tokenizer.batch_decode(outputs_1))
19
---> 20 assert outputs_0 == outputs_1, "Results are not equal."
AssertionError: Results are not equal.
```
This issue gets worse the larger the max_length option in model.generate() gets.
### Expected behavior
```shell
The results are expected to be equal.
```
| 05-12-2022 23:21:00 | 05-12-2022 23:21:00 | @JEF1056 thank you for pointing it out 👍 I can reproduce the issue with transformers==4.19 and 4.18 (4.17 and older versions do not support the `tf.function` wrapper), with and without input padding.
I will look into it and let you know of any findings :) In the recent past, we found numerical instabilities in some compiled functions on CPU, so it may be related.<|||||>Interesting. I've tried with a GPU and used the master branch of the repository as well and results are identical. Unfortunately, I don't have a lot of experience with XLA so i'm not sure if I can be of much help.
Thanks for picking up the issue though!<|||||>@JEF1056 upon further digging in debugging mode, I couldn't find any unexpected behavior. I did encounter the expected behavior, which explains the mismatch:
- `tf.function` compiles and optimizes the graph, which rearranges FP32 operations. Rearranging FP32 operations leads to very minor numerical differences (see [here](https://stackoverflow.com/questions/48957828/floating-point-arithmetic-why-would-order-of-addition-matter) an explanation). We can see this in the encoder forward pass, which has differences in the order of 1e-6;
- Generation is in essence a sequence of forward passes, where past hidden outputs are fed as inputs. What starts as a tiny difference quickly builds up into a larger difference, which at some point results in different tokens.
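To make the first point concrete, the reordering effect can be reproduced with no model at all; a minimal sketch (nothing T5- or TF-specific is assumed here):

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(10_000).astype(np.float32)

s_forward = np.float32(0.0)
for v in x:
    s_forward += v

s_reverse = np.float32(0.0)
for v in x[::-1]:
    s_reverse += v

# float32 addition is not associative, so the two orderings usually disagree in the
# last bits, the same kind of 1e-6-level difference seen in the encoder forward pass.
print(s_forward, s_reverse, s_forward - s_reverse)
```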
The reverse can also be observed: if we pick an input with a stronger signal or a more powerful model, we see smaller differences at a token level. Consider the following example, and try it out with `t5-small`, `t5-base`, and `t5-large`:
```
from transformers import TFT5ForConditionalGeneration, AutoTokenizer
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = TFT5ForConditionalGeneration.from_pretrained("t5-large")
input_ids = tokenizer("translate English to German: This is a very long sentence that is easy to translate because it has common words.", padding='max_length', return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(model.generate)
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
```
The outputs are not exactly the same, but they are sensible. In any case, `tf.function` + generation is something we are working at the moment, stay tuned for further updates (which may alleviate this issue) :D<|||||>Also -- @Rocketknight1 @patrickvonplaten this issue [`tf.function` resulting in different FP32 ops -> different generate results] is something I'm seeing constantly and, curiously, the `tf.function` generation outputs qualitatively worse text. I wonder if there is any way to mitigate this issue and/or if we should try to contact the TF team 🤔 <|||||>@gante I see, and that does make sense. I presume that the issue occurs in T5 and not GPT2 becasue T5 is encoder-decoder, while GPT2 is decoder only?
```
from transformers import TFGPT2LMHeadModel, AutoTokenizer
import tensorflow as tf
import numpy as np
model_name = 'gpt2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFGPT2LMHeadModel.from_pretrained(model_name)
input_ids = tokenizer("This is a sentence that ", return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids, max_length=100)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(lambda x: model.generate(x, max_length=100))
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
assert np.array_equal(outputs_0, outputs_1), "Results are not equal."
```
The above code works fine.
For context, I've been trying to convert t5 to tensorflowjs and though I have got it working before by creating my own generate function in tf.js, having huggingface's generate function directly as part of the savedmodel would really improve its speed.
In the current t5 tf.function generate() wrapper, XlaDynamicUpdateSlice and TensorListConcatV2 ops are used, which pobably won't ever be supported by tf.js, do you know if there is any way to implement tf.function without these?<|||||>@gante from my experience small numerical differences in the magnitude of 1e-6 should not lead to different tokens being generated (in flax they don't). Also the generate outputs of https://github.com/huggingface/transformers/issues/17222#issue-1234576122 look quite bad. Since we don't seem to have this issue in GPT2, could it be that something encoder-decoder specific is not done correctly in `tf.function` ? E.g. the `encoder_attention_mask` or the cache? @JEF1056 could you try running the code also with `use_cache=False` to see if we still get a difference ?
To me it looks like a bug still from our side<|||||>@patrickvonplaten
With use_cache:
```
from transformers import TFT5ForConditionalGeneration, AutoTokenizer
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: This is a very short sentence.", padding='max_length', return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(model.generate)
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
assert outputs_0 == outputs_1, "Results are not equal."
```
```
tf.Tensor([[ 0 644 229 236 1319 7755 49 20144 5 1]], shape=(1, 10), dtype=int32)
['<pad> Das ist ein sehr kurzer Satz.</s>']
tf.Tensor(
[[ 0 644 229 236 1319 7755 5 1 0 0 0 0 0 0
0 0 0 0 0 0]], shape=(1, 20), dtype=int32)
['<pad> Das ist ein sehr kurz.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>']
```
Without use_cache:
```
from transformers import TFT5ForConditionalGeneration, AutoTokenizer
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: This is a very short sentence.", padding='max_length', return_tensors="tf").input_ids
# This is normal model.generate()
outputs_0 = model.generate(input_ids, use_cache=False)
print(outputs_0)
print(tokenizer.batch_decode(outputs_0))
# This is wrapped with tf.function()
wrapped = tf.function(lambda x: model.generate(x, use_cache=False))
outputs_1 = wrapped(input_ids)
print(outputs_1)
print(tokenizer.batch_decode(outputs_1))
assert outputs_0 == outputs_1, "Results are not equal."
```
```
tf.Tensor([[ 0 644 229 236 1319 7755 49 20144 5 1]], shape=(1, 10), dtype=int32)
['<pad> Das ist ein sehr kurzer Satz.</s>']
tf.Tensor(
[[ 0 644 229 236 1319 7755 5 1 0 0 0 0 0 0
0 0 0 0 0 0]], shape=(1, 20), dtype=int32)
['<pad> Das ist ein sehr kurz.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>']
```
Results appear to be the same.
I would also like to bring up that after wrapping generate() in tf.function, it no longer appears to respect stop tokens, which should be something that can be fixed on huggingface's side.
I think @gante 's point that small numerical differences get magnified by generating in an autoregressive fashion is correct, as results diverge wildly the longer the generated sentence is.<|||||>@JEF1056,
Note that the model does generate the EOS token. It's expected to get padding in the end since shapes need to be static when using XLA. You can remove the `<pad>` tokens by doing:
```py
print(tokenizer.batch_decode(outputs_1, skip_special_tokens=True))
```
Still interested in a more in-detail analysis where exactly the differences start to creep in and at what point they become very significant. XLA generation does not look good enough to me to not be a bug<|||||>I see, I didn't realize that XLA shapes need to be static (seems waste compute for longer sequences though). That might pose some difficulties in getting generate() to work with GPT2 (decoder models in general) since inputs can't be padded to length in that case.
<|||||>For some additional context @JEF1056: we have already found (and mitigated) an XLA/non-XLA mismatch (see https://github.com/tensorflow/tensorflow/issues/55682), so we can't rule out conversion problems. Sadly, they are hard to detect, as it relies on numerical debugging with XLA.
I'm going to continue developing XLA + generate, keeping this issue in the backlog -- there is a chance future changes fix the issue we are seeing, or that I stumble across the root cause naturally as I enable the use of `tf.function` on other models.
Thank you for raising the issue, and let us know if you see related problems 🙏 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue is fixed, XLA T5 should be working properly :) |
transformers | 17,221 | closed | Use word index for determining whether a token is a subword when addi… | …ng word labels.
# What does this PR do?
Use repeated word index instead of offset[0] == 0 for assigning pad labels to non-first subwords.
As described in #17220, words like 그 will be split into 2 subwords, both with (0, 1) offsets but with the same word index. This PR ensures that the second subword receives the correct -100 label.
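For context, a minimal sketch of the word-index-based alignment this PR switches to (variable names are illustrative, not the exact ones used in the tokenizer code):

```python
def align_labels_with_tokens(word_ids, word_labels, ignore_index=-100):
    labels = []
    previous_word_id = None
    for word_id in word_ids:
        if word_id is None:                    # special tokens / padding
            labels.append(ignore_index)
        elif word_id != previous_word_id:      # first sub-token of a word keeps its label
            labels.append(word_labels[word_id])
        else:                                  # repeated word index => non-first sub-token
            labels.append(ignore_index)
        previous_word_id = word_id
    return labels
```

With the old `offset[0] == 0` heuristic, a character like 그 that gets split into two sub-tokens with identical offsets would have both pieces labelled; keying on the repeated word index avoids that.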
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17220
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I tried to write a test but the test tokenizer doesn't split my test word. Open to suggestions or maybe it can be a slow test with the pretrained tokenizer.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-12-2022 23:19:52 | 05-12-2022 23:19:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17221). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge do you have suggestions for making a test case for the fast tokenizer that will split the "그" token?
Could I use the pretrained Layoutlmv2 tokenizer and mark the test as slow, or do you have suggestions for how to properly configure a test tokenizer that will split that token?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,220 | closed | LayoutLMv2 Fast Tokenizer improperly aligns labels for non-first subwords with 0 offsets | ### System Info
```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.16.0
- Platform: macOS-12.3.1-x86_64-i386-64bit
- Python version: 3.8.9
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
In [1]: from transformers import LayoutLMv2TokenizerFast
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
In [2]: tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
In [4]: toks = tokenizer(['그'], boxes=[[1,2,3,4]], word_labels=[2])
In [5]: toks
Out[5]: {'input_ids': [101, 1455, 30017, 102], 'token_type_ids': [0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1], 'bbox': [[0, 0, 0, 0], [1, 2, 3, 4], [1, 2, 3, 4], [1000, 1000, 1000, 1000]], 'labels': [-100, 2, 2, -100]}
In [9]: toks.labels
Out[9]: [-100, 2, 2, -100]
```
### Expected behavior
```shell
Since the single word '그' was split into 2 tokens, the second subword should be assigned a -100 value.
This is happening because of logic here:
https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py#L599
where a token's offset is used to determine whether it is the first subword or not.
The problem with '그' is that it is split into 2 subwords, 'ᄀ' and '##ᅳ', which share an offset of (0, 1), so both are given the label of the word.
By using solely the word index to decide whether a token is a non-first subword, this problem can be avoided.
```
| 05-12-2022 22:29:57 | 05-12-2022 22:29:57 | https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py#L599
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,219 | closed | Updated checkpoint support for Sagemaker Model Parallel | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR updates SMP checkpoint support. With these changes SMP optimizer state checkpoints will be saved partially while SMP model weights will be saved in full. Since weights are saved in full, checkpoint behavior will be compatible with `save_pretrained` and `shard_checkpoint`.
- Uses `local_state_dict()` with partial optimizer state saving.
- Uses `smp.save` for optimizer state saving with SMP.
- Uses `smp.load` when loading optimizer state for SMP.
- Reorders weight loading to happen after the model is wrapped for SMP.
- Updated checks for the existence of optimizer checkpoint files, since SMP partial checkpoints contain postfixes in addition to the filename (example: `filename_0_0` or `filename_0_0_0`).
- Adds `load_best_model_at_end` support for SMP.
This PR is created based on the feedback from [previous PR on partial checkpoint support for SMP:](https://github.com/huggingface/transformers/pull/16950)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-12-2022 21:31:51 | 05-12-2022 21:31:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot for your PR! I've left some refactoring suggestions to avoid duplicating code.
Thank you very much for reviewing. I updated the PR. <|||||>Updated based on you suggestions. |
transformers | 17,218 | closed | Handle copyright in add-new-model-like | # What does this PR do?
This makes sure the copyright is switched to the current year when a user uses `transformers-cli add-new-model-like` | 05-12-2022 21:16:16 | 05-12-2022 21:16:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,217 | closed | Black preview | # What does this PR do?
This PR switches `make style`, `make fixup` and `make quality` to use `black --preview`, which reformats the docstrings (also done by `hf-doc-styler` so nothing new here) as well as all error/logger/warning strings to respect the char limit.
This will avoid the annoying comments from the nefarious sgugger on PRs.
Note: for the preview feature there are differences between minor versions, so needed to update the setup a bit. | 05-12-2022 19:14:48 | 05-12-2022 19:14:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,216 | closed | Fixed incorrect error message on missing weight file. | # What does this PR do?
I just started using Hugging Face Transformers for the first time, and encountered this error.
OSError: Error no file named pytorch_model.bin found in directory (...) but there is a file for Flax weights. Use `from_flax=True` to load this model from those weights.
Indeed, I forgot to download `pytorch_model.bin`, but the model I tried to use was not using Flax, so I dug a little bit to see which file the library was looking for.
For me it seems that there was a simple mistake...
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-12-2022 16:57:06 | 05-12-2022 16:57:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,215 | closed | -1e9 constants in T5 implementation | These -1e9 constants are too large for fp16 training
https://github.com/huggingface/transformers/blob/df735d1317994e366ab0edff6c55930e18912b7c/src/transformers/models/t5/modeling_tf_t5.py#L728
https://github.com/huggingface/transformers/blob/df735d1317994e366ab0edff6c55930e18912b7c/src/transformers/models/t5/modeling_tf_t5.py#L746
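For context, float16 cannot represent -1e9 (its finite range tops out around ±65504), so the additive mask silently overflows. A hedged sketch of a dtype-aware alternative (not a fix taken from the library, just the general idea):
```python
import tensorflow as tf

def masked_scores(scores: tf.Tensor, attention_mask: tf.Tensor) -> tf.Tensor:
    # Derive the "very negative" constant from the dtype actually in use instead
    # of hard-coding -1e9, so float16/bfloat16 runs stay within range.
    min_value = tf.constant(scores.dtype.min, dtype=scores.dtype)
    additive = (1.0 - tf.cast(attention_mask, scores.dtype)) * min_value
    return scores + additive

scores = tf.zeros((1, 4), dtype=tf.float16)
mask = tf.constant([[1, 1, 1, 0]])
print(tf.nn.softmax(masked_scores(scores, mask), axis=-1))
```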
Maybe they should be made configurable? | 05-12-2022 14:06:22 | 05-12-2022 14:06:22 | Probably of interest to @ydshieh |
transformers | 17,214 | closed | Bug of the text-classification in examples | ### System Info
```shell
When I finetuned the text classification model based on the glue no trainer script, I found a bug in our script.
The URL is below:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L525
When we use the accelerator for multi-GPU training, the code should change from
if step == len(eval_dataloader)
to
if step == len(eval_dataloader) - 1
Otherwise, it does not filter out the duplicated samples in the last step.
```
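For reference, a hedged sketch of what the corrected evaluation loop looks like (paraphrased from memory rather than copied from the script; `model`, `accelerator` and `metric` stand for the objects the example script already builds):
```python
import torch

def evaluate(model, eval_dataloader, accelerator, metric):
    samples_seen = 0
    model.eval()
    for step, batch in enumerate(eval_dataloader):
        with torch.no_grad():
            outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1)
        predictions, references = accelerator.gather((predictions, batch["labels"]))
        if accelerator.num_processes > 1:
            if step == len(eval_dataloader) - 1:
                # Last step: drop the samples duplicated by the distributed sampler.
                predictions = predictions[: len(eval_dataloader.dataset) - samples_seen]
                references = references[: len(eval_dataloader.dataset) - samples_seen]
            else:
                samples_seen += references.shape[0]
        metric.add_batch(predictions=predictions, references=references)
    return metric.compute()
```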
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Just run the script on a text classification task with the multi-GPU accelerator. The problem occurs in the last evaluation step, where the duplicated samples are not filtered out.
### Expected behavior
```shell
I think it should be fixed soon.
```
| 05-12-2022 13:47:02 | 05-12-2022 13:47:02 | |
transformers | 17,213 | closed | Add support for Perceiver ONNX export | # What does this PR do?
As part of #16308, this PR adds support for exporting `Perceiver` to ONNX 🚀
It introduces support for the following features:
- `masked-lm`, e.g. with `python -m transformers.onnx --feature=masked-lm --model=deepmind/language-perceiver export`
- `sequence-classification`, e.g. with `python -m transformers.onnx --feature=sequence-classification --model=deepmind/language-perceiver export`
- `image-classification`, e.g. with `python -m transformers.onnx --feature=image-classification --model=deepmind/vision-perceiver-conv export`
To achieve this, I made the following changes:
- Added `PerceiverOnnxConfig`.
- Changed parts of the modelling. The operations `.T`, `torch.broadcast_to` and `torch.moveaxis` aren't currently supported to be exported to ONNX by PyTorch, so I built some workarounds.
- Changed the modality check in `onnx.__main__.py` since the model type `perceiver` can have either tokenizer or feature extractor, depending on the concrete model. (There might be a better way to achieve this than my try-except construction.)
- Added Perceiver to ONNX `FeaturesManager`.
- Added Perceiver to `test_onnx_v2.py`.
## Limitations
The `AutoModel` for Perceiver doesn't work without any preprocessors:
```python
model = AutoModel.from_pretrained("deepmind/language-perceiver")
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
tokd = tokenizer("Rhubarb", return_tensors="pt")
tokd["inputs"] = tokd.pop("input_ids")
model(**tokd)
```
gives this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/Users/patrick/Projects/open-source/transformers/notebooks/perceiver-onnx.ipynb Cell 10' in <cell line: 6>()
4 tokd = tokenizer("Rhubarb", return_tensors="pt")
5 tokd["inputs"] = tokd.pop("input_ids")
----> 6 model(**tokd)
File ~/.pyenv-x86/versions/transformers-x86/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Projects/open-source/transformers/src/transformers/models/perceiver/modeling_perceiver.py:866, in PerceiverModel.forward(self, inputs, attention_mask, subsampled_output_points, head_mask, output_attentions, output_hidden_states, return_dict)
864 inputs_without_pos = None
865 if inputs.size()[-1] != self.config.d_model:
--> 866 raise ValueError(
867 f"Last dimension of the inputs: {inputs.size()[-1]} doesn't correspond to config.d_model: {self.config.d_model}. "
868 "Make sure to set config.d_model appropriately."
869 )
871 batch_size, seq_length, _ = inputs.size()
872 device = inputs.device
ValueError: Last dimension of the inputs: 9 doesn't correspond to config.d_model: 768. Make sure to set config.d_model appropriately.
```
An embedding is needed, which is implemented in [`PerceiverTextPreprocessor`](https://huggingface.co/docs/transformers/main/en/model_doc/perceiver#transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor), but not included in the default `PerceiverModel`. Therefore, the ONNX export in this PR doesn't include the `default` feature either.
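For completeness, a small sketch of how the text preprocessor can be attached when building the base model directly (hedged: this mirrors what the task-specific classes do internally and builds a randomly initialized model; it is not something added by this PR):
```python
from transformers import PerceiverConfig, PerceiverModel, PerceiverTokenizer
from transformers.models.perceiver.modeling_perceiver import PerceiverTextPreprocessor

config = PerceiverConfig.from_pretrained("deepmind/language-perceiver")
# The preprocessor embeds the byte ids into vectors of size config.d_model,
# which is what the bare PerceiverModel expects as `inputs`.
model = PerceiverModel(config, input_preprocessor=PerceiverTextPreprocessor(config))

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
tokd = tokenizer("Rhubarb", return_tensors="pt")
outputs = model(inputs=tokd["input_ids"], attention_mask=tokd["attention_mask"])
```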
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: #16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? → Added model to `test_onnx_v2.py`.
## Who can review?
@lewtun @ChainYo Could you have a look? 🤗
@NielsRogge I made some minor changes to your code in `modeling_perceiver.py`. Maybe you'd like to have a look too.
| 05-12-2022 13:32:07 | 05-12-2022 13:32:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @deutschmn this looks like a good start! :tada:
Maybe the `preprocessor` check needs to be updated to fit other models' requirements. |
transformers | 17,212 | closed | update BART docs | # What does this PR do?
Update bart `deocder_attention_mask` docstring.
Fixes #17191 | 05-12-2022 13:10:44 | 05-12-2022 13:10:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,211 | closed | CUDA out of memory in Seq2SeqTrainer class | ### System Info
```shell
- `transformers` version: 4.11.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger @patrickvonplaten
Hello! I am trying to finetune the "sshleifer/distill-pegasus-xsum-16-4" model for a seq2seq generation task (specifically summarization) on my own custom dataset (~1800 training data points) using the Hugging Face Transformers Seq2SeqTrainer, but encountered a CUDA OOM error.
I am trying to follow the [finetune-summarization notebook](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb) mentioned by @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Libraries
```bash
import transformers
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
import nltk
import numpy as np
```
Data
```bash
data_files = {
"train": "data/train.jsonl",
"validation": "data/val.jsonl"
}
raw_datasets = load_dataset('json', data_files=data_files)
```
Load tokenizer and model
```bash
model_checkpoint = 'sshleifer/distill-pegasus-xsum-16-4'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
Process Data
```bash
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
prefix = "summarize: "
else:
prefix = ""
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
inputs = [prefix + doc for doc in examples["document"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples["summary"], max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```
Trainer
```bash
metric = load_metric("rouge")
batch_size = 2
model_name = model_checkpoint.split("/")[-1]
args = Seq2SeqTrainingArguments(
f"{model_name}-finetuned-xsum",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=1,
predict_with_generate=True,
fp16=True,
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
def compute_metrics(eval_pred):
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Rouge expects a newline after each sentence
decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
result["gen_len"] = np.mean(prediction_lens)
return {k: round(v, 4) for k, v in result.items()}
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
trainer.train()
```
Error
```bash
The following columns in the training set don't have a corresponding argument in `PegasusForConditionalGeneration.forward` and have been ignored: summary, document.
***** Running training *****
Num examples = 1599
Num Epochs = 1
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 2
Gradient Accumulation steps = 1
Total optimization steps = 800
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-3435b262f1ae> in <module>
----> 1 trainer.train()
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1310 tr_loss_step = self.training_step(model, inputs)
1311 else:
-> 1312 tr_loss_step = self.training_step(model, inputs)
1313
1314 if args.logging_nan_inf_filter and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)):
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1837 if self.use_amp:
1838 with autocast():
-> 1839 loss = self.compute_loss(model, inputs)
1840 else:
1841 loss = self.compute_loss(model, inputs)
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1871 else:
1872 labels = None
-> 1873 outputs = model(**inputs)
1874 # Save past state if it exists
1875 # TODO: this needs to be fixed and made cleaner later.
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1391 output_attentions=output_attentions,
1392 output_hidden_states=output_hidden_states,
-> 1393 return_dict=return_dict,
1394 )
1395 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1226 output_attentions=output_attentions,
1227 output_hidden_states=output_hidden_states,
-> 1228 return_dict=return_dict,
1229 )
1230 # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
796 attention_mask,
797 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
--> 798 output_attentions=output_attentions,
799 )
800
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions)
320 attention_mask=attention_mask,
321 layer_head_mask=layer_head_mask,
--> 322 output_attentions=output_attentions,
323 )
324 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
207 # self_attention
208 key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
--> 209 value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
210
211 if self.is_decoder:
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
101
102 def forward(self, input: Tensor) -> Tensor:
--> 103 return F.linear(input, self.weight, self.bias)
104
105 def extra_repr(self) -> str:
~/anaconda3/envs/python3/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1846 if has_torch_function_variadic(input, weight, bias):
1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848 return torch._C._nn.linear(input, weight, bias)
1849
1850
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB total capacity; 13.65 GiB already allocated; 11.75 MiB free; 13.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
The error can be re-produced on loading any open-source summarization dataset. `
raw_datasets = load_dataset("xsum")
`
### Expected behavior
```shell
Finetune the summarization model.
```
| 05-12-2022 12:33:01 | 05-12-2022 12:33:01 | Hey @kritika121,
It seems like you don't have enough GPU memory to run your training. Do you maybe have access to a bigger GPU? Otherwise you can try reducing the batch_size, enabling [gradient_checkpointing](https://huggingface.co/docs/transformers/main/en/performance#gradient-checkpointing) or training in [fp16](https://huggingface.co/docs/transformers/main/en/performance#gradient-checkpointing) to save memory.<|||||>Thanks @patrickvonplaten I have batch_size of 2 and fp16 is set too. I tried enabling gradient_checkpointing and it worked for me. Thanks for your help!!
Closing the issue. |
transformers | 17,210 | closed | Add test to ensure models can take int64 inputs | Like it says on the tin, this is a test to ensure that our models can take `tf.int64` inputs. I expect this will cause some models to break, in which case this PR will also include patches to ensure that they can take `tf.int64`. | 05-12-2022 12:32:20 | 05-12-2022 12:32:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I added small patches to all the models that were failing this test. In all cases, the patch shouldn't break `int32` inputs, since I just cast the dtype of the other operand (which was previously invisibly hardcoded as `tf.int32`) to the dtype of the input tensor. |
transformers | 17,209 | closed | Thanks | null | 05-12-2022 11:53:55 | 05-12-2022 11:53:55 | You're welcome! :hugs: <|||||>Thanks again, not quite sure what we're doing <|||||>> You're welcome! :hugs:
Once we were mere men!, now, we are much less!? |
transformers | 17,208 | closed | Add Visual Question Answering (VQA) pipeline | ### Feature request
We currently have [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt) in the library, which, among other tasks, is capable of performing visual question answering (VQA).
It would be great to have a pipeline for this task, with the following API:
```
from transformers import pipeline
pipe = pipeline("vqa")
pipe("cats.png", "how many cats are there?")
```
This pipeline could default to the https://huggingface.co/dandelin/vilt-b32-finetuned-vqa checkpoint. Also check out the [Space](https://huggingface.co/spaces/nielsr/vilt-vqa) that showcases the model.
This can be implemented similar to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines). For an example PR that added a pipeline, see #11598.
### Motivation
A pipeline is required in order to have inference widgets + a task defined at hf.co/tasks.
Moreover, it would be great to do VQA in two lines of code.
### Your contribution
I can definitely assist in this, together with @Narsil, who's the pipeline expert. | 05-12-2022 11:10:15 | 05-12-2022 11:10:15 | Tagging @mishig25 for the widget <|||||>Also LXMERT should handle this task, but likely has a very different API.<|||||>This sounds amazing. Happy to contribute in anyway I can<|||||>I'd love to pick this up!<|||||>Hey @sijunhe, I'm just starting out in open-source, but I'd like to help out however I can! <|||||>@sabarish-srinivasan appreciate the help but I saw this a little late and I am almost done with the PR.
<|||||>@sijunhe No problem, thanks for letting me know! <|||||>@LysandreJik I looked at both ViLT and LXMERT and I don't think it's possible to combine these two into a single pipeline for the following reasons:
1. ViLT formats VQA as a classification task and LXMERT formats VQA as a squad-like QA task. It'd be hard to write a common post-processing
2. ViLT is self-contained within transformers but LXMERT expects some faster-RCNN model to generate the visual features that goes into the model.<|||||>Yes, don't think we should support LXMERT for the pipeline, since it isn't entirely included in the Transformers library.<|||||>Sounds good, let's go with ViLT then!<|||||>Now that #17286 is merged, this issue should be closed now?<|||||>Yes :) Thank you for your contribution @sijunhe! |
transformers | 17,207 | closed | Add UL2: Unifying Language Learning Paradigms | ### Model description
UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and weights (20 billion parameter models): https://github.com/google-research/google-research/tree/master/ul2
The code is based on T5x (which is JAX/FLAX): https://github.com/google-research/t5x | 05-12-2022 10:09:04 | 05-12-2022 10:09:04 | cc @stefan-it @peregilk @agemagician @stancld @edugp FYI might be interesting for you as well :-) <|||||>Is anyone working on porting UL2 to `transformers` already? If not, I am interested in porting it.<|||||>Hey @manuelciosici - think @kamalkraj is working on it . Maybe you guys can sync on how to collaborate? :-)
Happy to help in any way! <|||||>Hi @manuelciosici,
I am trying to understand the [t5x](https://github.com/google-research/t5x) library and loading the model.
We can work together. You can ping me on slack/discord
<|||||>@manuelciosici and @kamalkraj. I am about to start some UL2 training in t5x. I might also contribute here. <|||||>Hello @kamalkraj, regarding the `t5x` library (loading model, etc.), I've done some inference with `LongT5` model in my repo [here](https://github.com/stancld/longt5-eval).<|||||>Thank you so much @stancld
<|||||>> @manuelciosici and @kamalkraj. I am about to start some UL2 training in t5x. I might also contribute here.
Hi @manuelciosici ,
Did you start fine-tuning? Did you identify the t5 gin file required for it.
They have only released `ul2` gin file. Not the full set.
https://github.com/google-research/google-research/issues/1101
<|||||>@kamalkraj Unfortunately, I was handed a tight deadline, so I won't be able to look into UL2 until July.<|||||>no worries! Anybody interested in taking over the UL2 implementation ? Would be happy to help :-)<|||||>I can take a stab at this in the next week if no one else is actively working on it!
I'll hopefully open a PR soon - help is welcome from anyone who would like as well :)<|||||>I've had the model running locally for a while but didn't get around to pushing it to the hub until now 😅
with #17420, merged into master the architecture is already supported (in 4.20).
I've put the weights here for now:
https://huggingface.co/Seledorn/ul2
I think what remains is mostly verifying that we get identical output with the port and the original model. But this is as @kamalkraj noted a bit difficult without the complete gin files. Though the model does give me reasonable outputs, so I believe the conversion is at least mostly correct.
<|||||>That's amazing @DanielHesslow - I'll check them out this week!<|||||>Great job on porting the model!<|||||>Google released weights for 3 UL2 checkpoints. I'm assuming the model in HuggingFace corresponds to the last checkpoint, but just to make sure, that is correct right?<|||||>Yes that's true as I know! cc @DanielHesslow just to be sure as he has ported the checkpoint :-)<|||||>Yeah it's the latest one<|||||>Hello :hand: are you aware of any implementation of the Mixture-of-Denoisers loss? preferably with HF compatibility. Thanks in any case!<|||||>We haven't added this one yet - would you like to open a feature request / PR for it maybe? :-) |
transformers | 17,206 | closed | Traced models serialization and torchscripting fix | # What does this PR do?
- Fixes the issue that was preventing traced models to be TorchScripted
- Fixes the issue that was preventing trace models serialization
- Fixes get_attr issues
Fixes #15974
@jamesr66a Can you try on your end and validate that it solves your issues?
| 05-12-2022 09:12:28 | 05-12-2022 09:12:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There seems to remain a few failures in `torch.fx` tests; the PR can be merged after those are solved!<|||||>@michaelbenayoun I'd like to propose some additional fixes that we discovered were needed to properly trace `T5ForConditionalGeneration`:
https://github.com/jamesr66a/transformers/commit/1a75148346cb471267b59fe7b473f304f6a02691
Can these be added?<|||||>Yes, for some reason the tests do not pass for torch 1.11 (I tested locally on torch 1.10).
I will add those changes too.<|||||>@michaelbenayoun Can I propose one final change to switch the graph surgery workaround to only trigger on older PyTorch versions where it's relevant?
https://github.com/jamesr66a/transformers/commit/5ac7bb737d9c9806051311a16f1799102d456fb8
Otherwise, when we're working on PyTorch nightly, this^ code breaks because it's trying to remove nodes that still have uses<|||||>@jamesr66a I added the gating, but only from version 1.12 as it was failing otherwise.<|||||>@michaelbenayoun Unfortunately, bumping the version check up to `1.12` breaks us. Actually, that was indirectly working around a semantic issue with deleting the concrete arg node. Do you mind augmenting the patch with this:
https://github.com/pbelevich/transformers/commit/e3fce52d1e14b1f75941fcaca7cd3029a6128016<|||||>@sgugger Replaced the creation of a new flag by setting a special value for the `fx_compatible` flag for models that can be traced but not torchscipted (-1).
This flag should take a boolean value 99% of the time anyways.<|||||>@michaelbenayoun This doesn't really work either. I'm not trying to be gratuitously painful here, but the common model tester is at the core of our test suite for the new model addition PRs. Those PRs are huge, and it only thanks to a robust CI that we can make sure the models added actually work with the whole API Transformers offers.
Adding a new flag, or a magic value for an existing flag, just because there is one model that needs different testing is not something we usually do or allow. In both cases, either the contributor or the reviewer will have no idea what your new flag/magic value does, especially since there is no documentation of it anywhere.
As I said before, in those instances where we need to adapt a common test to a specific model, we override it in the tester of said model. cc @LysandreJik and @patrickvonplaten |
transformers | 17,205 | closed | [WIP] add MobileViT model | # What does this PR do?
Add the MobileViT model to Transformers. This is a computer vision model that combines CNNs with transformers: https://machinelearning.apple.com/research/vision-transformer
The model comes in three sizes: small, extra small, and xx-small. There are two heads: image classification and semantic segmentation. Object detection will be added later.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Internal discussion on Slack.)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-12-2022 09:09:09 | 05-12-2022 09:09:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I messed up. Made a new PR instead: https://github.com/huggingface/transformers/pull/17354 |
transformers | 17,204 | open | [Kernel Fusion] Training benchmarks of Torchdynamo + AOTAutograd + NVFuser(many models) | Note to maintainers: We are using this PR to collaborate and there is no intention yet to merge anything, so please ignore unless you want to experiment with the latest auto-speedups.
## What was the issue with the previous AOTAutograd integration?
So, there was some investigation into applying AOTAutograd a couple months ago in this PR (https://github.com/huggingface/transformers/pull/15264). Although the performance results were quite promising, @stas00 and I found one major blocker - the potential for incorrect semantics. AOTAutograd is a tracing-based approach, and as such, it's fairly difficult for it to guarantee that its semantics are always correct. For example, data-dependent control flow, use of third-party libraries (like Numpy), or modification of global state all posed problems for integrating AOTAutograd into HuggingFace. Considering that HF has >100 models (and is adding more every day!), the burden of needing to ensure that AOTAutograd produces correct results would have been quite burdensome.
## TorchDynamo to the rescue
Luckily, now, there's another solution in the form of [Torchdynamo](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361) (from @jansel)! In contrast to tracing based approaches like `jit.trace` and AOTAutograd, Torchdynamo is *sound* - it should never produce incorrect results (modulo bugs). In comparison to approaches like `jit.script`, Torchdynamo is much more *complete* - it should allow any PyTorch code to be able to run, although it may not always speed things up.
The central approach that TorchDynamo takes is that as opposed to trying to live at the AST level (i.e. `jit.script`) or the object-level (i.e. tracing like `jit.trace`), it lives at the Python bytecode level. This is similar to the approach that language JITs like Javascript's V8 or JVM's Hotspot take. By living at this level, it's able to ensure that it can support *all* Python, as it can always fall back to eager-mode execution. Let's take an example of some code that would have been very problematic previously.
```
def f(x):
a = x * 2
b = a + torch.from_numpy(np.randn(5))
if b.sum() > 0:
return d.sin().sin()
else:
return d.cos().cos()
```
Not only does this have data-dependent control flow - it also has calls to external libraries that aren't PyTorch! (numpy in this case). TorchDynamo (morally) would rewrite this code into something like this:
```
def block1(x, np_tensor):
a = x * 2
b = a + np_tensor
return b
def block2(b):
return b.sin().sin()
def block3(b):
return b.sin().sin()
def f_dynamo(x):
b = block1(x, torch.from_numpy(np.randn(5))
if b.sum() > 0:
return block2(b)
else:
return block3(b)
```
Note that `block1`, `block2`, and `block3` are just simple straight line functions - exactly what AOTAutograd can handle! So, we can now apply AOTAutograd to each of those blocks.
In this way, TorchDynamo and AOTAutograd complement each other - TorchDynamo resolves the dynamic/non-traceable behavior that AOTAutograd can't handle, and AOTAutograd then provides static compilation that handles things like PyTorch's autograd.
So, what do you need to do to use AOTAutograd with Torchdynamo? Well, it's simple!
```
import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy
with torchdynamo.optimize(aot_autograd_speedup_strategy):
# run your model here!
```
All of the above is simply to capture the graphs in the first place. However, after capturing the graphs, we need to actually speed them up. To do so, we pass them to [NVFuser](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/), a PyTorch-native compiler for GPUs.
## Results
This script primarily comes from a great effort from @anijain2305. However, I want to note a couple of things.
1. In contrast with the pure AOTAutograd integration, where our benchmark only covered 3.5 models (and had some tricky to debug correctness issues), it was fairly trivial to extend this benchmarking to 14 models (with correctness testing for all of them!) In fact, the main bottleneck to adding more is just figuring out how to run more models (I pretty much exhausted all of the AutoConfig ones I could run easily).
2. For the most part, TorchDynamo + AOTAutograd improves both performance and memory usage. On some models, quite significantly (1.4x+ for MobileBert, FNet, and Albert), but it generally improves performance for nearly all models.
3. For many of these models, we *can't* produce a single graph to compile, often due to Numpy usage. Here, it's crucial that torchdynamo passes multiple graphs to AOTAutograd.
4. Currently, we feed the graphs produced by TorchDynamo and AOTAutograd into NVFuser. But, in the future, other backends should have no issues integrating into this as well (and in fact, we *have* some extra integrations, like TensorRT).
Run on A100:
```
$ python hf_dynamo_aot.py --run-dynamo-aot-efficient --nvfuser
```
Results:
| model | dtype | name | time (s) | mem (GB) | speedup | mem comp ression |
|:---------------------------|:---------------|---------------------|-----------:|-----------:|----------:|------------------:|
| BertForMaskedLM | float32 |eager | 0.040 | 3.521 | 1.000 | 1.000 |
| BertForMaskedLM | float32 |dynamo_aot_efficient | 0.037 | 3.516 | 1.094 | 1.001 |
| BertForMaskedLM | float16 |eager | 0.027 | 1.880 | 1.000 | 1.000 |
| BertForMaskedLM | float16 |dynamo_aot_efficient | 0.023 | 1.885 | 1.155 | 0.997 |
| BertForMaskedLM | bfloat16 |eager | 0.027 | 1.874 | 1.000 | 1.000 |
| BertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.023 | 1.867 | 1.154 | 1.003 |
| AlbertForMaskedLM | float32 |eager | 0.081 | 6.070 | 1.000 | 1.000 |
| AlbertForMaskedLM | float32 |dynamo_aot_efficient | 0.056 | 3.943 | 1.442 | 1.539 |
| AlbertForMaskedLM | float16 |eager | 0.046 | 2.908 | 1.000 | 1.000 |
| AlbertForMaskedLM | float16 |dynamo_aot_efficient | 0.035 | 1.971 | 1.338 | 1.475 |
| AlbertForMaskedLM | bfloat16 |eager | 0.048 | 2.866 | 1.000 | 1.000 |
| AlbertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.035 | 1.972 | 1.374 | 1.453 |
| GPT2LMHeadModel | float32 |eager | 0.055 | 4.632 | 1.000 | 1.000 |
| GPT2LMHeadModel | float32 |dynamo_aot_efficient | 0.043 | 3.791 | 1.280 | 1.222 |
| GPT2LMHeadModel | float16 |eager | 0.036 | 2.426 | 1.000 | 1.000 |
| GPT2LMHeadModel | float16 |dynamo_aot_efficient | 0.029 | 2.018 | 1.213 | 1.203 |
| GPT2LMHeadModel | bfloat16 |eager | 0.036 | 2.425 | 1.000 | 1.000 |
| GPT2LMHeadModel | bfloat16 |dynamo_aot_efficient | 0.030 | 1.998 | 1.208 | 1.214 |
| LongformerForMaskedLM | float32 |eager | 0.121 | 4.591 | 1.000 | 1.000 |
| LongformerForMaskedLM | float32 |dynamo_aot_efficient | 0.120 | 4.585 | 1.006 | 1.001 |
| LongformerForMaskedLM | float16 |eager | 0.096 | 2.711 | 1.000 | 1.000 |
| LongformerForMaskedLM | float16 |dynamo_aot_efficient | 0.096 | 2.705 | 1.005 | 1.002 |
| T5ForConditionalGeneration | float32 |eager | 0.103 | 8.300 | 1.000 | 1.000 |
| T5ForConditionalGeneration | float32 |dynamo_aot_efficient | 0.098 | 7.831 | 1.050 | 1.060 |
| DistilBertForMaskedLM | float32 |eager | 0.045 | 3.492 | 1.000 | 1.000 |
| DistilBertForMaskedLM | float32 |dynamo_aot_efficient | 0.043 | 3.497 | 1.038 | 0.999 |
| DistilBertForMaskedLM | float16 |eager | 0.026 | 1.870 | 1.000 | 1.000 |
| DistilBertForMaskedLM | float16 |dynamo_aot_efficient | 0.027 | 1.871 | 0.963 | 0.999 |
| DistilBertForMaskedLM | bfloat16 |eager | 0.026 | 1.860 | 1.000 | 1.000 |
| DistilBertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.027 | 1.861 | 0.986 | 1.000 |
| RobertaForMaskedLM | float32 |eager | 0.157 | 12.366 | 1.000 | 1.000 |
| RobertaForMaskedLM | float32 |dynamo_aot_efficient | 0.135 | 12.341 | 1.164 | 1.002 |
| RobertaForMaskedLM | float16 |eager | 0.098 | 6.573 | 1.000 | 1.000 |
| RobertaForMaskedLM | float16 |dynamo_aot_efficient | 0.088 | 6.567 | 1.114 | 1.001 |
| RobertaForMaskedLM | bfloat16 |eager | 0.101 | 6.579 | 1.000 | 1.000 |
| RobertaForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.088 | 6.559 | 1.140 | 1.003 |
| GPT2LMHeadModel | float32 |eager | 0.123 | 9.292 | 1.000 | 1.000 |
| GPT2LMHeadModel | float32 |dynamo_aot_efficient | 0.098 | 7.108 | 1.256 | 1.307 |
| GPT2LMHeadModel | float16 |eager | 0.080 | 4.610 | 1.000 | 1.000 |
| GPT2LMHeadModel | float16 |dynamo_aot_efficient | 0.067 | 3.767 | 1.182 | 1.224 |
| GPT2LMHeadModel | bfloat16 |eager | 0.081 | 4.779 | 1.000 | 1.000 |
| GPT2LMHeadModel | bfloat16 |dynamo_aot_efficient | 0.068 | 3.763 | 1.191 | 1.270 |
| ElectraForMaskedLM | float32 |eager | 0.074 | 6.257 | 1.000 | 1.000 |
| ElectraForMaskedLM | float32 |dynamo_aot_efficient | 0.064 | 6.258 | 1.151 | 1.000 |
| ElectraForMaskedLM | float16 |eager | 0.042 | 3.356 | 1.000 | 1.000 |
| ElectraForMaskedLM | float16 |dynamo_aot_efficient | 0.039 | 3.347 | 1.092 | 1.003 |
| ElectraForMaskedLM | bfloat16 |eager | 0.044 | 3.367 | 1.000 | 1.000 |
| ElectraForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.039 | 3.341 | 1.124 | 1.008 |
| FNetForMaskedLM | float32 |eager | 0.055 | 4.974 | 1.000 | 1.000 |
| FNetForMaskedLM | float32 |dynamo_aot_efficient | 0.038 | 2.802 | 1.429 | 1.775 |
| ConvBertForMaskedLM | float32 |eager | 0.090 | 5.809 | 1.000 | 1.000 |
| ConvBertForMaskedLM | float32 |dynamo_aot_efficient | 0.085 | 5.795 | 1.058 | 1.002 |
| ConvBertForMaskedLM | float16 |eager | 0.064 | 3.021 | 1.000 | 1.000 |
| ConvBertForMaskedLM | float16 |dynamo_aot_efficient | 0.062 | 3.009 | 1.024 | 1.004 |
| MobileBertForMaskedLM | float32 |eager | 0.104 | 2.474 | 1.000 | 1.000 |
| MobileBertForMaskedLM | float32 |dynamo_aot_efficient | 0.069 | 2.576 | 1.499 | 0.961 |
| MobileBertForMaskedLM | float16 |eager | 0.101 | 1.329 | 1.000 | 1.000 |
| MobileBertForMaskedLM | float16 |dynamo_aot_efficient | 0.067 | 1.423 | 1.499 | 0.934 |
| MobileBertForMaskedLM | bfloat16 |eager | 0.100 | 1.330 | 1.000 | 1.000 |
| MobileBertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.067 | 1.423 | 1.504 | 0.935 |
| CamembertForMaskedLM | float32 |eager | 0.075 | 6.312 | 1.000 | 1.000 |
| CamembertForMaskedLM | float32 |dynamo_aot_efficient | 0.065 | 6.317 | 1.151 | 0.999 |
| CamembertForMaskedLM | float16 |eager | 0.047 | 3.376 | 1.000 | 1.000 |
| CamembertForMaskedLM | float16 |dynamo_aot_efficient | 0.044 | 3.366 | 1.084 | 1.003 |
| CamembertForMaskedLM | bfloat16 |eager | 0.049 | 3.390 | 1.000 | 1.000 |
| CamembertForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.044 | 3.370 | 1.113 | 1.006 |
| LayoutLMForMaskedLM | float32 |eager | 0.077 | 6.305 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | float32 |dynamo_aot_efficient | 0.067 | 6.305 | 1.149 | 1.000 |
| LayoutLMForMaskedLM | float16 |eager | 0.045 | 3.371 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | float16 |dynamo_aot_efficient | 0.042 | 3.373 | 1.089 | 0.999 |
| LayoutLMForMaskedLM | bfloat16 |eager | 0.047 | 3.389 | 1.000 | 1.000 |
| LayoutLMForMaskedLM | bfloat16 |dynamo_aot_efficient | 0.042 | 3.371 | 1.118 | 1.005 |
### Limitations
There are a couple of limitations today (that we're working on addressing).
1. Like AOTAutograd, this pipeline currently requires static shape specialization. That is, when the input shapes change, we'll need to recompile.
2. The interaction with PyTorch's distributed features is somewhat untested.
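For (1), a practical workaround in the meantime is to keep input shapes fixed, e.g. by padding every batch to a constant length. A quick sketch (the model/tokenizer choice and the length of 128 are just illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["example input one", "a second example input"]

# pad/truncate every batch to the same fixed length so compiled graphs can be reused
batch = tokenizer(texts, padding="max_length", max_length=128, truncation=True, return_tensors="pt")
```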
### Reading resources:
AOTAutograd: https://docs.google.com/presentation/d/1rTt0BR2KChDQQTks2hHUtvHxtHQKwgQHVNrmbhj0byk/edit?usp=sharing
TorchDynamo: https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361
Min-Cut rematerialization: https://dev-discuss.pytorch.org/t/min-cut-optimal-recomputation-i-e-activation-checkpointing-with-aotautograd/467/7
NVFuser: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/ | 05-12-2022 08:39:24 | 05-12-2022 08:39:24 | Installation Instructions:
```
# install torch-nightly
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly
# install functorch (and reinstall after `git pull` later if need to sync up)
git clone https://github.com/pytorch/functorch
cd functorch
rm -rf build
pip install -e .[aot]
cd ..
git clone https://github.com/pytorch/torchdynamo
cd torchdynamo
pip install -r requirements.txt
python setup.py develop
```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17204). All of your documentation changes will be reflected on that endpoint.<|||||>I was able to reproduce the speed ups/memory compression. great work, @Chillee and @anijain2305!
So is this planned to be officially released in pt-1.12?
That is, when will the API be stable, so that we can start integrating it and writing examples for the users?
<|||||>> So is this planned to be officially released in pt-1.12?
We plan to have an "release" of torchdynamo and functorch that correspond to the official PyTorch 1.12 release, yes. We'll also be building binaries of functorch (and possibly dynamo) for easier install. However, this is different from a "stable" release in PyTorch core (where we make BC guarantees, will announce it officially, etc.)
> When will the API be stable that is and then we can start integrating / writing examples for the users?
I think the API is likely stable enough (and in general, the API surface of Dynamo is fairly minimal from the user side!). We can likely commit to supporting the usages of Torchdynamo in HF (which I expect to primarily just involve turning on context managers), although cc: @jansel on this point..<|||||>Thank you for clarifying, Horace.
We can of course add experimental support for this feature and tag it as such, since we won't want to maintain multiple APIs if they change in pt-1.13.
<|||||>> we won't want to maintain multiple APIs if they will change in pt-1.13.
The APIs (from the torchdynamo side) should be stable.<|||||>Continuing from our discussion on slack so that others can see and participate:
> Horace: How this can be integrated into `transformers`:
There are 2 ways HF transformers are used:
1. a user writing their own training loop and just using the model - we will document how they can enable TorchDynamo there - this is the easiest, as there is no API to create on our side and no BC to support - just keeping the docs and examples up-to-date
2. a user using HF Trainer or Accelerate - there we would need to add a flag which will turn TorchDynamo on automatically, same as a user chooses which --optim to use - here we have to be careful with designing a backward compatible API - your team and ours will need to discuss the various options that the user should be able to set via the command line - ideally a single flag that can have multiple values - as there is already a myriad of options, so we would want to keep it tight.
----------------
Let me ping @sgugger - Sylvain, do you feel we could integrate this into HF Trainer and Accelerate? It's just a few lines of code that make the code run faster and use less memory - with some models having little to no improvements and others with much larger impacts - please see the OP for the benchmark table.
before:
```
out = model(**train_inputs).loss.abs().sum()
```
after:
```
import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy
[...]
with torchdynamo.optimize(aot_autograd_speedup_strategy):
out = model(**train_inputs).loss.abs().sum()
```
Horace is saying that for pt-1.12 it'd be just:
```
import torchdynamo
[...]
with torchdynamo.optimize("nvfuser"):
out = model(**train_inputs).loss.abs().sum()
```
<|||||>I don't mind adding an integration to the `Trainer` and/or `Accelerate`, it looks like it's just a matter of adapting the context manager [here](https://github.com/huggingface/transformers/blob/18d6b356c5a0b800907fe19860b4644db95ea46b/src/transformers/trainer.py#L2187) and we do have utils to create lists of context managers.
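For illustration, a minimal sketch of what that could look like on our side (the `torchdynamo` training argument and the helper below are hypothetical, not the final API):
```python
import contextlib

import torchdynamo
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy


def torchdynamo_context(args):
    # `args.torchdynamo` is a hypothetical TrainingArguments flag, used only for this sketch
    if getattr(args, "torchdynamo", None) == "aot_autograd":
        return torchdynamo.optimize(aot_autograd_speedup_strategy)
    return contextlib.nullcontext()


# the forward/backward of a training step would then run under:
# with torchdynamo_context(self.args):
#     loss = self.compute_loss(model, inputs)
```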
In terms of control, users are starting to be a bit confused with the huge number of training arguments we have, so trying to keep the flags/args of this new feature to a bare minimum would be great!<|||||>I need to adapt the install instructions to make it easy to build on nightly CI, I think this should do:
```
pip install git+https://github.com/pytorch/functorch#egg=functorch[aot]
pip install git+https://github.com/pytorch/torchdynamo
```
@Chillee, could you please validate that I'm not missing anything? The original was:
```
# install functorch (and reinstall after `git pull` later if need to sync up)
git clone https://github.com/pytorch/functorch
cd functorch
rm -rf build
pip install -e .[aot]
cd ..
git clone https://github.com/pytorch/torchdynamo
cd torchdynamo
pip install -r requirements.txt
python setup.py develop
```
--------------
Actually, for the latter, shouldn't `python setup.py develop` be `pip install -e .` for consistency?
Thank you!
<|||||>@stas00 @Chillee unfortunately, these commands above do not do the job. Any chance you can update the instructions on correctly installing torchdynamo? I'm getting issues with different versions of torch (even if I install the nightly).<|||||>Note that in the nightlies, `dynamo` is included in PyTorch as `import torch._dynamo as dynamo`. The latest instructions I have are:
```py
pip install numpy
pip install --pre torch[dynamo] --extra-index-url https://download.pytorch.org/whl/nightly/cu117/
```<|||||>
Thanks for the info @sgugger
> Note that in the nightlies, `dynamo` is included in PyTorch as `import torch._dynamo as dynamo`. The latest instructions I have are:
There is something wrong going on though. When I try to install the nightlies with the command you provided above (I also tried `--force-reinstall`), it still complains as shown below.
`ModuleNotFoundError: No module named 'torch._dynamo'`
It is weird that it worked once a few days back and all the torchdynamo sanity checks passed; then, an hour later, I tried to rerun the experiment and got the error above. Looks like it is not stable at all, at least for now. |
transformers | 17,203 | closed | fixed bug in run_mlm_flax_stream.py | # What does this PR do?
Fixed a bug caused by additional keys (`id`, `text`) referring to non-list values when concatenating samples. An alternative option is to drop these columns in the dataset before passing it to `iter`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17132 by @HLasse.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten (tagged in bug)
@patil-suraj (assigned to bug)
| 05-12-2022 08:06:24 | 05-12-2022 08:06:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>No problem. Tried using the `remove_columns ` argument as in the example but the map functions do not take that argument (probably easy to add):
```python
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
remove_columns=column_names,
)
```
Similarly, it might be nice for consistency to add the `.column_names` property to the IterableDataset.<|||||>> No problem. Tried using the `remove_columns ` argument as in the example but the map functions do not take that argument (probably easy to add):
>
> ```python
> tokenized_datasets = dataset.map(
> tokenize_function,
> batched=True,
> remove_columns=column_names,
> )
> ```
>
> Similarly, it might be nice for consistency to add the `.column_names` property to the IterableDataset.
Pinging @lhoestq here. Is it not possible to pass `remove_columns` to a streaming dataset?<|||||>`remove_columns` does exist:
https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.IterableDataset.map.remove_columns<|||||>Indeed it does. I was using datasets version `1.17.0`. Rerunning with an updated version indeed resolved the issue. <|||||>Another note here: the `max_seq_length` help text states that each text is truncated to max_seq_length, but from what I can read in `advance_iter_and_group_samples` the samples are concatenated and split into groups of max_seq_length (with the remainder being discarded).
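For clarity, roughly the grouping behaviour I mean (a minimal sketch with made-up names, not the script's actual code):
```python
# concatenate incoming tokenized samples and emit fixed-size blocks of max_seq_length;
# whatever is left in the buffer at the end (the remainder) is dropped
def group_token_stream(token_stream, max_seq_length):
    buffer = []
    for token_ids in token_stream:  # each item is the token-id list of one text
        buffer.extend(token_ids)
        while len(buffer) >= max_seq_length:
            yield buffer[:max_seq_length]
            buffer = buffer[max_seq_length:]


# e.g. list(group_token_stream([[1, 2, 3], [4, 5, 6, 7]], 3)) -> [[1, 2, 3], [4, 5, 6]]
```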
I would probably change it from:
```python
max_seq_length: Optional[int] = field(
default=None,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated. Default to the max input length of the model."
},
)
```
to:
```python
max_seq_length: Optional[int] = field(
default=None,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences are concatenated in groups of max_seq_length. Default to the max input length of the model."
},
)
``` <|||||>> Another note here, seems like the max_seq_length state that each text is truncated to e.g. max_seq_length, but from what I can read in the `advance_iter_and_group_samples` it seems like they are concatenated and split in groups of max_seq_length (with the remainder being discarded)
>
> I would probably change it from:
>
> ```python
> max_seq_length: Optional[int] = field(
> default=None,
> metadata={
> "help": "The maximum total input sequence length after tokenization. Sequences longer "
> "than this will be truncated. Default to the max input length of the model."
> },
> )
> ```
>
> to:
>
> ```python
> max_seq_length: Optional[int] = field(
> default=None,
> metadata={
> "help": "The maximum total input sequence length after tokenization. Sequences are concatenated in groups of max_seq_length. Default to the max input length of the model."
> },
> )
> ```
Good catch! Feel free to open another PR if you would like to fix this. |
transformers | 17,202 | closed | BLOOM | # What does this PR do?
Integrating the BigScience converted models into the HuggingFace library!
Original PR: https://github.com/thomwolf/transformers/pull/2 that I directly moved here
- [x] add a generation test with a small model pushed on the hub
- [x] slow tests needs to be modified accordingly
- [ ] add final credits to all reviewers
cc @thomasw21 @thomwolf @sgugger @stas00
EDIT: PR moved at https://github.com/huggingface/transformers/pull/17474 | 05-12-2022 08:05:41 | 05-12-2022 08:05:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @stas00 !
Thank you very much for your comment !!
1-
Yes it will be changed to BLOOM most probably in the next commit
2-
I like the idea of making it modulable (enable/disable alibi and embed norm) so that both models would work. But in this case 13b-en could be understood as a smaller version of BLOOM which is not the case. What do you think?
3-
I managed to convert the small models using this arch therefore should not be a problem to use this arch for them. There is still the question of the naming (should they still be named BLOOM?) <|||||>> 1- Yes it will be changed to BLOOM most probably in the next commit
Perfect!
> 2- I like the idea of making it modulable (enable/disable alibi and embed norm) so that both models would work. But in this case 13b-en could be understood as a smaller version of BLOOM which is not the case. What do you think?
That's a good point. It's a sort of Pre-BLOOM :)
> 3- I managed to convert the small models using this arch therefore should not be a problem to use this arch for them. There is still the question of the naming (should they still be named BLOOM?)
OK, let's discuss the last 2 on slack
<|||||>2. Maybe something I don't understand is why make it modular? It feels like having another GPT2, no? 13B should more or less fit inside GPT-2, no? <|||||>> 13B should more or less fit inside GPT-2, no?
Well, we have gone through this 6 months ago, you can definitely re-read the discussions. There were 3 things that needed to be changed in HF's GPT2, which wasn't producing the same output under fp16.<|||||>Yeah I read it, `layernorm` is fixed by torch, the other two can be implemented there the same way they are implemented here (you just need to make it modular as well no?).<|||||>Unfortunately the `transformers`'s current policy is against making things modular. So we can't add anything to gpt2
I thought that perhaps Bloom could be an exception but I won't be surprised if this will not be allowed.<|||||>Okay actually the `badbmm` was implemented there as well. So really we're missing only one activation which is jitted `gelu_fast`. Well why would we make this one modular to the point of really being REALLLY close to `gpt-2` then? If so the other course of action is building a new skeleton (which seems overkill for a change in activation).<|||||>yes already tried pushing for it - and it wasn't approved.<|||||>@sgugger thank you very much for your comments!
For the tokenizer since the models has not been pushed yet on the hub, I had to "hotfix" this by explicitly giving the path to the [bigscience tokenizer](https://huggingface.co/bigscience/tokenizer). Do you think it is a good idea to push this tokenizer to the [debug model's hub](https://huggingface.co/bigscience/bigscience-small-testing) ?
Also I have notived that Bloom is the only model on HF that does not have a slow tokenizer *and* has a fast tokenizer (usually it is either both or only the slow tokenizer). <|||||>5 more small tests and we should be good!!<|||||>1 test left!<|||||>All tests finally passed!! I'll refactor the code with the suggested final changes and may ping you for a new review<|||||>I will need to modify the slow tests to add our custom ones<|||||>Just a small note on a test. Due to some stochasticity (since we are taking a random slice) [this test](https://github.com/younesbelkada/transformers/blob/cdf41e8f309a6744f4e1488bffc7be76503ccd6d/tests/models/bloom/test_modeling_bloom.py#L215) does not always pass with `atol=4e-2` (sometimes it passes with `5e-2` or `6e-2`). Therefore I've put `atol=1e-1` to be sure it passes. How accurate are we expecting this test to be? I may be wrong but I think the operations could not match at 100% (due to tensor slicing for example)<|||||>EDIT: Slow tests seems to work fine on the GPU, but batched generation seem to not work, I have to investigate that!<|||||>All tests are passing! Let me know if you need any more modification @LysandreJik @sgugger <|||||>Thanks @thomasw21 for the comments!
Agreed with you, regarding the alibi positional embeddings. What I'll do I think is to create the positional embeddings on-the-fly on the forward pass (since you have access to the input sequence length there). I was just worried about the computational cost of it (re-computing alibi at each inference step is more costly than computing it once) - but for the reward we get (making the model agnostic to the sequence length) I think that it's worth it
<|||||>>What I'll do I think is to create the positional embeddings on-the-fly on the forward pass
I wonder if this may break deepspeed zero-3. All params should be created when the model is created. But if it's not a param it probably should be fine.
See this issue: https://github.com/microsoft/DeepSpeed/issues/1757<|||||>If by param you mean torch.nn.Parameter then the alibi tensor is not a param. Since these embeddings are not learned you can just use them as a non param tensor. I think that it should be fine and will not break deepspeed zero-3<|||||>Looks good so far! Think we have to revisit the `dtype` config param here though - I'm against adding it to the config IMO the user should define it at runtime by passing `torch_type` to the model and then the layers relevant logic should not be:
```py
if config.dtype == ...
```
but rather:
```py
if inputs_embeds.dtype == ...
```
cc @sgugger @stas00 <|||||>> Looks good so far! Think we have to revisit the `dtype` config param here though - I'm against adding it to the config IMO the user should define it at runtime by passing `torch_type` to the model and then the layers relevant logic should not be:
>
> ```python
> if config.dtype == ...
> ```
>
> but rather:
>
> ```python
> if inputs_embeds.dtype == ...
> ```
>
> cc @sgugger @stas00
Thank you for the comments! I agree with the fact that we should stay in line with what is done currently and should not add any extra logic. I have applied your suggested changes, there is no more explicit initialization with the dtype from the config + there should not be any logic such as `if config.dtype == ...`
But I would still keep the `torch_dtype` param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.
<|||||>> But I would still keep the torch_dtype param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.
FYI, `torch_dtype` gets automatically added to all saved models' config files since that feature was added, so you don't need to do anything special about it. Via `save_pretrained` that is.<|||||>> > But I would still keep the torch_dtype param in the config file because in Megatron-DS this parameter is quite important, it helps explicitly keeping track on the precision that has been used during training.
>
> FYI, `torch_dtype` gets automatically added to all saved models' config files since that feature was added, so you don't need to do anything special about it. Via `save_pretrained` that is.
Yes but for bloom models I save the models + config files using the `convert_bloom_to_pytorch.py`script that does not use `save_pretrained`, that is also why I want to keep them in the config file<|||||>Ah, yes, in that case - yes, please, to adding manually the correct `torch_dtype` - thank you, @younesbelkada!<|||||>Added some final changes + suggestions from @thomasw21 ! Thanks to alibi shifting the tests pass with a much lower tolerance.
LGTM now, let me know if you see any other changes<|||||>Yeah let me close it and create a new PR!<|||||>Moved the PR at #17474<|||||>> The git commit history seems to be messed up - should we maybe open a new PR here?
It appears to be due to a broken merge commit here: https://github.com/huggingface/transformers/pull/17202/commits/06d98db76f9a958bd68d8158411674ba62856de1 Didn't really need a new PR, but rolling back the bad commit. But oh well it's done.
|
transformers | 17,201 | closed | a memory leak in qqp prediction using bart | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I met the same issue as #11011. Without `--eval_accumulation_steps`, it causes CUDA out of memory. With it, it runs out of RAM and is killed by the system.
I only ran prediction on the GLUE QQP dataset using BART, without fine-tuning. Since QQP has a large test set (300k examples), prediction got slower and slower, and finally ran out of memory.
This is the script to reproduce:
```
CUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24
```
### Expected behavior
```shell
Prediction without running out of memory.
```
 | 05-12-2022 08:04:29 | 05-12-2022 08:04:29 | There is nothing we can do to help with that, as you don't seem to have the RAM necessary to hold all predictions. The only advice I have is that you should predict on parts of the dataset, not the whole thing.
This is not a bug in Transformers, removing the label.<|||||>Sorry, I don't think that. I have 512GB RAM. And I can conduct training and evaluation well, but this issue only occurs during prediction.<|||||>The training does not accumulate predictions and the evaluation uses the evaluation set which is smaller.<|||||>But I don't understand why a tensor of shape (300k,) will exceed RAM? Does trainer save intermediate hidden states during prediction?<|||||>Mmmmmm, it may be that the model is outputting more tensors than just the logits. I see it has `use_cache=True` in its config, can you try again by setting it to `False`?<|||||>OK! I try it now! Thanks!<|||||>Sorry, it doesn't work.<|||||>It takes more time and more memory with every 100 steps.<|||||>I have found how to solve it!<|||||>The `logits` in the [https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2635](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2635) returns two tensors of shape (batch_size, num_classes) and (batch_size, max_sequence_length, hidden_size).
And the second shape will be accumulated, and finally will become a tensor of (300k, 256, 1024), and it will be out of memory.
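For anyone hitting the same thing, one way to avoid accumulating that second tensor is the Trainer's `preprocess_logits_for_metrics` hook (if your transformers version has it); a rough sketch:
```python
from transformers import Trainer

# keep only the (batch_size, num_classes) logits so nothing else gets accumulated
def keep_class_logits(logits, labels):
    if isinstance(logits, tuple):
        logits = logits[0]
    return logits

# model, training_args, eval_dataset and compute_metrics are assumed to be defined as usual
trainer = Trainer(
    model=model,
    args=training_args,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=keep_class_logits,
)
```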
And if i don't accumulate it, the code can work well.<|||||>Yes, that is what I was saying earlier: the model does not return the predictions only but some hidden states. Not sure which option will deactivate the second one.<|||||>Yes, thanks for your help! And it would be nice if this could be improved. |
transformers | 17,200 | closed | almost all codes that related to generation in examples/pytorch/**_no_trainer.py have bugs | ### System Info
```shell
transformers branch main
```
### Wrong code in examples/pytorch/**_no_trainer.py
```
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
generated_tokens = accelerator.unwrap_model(model).generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
**gen_kwargs,
)
generated_tokens = accelerator.pad_across_processes(
generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
)
labels = batch["labels"]
if not args.pad_to_max_length:
# If we did not pad to max length, we need to pad the labels too
labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id)
generated_tokens, labels = accelerator.gather((generated_tokens, labels))
generated_tokens = generated_tokens.cpu().numpy()
labels = labels.cpu().numpy()
if args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
if isinstance(generated_tokens, tuple):
generated_tokens = generated_tokens[0]
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
# If we are in a multiprocess environment, the last batch has duplicates
if accelerator.num_processes > 1:
if step == len(eval_dataloader):
decoded_preds = decoded_preds[: len(eval_dataloader.dataset) - samples_seen]
decoded_labels = decoded_labels[: len(eval_dataloader.dataset) - samples_seen]
else:
samples_seen += decoded_labels.shape[0]
```
Here, in the for loop, `step` will never equal `len(eval_dataloader)`, so this should be modified to `if step == len(eval_dataloader) - 1`.
and
`samples_seen += decoded_labels.shape[0]`
`decoded_labels` is a list produced by `postprocess_text()`,
and a list object has no attribute `shape`.
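For reference, a sketch of the end of the loop with both fixes applied (derived from the snippet above; not a tested patch):
```python
# If we are in a multiprocess environment, the last batch has duplicates
if accelerator.num_processes > 1:
    if step == len(eval_dataloader) - 1:  # enumerate never reaches len(eval_dataloader)
        decoded_preds = decoded_preds[: len(eval_dataloader.dataset) - samples_seen]
        decoded_labels = decoded_labels[: len(eval_dataloader.dataset) - samples_seen]
    else:
        samples_seen += len(decoded_labels)  # decoded_labels is a plain list, so use len()
```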
GLHF
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just run the examples scripts provided in the readme
### Expected behavior
```shell
samples_seen exceeds the dataset size
and also the attribute error
```
| 05-12-2022 07:44:56 | 05-12-2022 07:44:56 | I believe that's in @muellerzr's todo list!<|||||>Yes it is, duplicate of https://github.com/huggingface/transformers/issues/17214#event-6600140325
Will be getting to this on Wednesday once I'm back from vacation |
transformers | 17,199 | closed | Faster implementation for SentencePieceExtractor | # What does this PR do?
This PR is to improve `SentencePieceExtractor` extract method performance, which took several minutes when targeting vocabularies of tens of thousands of words.
For 44,876 words ( [repository](https://huggingface.co/rinna/japanese-gpt-1b) ), it used to take 290 seconds, but now it takes 0.2 seconds.
I've added a simple test, but let me know if you need anything else.
The experimental conditions and code are as follows.
result
```
vocabulary length: 44876, max word length: 16
normal: 290.2607123851776 secs
improved: 0.18401217460632324 secs
```
code
```python
import pickle
import time
from collections import defaultdict
from typing import List
vocab = pickle.load(open('vocab.pkl', 'rb'))
def normal(vocab: dict) -> List[str]:
merges = []
for piece_l in vocab.keys():
for piece_r in vocab.keys():
merge = f"{piece_l}{piece_r}"
piece_id = vocab.get(merge, None)
if piece_id:
merges += [(piece_l, piece_r, piece_id)]
return merges
def improved(vocab: dict) -> List[str]:
merges = []
prefixes = dict()
for word in vocab.keys():
for i in range(len(word)):
prefixes[word[: i + 1]] = {word} | prefixes.setdefault(word[: i + 1], set())
for word in vocab.keys():
if len(prefixes[word]) > 1:
for candidate in prefixes[word]:
if word != candidate:
if candidate[len(word) :] in vocab:
piece_id = vocab.get(candidate, None)
merges += [(word, candidate[len(word) :], piece_id)]
return merges
print(f'vocabulary length: {len(vocab)}, max word length: {max(len(word) for word in vocab.keys())}')
start = time.time()
result_normal = normal(vocab)
print(f'normal: {time.time() - start} secs')
start = time.time()
result_improved = improved(vocab)
print(f'improved: {time.time() - start} secs')
# confirm that results match
assert sorted(result_normal, key=lambda val: val[2]) == sorted(result_improved, key=lambda val: val[2])
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@n1t0, @LysandreJik
| 05-12-2022 06:56:30 | 05-12-2022 06:56:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17199). All of your documentation changes will be reflected on that endpoint.<|||||>Please have a look and let me know your feedback.
@sgugger, @LysandreJik, @patil-suraj, @n1t0<|||||>cc @Narsil and @SaulLu <|||||>Hi @e-mon ,
Thanks for looking into this.
Is this code used often ? This code was written hastily and was supposed to only run once (since we can save the `tokenizer.json` within the `tokenizers` library which should load again pretty fast.
Since we're looking at optimizing this code I propose another version which seems even faster:
https://gist.github.com/Narsil/a6b927c4973d4d0a63b1765cfff38e55
(I am using a smaller vocab for faster testing, since the slow is really excruciatingly slow).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Or shall we just parallize it?
```
# coding=utf-8
# Copyright 2018 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utilities to convert slow tokenizers in their fast tokenizers counterparts.
All the conversions are grouped here to gather SentencePiece dependencies outside of the fast tokenizers files and
allow to make our dependency on SentencePiece optional.
"""
import warnings
from typing import Dict, List, Tuple
from tokenizers import AddedToken, Regex, Tokenizer, decoders, normalizers, pre_tokenizers, processors
from tokenizers.models import BPE, Unigram, WordPiece
import multiprocessing
from functools import partial
from .utils import requires_backends
def merge_core(vocab, vocab_scores, piece_l):
    # called once per vocabulary piece via Pool.map, so `piece_l` is a single piece string
    merges = []
    for piece_r in vocab.keys():
        merge = f"{piece_l}{piece_r}"
        piece_score = vocab_scores.get(merge, None)
        if piece_score:
            merges += [(piece_l, piece_r, piece_score)]
    return merges
class SentencePieceExtractor:
"""
Extractor implementation for SentencePiece trained models. https://github.com/google/sentencepiece
"""
def __init__(self, model: str):
requires_backends(self, "sentencepiece")
from sentencepiece import SentencePieceProcessor
self.sp = SentencePieceProcessor()
self.sp.Load(model)
def extract(self, vocab_scores=None) -> Tuple[Dict[str, int], List[Tuple]]:
"""
By default will return vocab and merges with respect to their order, by sending `vocab_scores` we're going to
order the merges with respect to the piece scores instead.
"""
sp = self.sp
vocab = {sp.id_to_piece(index): index for index in range(sp.GetPieceSize())}
if vocab_scores is not None:
vocab_scores, reverse = dict(vocab_scores), True
else:
vocab_scores, reverse = vocab, False
        pool_obj = multiprocessing.Pool()
        merges = pool_obj.map(partial(merge_core, vocab, vocab_scores), vocab.keys())
# Merges
# merges = []
# for piece_l in vocab.keys():
# for piece_r in vocab.keys():
# merge = f"{piece_l}{piece_r}"
# piece_score = vocab_scores.get(merge, None)
# if piece_score:
# merges += [(piece_l, piece_r, piece_score)]
        merges = sorted([item for sublist in merges for item in sublist], key=lambda val: val[2], reverse=reverse)
merges = [(val[0], val[1]) for val in merges]
return vocab, merges
``` |
transformers | 17,198 | closed | Fix contents in index.mdx to match docs' sidebar | # What does this PR do?
Fix: Currently the sections in the content part of `index.mdx` do not match the sections in `_toctree.yml`. | 05-12-2022 06:49:05 | 05-12-2022 06:49:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,197 | closed | Fix minor style error in Spanish docs | # What does this PR do?
This PR is a minor fix of style since CircleCI is red due to style. FYI I did this in clean environment
```
pip install hf-doc-builder -U
pip install -e ".[dev]"
make style
```
Part of #15947 | 05-12-2022 06:26:47 | 05-12-2022 06:26:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger <|||||>Thank you very much @osanseviero! I was missing this part: `pip install -e ".[dev]"`. Was really frustrating but a lesson learned. 🚀 |
transformers | 17,196 | closed | Log the decoder chosen by GenerationMixin | # What does this PR do?
When calling `model.generate(content, log_decoder=True)`, the PR would log which decoder and warper(s) are actually used in generation.
I have a demo where I show text generated with different options (top_k, typical_p, repetition_penalty, num_beams, etc.). The final chosen decoding strategy is not obvious. It is tricky to test by comparing outputs because a generative model often returns different text on multiple runs.
By design the function tolerates mistakes -- if there is a missing arg (`typical_p=0.5` but no `do_sample=True`) or mismatched value (`typical_p=3`) or typo'd arg (`numBeams=2`) then the function silently chooses another decoding strategy. The code does not flag these because the remaining `**kwargs` are passed to the model.
I believe the logger is the best place to check whether decoding actually happened as expected.
Example usage: https://colab.research.google.com/drive/1DpMnZkSCtZIiaONoxfzYxYI4vgiTNYLN?usp=sharing
- ~~The first commit is unnecessary thanks to #17186~~ Rebased on this PR and adding one additional section to the documentation about typical decoding
- I'm open to renaming or removing `log_decoder` to always do `logger.info` in these places
- If we always do `logger.info`, I could move logger calls into `BeamSearchScorer`. Trying to avoid adding too many args
- Could use `logger.warn` if these issues warrant it
Discussion: https://discuss.huggingface.co/t/logging-which-decoder-selected-in-generation/18133
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 05-12-2022 05:56:33 | 05-12-2022 05:56:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @mapmeld,
Thanks for the PR.
To me, that is a bit too much of an edge-case and I'm not very happy with cluttering the generation code with `if - else ` statements.
@gante @patil-suraj what do you think? <|||||>Hey @mapmeld! Thank you for the PR 👍
I'm also not a fan of all the `if` statements on a function whose complexity is already over the top. Perhaps we could remove all the `if` branches, keep the logging statements, but lower their logging level to `debug`. That way, a user could get all those values by setting the appropriate logging level, and it would be invisible in the vast majority of cases.
WDYT?<|||||>@patrickvonplaten @gante That makes sense to me, log.debug level, no extra argument. I've made a commit for that<|||||>Agree with @gante 's comment, using `logger.debug` and getting rid of those if-else statements sounds good to me.
I'm okay with having these loggings to make it more obvious which method is being used, will be useful in debugging IMO.<|||||>Sorry, I think I wasn't super clear in my last message.
Personally, I would prefer to not merge this PR because:
- the generation code is already very complex and hard to read (talking about the code-reading part here not what's displayed to the user), don't think adding 5,6 new logger statement lines help here
- How do users know that generate should be run in debug mode to display the logging statements - don't think many users will realize this
- If the decoding strategy is not obvious, we should improve the docs IMO
- If the user doesn't know what `top_p` does, I don't think she/he would know what a `TopPLogitsWarper` is -> I don't see the added value of displaying the names in a logger here
- Also not in line with how we use the logger in other places across the library<|||||>OK, will close then.
If I can suggest changes beyond logging to this section, here are some ideas:
- throwing exceptions in the current code if a decoding argument (`typical_p`) is ignored because of an unusable value or missing companion argument (`do_sample=True`)
- adding an argument to `generate()` naming the intended decoder, so it is clear in end-user code, and transformers can throw an exception for calls which don't go down the expected path for whatever reason
- specific decoding functions to replace the general `generate()`, where these functions can throw exceptions / using Python type hints / be more useful in code auto-complete tools
- implementing typical decoding in TensorFlow so there's more similarity between Torch and TensorFlow code<|||||>Thanks a lot @mapmeld - those are really nice suggestions! Also after some discussion we think it could make a lot of sense to do maybe the following:
- If `kwargs` are passed to `generate` that don't exist, then we throw a warning so a user is well aware if something is misspelled.
- Really like the idea of warning the user if an argument is used that cannot be activated - wondering if there is a good approach that would not force us to make a lot of `if ....` statements in `generate`. Any ideas how this could be checked in a very concise way?
<|||||>Also keen to hear suggestions from @gante :-) <|||||>> implementing typical decoding in TensorFlow so there's more similarity between Torch and TensorFlow code
(@mapmeld) Yeah, we are working on it :D TF generate should have a big release soon.
> Really like the idea of warning the user if an argument is used that cannot be activated - wondering if there is a good approach that would not force us to make a lot of if .... statements in generate. Any ideas how this could be checked in a very concise way?
(@patrickvonplaten) Without if's and else's, the cleanest solution would possibly be to hold some dictionary with all passed arguments, in addition to a set of accepted arguments for each generation type, and raise an exception with all unexpected arguments (e.g. `The passed arguments triggered greedy_search. However, for greedy_search, following arguments are not accepted: top_p. Please check the documentation here [link]`). We can actually implement it with a small effort -- the dictionary with all arguments is `locals()` at the start of the specific generation functions (e.g. `greedy_search()`) and the set of accepted arguments is the function signature except `**model_kwargs`. We can get the accepted `model_kwargs` from the model forward signature (it's not quite, but should be close enough) -- everything else that remains in `**model_kwards` is an unused parameter and should raise an exception.
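Something like this rough sketch (illustrative names, not an actual API):
```python
import inspect


def check_unexpected_kwargs(generation_fn, model_forward, passed_kwargs):
    accepted = set(inspect.signature(generation_fn).parameters)
    accepted |= set(inspect.signature(model_forward).parameters)
    unexpected = sorted(k for k in passed_kwargs if k not in accepted)
    if unexpected:
        raise ValueError(
            f"The passed arguments triggered `{generation_fn.__name__}`, "
            f"which does not accept: {unexpected}"
        )
```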
WDYT?<|||||>In a first step I was rather thinking about just warning the user if parameters are passed in `kwargs` that are not used (probs misspelled) <|||||>Adding sub-generation specific logging logic sounds very complex, would be open if we find a clean, concise solution but at the moment I'd like to prevent adding hardcoded lists of which generation parameter is relevant for which sub generation method (also hard to maintain)<|||||>@gante the solution sounds interesting - would need to see a PR for it to fully understand it. The problem I see is that we won't detect unnecessary generation parameters since they are inside `logits_processor` and `logits_warper`<|||||>Overall, also just want to say here that IMO two mistakes were made a while back:
- We've set defaults for some values which we should have never done IMO (`max_length` and `top_k`) have defaults which is quite counter productive for good logging
- We have allowed people to set generation parameters inside the config to which the method defaults to - in the aftermath this was too much "black-magic" and not at all visible/understandable for (new) users.
Will be very hard to remedy these things without breaking backward comp, but open to suggestions / comments!<|||||>Would it be possible for us to talk about it in the HF Slack? I would be interested in finding a part of this where I can contribute<|||||>Invited you :-) Let's chat on Slack |
transformers | 17,195 | closed | Different logits for single/batch inputs on T5ForConditionalGeneration | ### System Info
```shell
transformers 4.18.0
python 3.8.10
ubuntu
pytorch
T5ForConditionalGeneration
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch.nn.functional as F  # needed for F.softmax below
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
model.eval()
# sequences
seq1 = "summarize: Calling the model (which means the forward method) uses the labels for teacher forcing. This means inputs to the decoder are the labels shifted by one"
output1 = "calling the model uses the labels for teacher forcing. inputs to the decoder"
seq2 = "summarize: When you call the generate method, the model is used in the autoregressive fashion"
output2 = "the model is used in the autoaggressive fashion."
seq3 = "summarize: However, selecting the token is a hard decision, and the gradient cannot be propagated through this decision"
output3 = "the token is a hard decision, and the gradient cannot be propagated through this decision"
input_sequences = [seq1, seq2, seq3]
output_seq = [output1, output2, output3]
# encoding input and attention mask
encoding = tokenizer.batch_encode_plus(
input_sequences,
padding="longest",
truncation=True,
return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda")
# labels
target_encoding = tokenizer.batch_encode_plus(
output_seq, padding="longest", truncation=True, return_tensors="pt"
)
labels = target_encoding.input_ids.to("cuda")
labels[labels == tokenizer.pad_token_id] = -100
# Call the models
logits = model(input_ids=input_ids, labels=labels).logits
# Apply softmax() and batch_decode()
X = logits
X = F.softmax(X, dim=-1)
ids = X.argmax(dim=-1)
y = tokenizer.batch_decode(sequences=ids, skip_special_tokens=True)
print(y)
# results: batch_size=3
# [
# 'call the model uses the labels for teacher forcing inputs to the decoder are',
# 'model is used in the constructegressgressive fashion ',
# 'token can a token decision, and the gradient cannot be propagated through this decision '
# ]
# results: batch_size =1 i.e. consider 1 seq each time
# ['call the model uses the labels for teacher forcing inputs to the decoder are']
# ['the model is used in the auto-gressgressive fashion ']
# ['the token is a hard decision, and the gradient cannot be propagated through this decision ']
```
### Expected behavior
```shell
Having the same output sequences.
```
| 05-12-2022 04:07:37 | 05-12-2022 04:07:37 | @patrickvonplaten can you please have a look ? |
transformers | 17,194 | closed | Update data2vec.mdx to include a Colab Notebook link (that shows fine-tuning) | # What does this PR do?
This PR includes a link to a Colab Notebook that shows how to fine-tune the Data2Vec vision model on the task of image classification.
@sgugger @Rocketknight1 | 05-12-2022 01:51:57 | 05-12-2022 01:51:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, sure. Doing it in a while. <|||||>@sgugger done. |
transformers | 17,193 | closed | [run_seq2seq_qa.py] various issues | I was trying to use `examples/pytorch/question-answering/run_seq2seq_qa.py` to write a test and run into multiple issues.
A. the example is not working:
https://github.com/huggingface/transformers/blob/d1d5ebb16cc8500a3e4e1b30047312cc563ca87f/examples/pytorch/question-answering/README.md#fine-tuning-t5-on-squad20
1. running as is fails with:
```
ValueError: --answer_column' value 'answer' needs to be one of: id, title, context, question, answers
```
The example should say `--answer_column answers` (not `answer`)
2. ok, trying to move forward:
```
$ python examples/pytorch/question-answering/run_seq2seq_qa.py --model_name_or_path t5-small --dataset_name squad_v2 --context_column context --question_column question --answer_column answers --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 1 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_seq2seq_squad/
```
crashes with:
```
05/11/2022 18:02:23 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-d9a027917b78cfa7.arrow
Running tokenizer on validation dataset: 0%| | 0/12 [00:00<?, ?ba/s]05/11/2022 18:02:23 - INFO - datasets.arrow_dataset - Caching processed dataset at /home/stas/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-0b463497dc4250c6.arrow
Running tokenizer on validation dataset: 0%| | 0/12 [00:03<?, ?ba/s]
Traceback (most recent call last):
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 519, in main
eval_dataset = eval_examples.map(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2346, in map
return self._map_single(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/arrow_writer.py", line 510, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 1702, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1314, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 4 named labels expected length 1007 but got length 1000
```
B. If I try to switch from `run_qa.py` to this script, e.g.:
```
python examples/pytorch/question-answering/run_seq2seq_qa.py --model_name_or_path valhalla/t5-base-squad --tokenizer_name valhalla/t5-base-squad --dataset_name squad --output_dir ./xxx --overwrite_output_dir --optim adafactor --do_train --max_train_samples 3 --do_eval --max_eval_samples 1 --logging_strategy steps --logging_steps 1 --evaluation_strategy steps --eval_steps 1 --save_strategy steps --save_steps 1 --load_best_model_at_end --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --num_train_epochs 1 --report_to none --fp16
```
it crashes on:
```
Traceback (most recent call last): | 0/1 [00:00<?, ?it/s]
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 623, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1629, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1801, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 71, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output)
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 580, in post_processing_function
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3287, in batch_decode
return [
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3288, in <listcomp>
self.decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3326, in decode
return self._decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 547, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: 'list' object cannot be interpreted as an integer
```
basically:
```
preds == [[[nan, nan, ..., nan]]]
```
and there are 2 problems here:
1. it has one level too many of nesting - hence the error above
2. if I manually tweak it to pass `preds[0]` it then fails to deal with `nan` and then fails with:
```
Traceback (most recent call last):
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 687, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 623, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1629, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 1801, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 71, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output)
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 580, in post_processing_function
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3287, in batch_decode
return [
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3288, in <listcomp>
self.decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3326, in decode
return self._decode(
File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 548, in _decode
text = self._tokenizer.decode(token_ids[0], skip_special_tokens=skip_special_tokens)
TypeError: 'float' object cannot be interpreted as an integer
```
C. The test that exercises this script uses a local sample and it succeeds:
```
python examples/pytorch/question-answering/run_seq2seq_qa.py \
--model_name_or_path t5-small \
--context_column context \
--question_column question \
--answer_column answers \
--version_2_with_negative \
--train_file tests/fixtures/tests_samples/SQUAD/sample.json \
--validation_file tests/fixtures/tests_samples/SQUAD/sample.json \
--output_dir /tmp/debug_seq2seq_squad/ \
--overwrite_output_dir \
--max_steps=10 \
--warmup_steps=2 \
--do_train \
--do_eval \
--learning_rate=2e-4 \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=1 \
--predict_with_generate
```
but once switching to `--dataset_name squad_v2` it breaks.
Environment-wise I'm using all the latest versions of datasets, transformers, etc. Please let me know if you need any specific versions of anything if you can't reproduce those issues.
### Who can help?
Tagging @karthikrangasai who created this script, but of course if others know how to fix it please don't hesitate to step in. Thank you!
| 05-12-2022 01:20:56 | 05-12-2022 01:20:56 | No answer from @karthikrangasai, @LysandreJik - who would be a good person to look at these issues? Thank you.<|||||>I believe @patil-suraj has experience with similar scripts, would you like to have a look at this one when you have a minute, @patil-suraj ?<|||||>Great! Thank you for tagging Suraj, Lysandre! and thank you, Suraj for checking it<|||||>Is there any update on this?
I am also trying to use the script. Additionally, I have found some more issues with this:
1. It uses the doc_stride strategy to break long contexts, but at the end of the evaluation no special handling is done and it seems to just take into account the last feature extracted from an example (which does not seem like a good approach)
2. The post_process script relies on the feature set having an `example_id` column, but the Trainer hides that column, so the script breaks at that point. In the provided Colab, there is a tweak that "resets" the features dataset format to make it work. Maybe this could be brought into this script as well?
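For reference, the tweak in the Colab is roughly the following (a sketch; `eval_dataset` and the exact call site are my assumptions):
```python
import datasets

# put back the columns (e.g. `example_id`) that the Trainer removed, before post-processing
if isinstance(eval_dataset, datasets.Dataset):
    eval_dataset.set_format(
        type=eval_dataset.format["type"],
        columns=list(eval_dataset.features.keys()),
    )
```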
I hope this helps. It would be great to have a working script for this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,192 | closed | Remove duplicated os.path.join in Trainer._load_rng_state | # What does this PR do?
Remove duplicated os.path.join in `Trainer._load_rng_state`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 05-11-2022 22:18:06 | 05-11-2022 22:18:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,191 | closed | Mistake in the BART doc & inconsistency between code & doc | Version: Most recent in the Github repo.
Model: BART.
Description:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L627 says to look at `modeling_bart._prepare_decoder_inputs` to modify the default behavior of `decoder_attention_mask`, but there is no `_prepare_decoder_inputs` anywhere in the Hugging Face Transformers repository. I guess it's an artifact from a previous version, and that the function is now called `_prepare_decoder_attention_mask`. However, this method doesn't seem to look at the values of the decoder inputs anywhere, so I don't think it does what the doc says, i.e. mask the pad tokens. Or is this done somewhere else?
Thanks.
### Who can help?
@patil-suraj
### Expected behavior
```shell
Mask pad tokens by default in the decoder.
```
| 05-11-2022 22:05:28 | 05-11-2022 22:05:28 | One could fix this by putting
```python
if attention_mask is None:
attention_mask = input_ids != self.config.pad_token_id
```
somewhere<|||||>Hi @JulesGM ! Thanks for reporting this, the doc should be changed to mention the `_prepare_decoder_attention_mask` method. But note that `_prepare_decoder_attention_mask` prepares the causal mask for the decoder and then combines it with the `decoder_attention_mask` if it's passed by the user. The causal mask is not meant to ignore padding tokens, hence it doesn't look at the `decoder_input_ids`.
Also, we don't automatically prepare `decoder_attention_mask` because we can't always assume that the user wants to mask padding tokens. So the user should pass it, if they want it.<|||||>Great, thanks. I mentioned that because the doc says it did. Just FYI, other models also have references to the function that doesn't exist.
I wish there was an easy way to ignore pad tokens in the decoder input, to be able to condition on some already-generated text in the decoder, like scratchpads (https://arxiv.org/pdf/2112.00114.pdf). Right now it's very hard: I have to write a custom way to cache the previous positions so that the positional encodings are only incremented by the proper amount when caching.
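For concreteness, what I'd like to do is roughly this (a minimal sketch with an arbitrary checkpoint, nothing from the library itself):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

enc = tokenizer(["some source text"], return_tensors="pt")
dec = tokenizer(["already generated scratchpad text"], return_tensors="pt")

# explicitly mask pad tokens on the decoder side
decoder_attention_mask = (dec.input_ids != tokenizer.pad_token_id).long()

out = model(
    input_ids=enc.input_ids,
    attention_mask=enc.attention_mask,
    decoder_input_ids=dec.input_ids,
    decoder_attention_mask=decoder_attention_mask,
)
```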
|
transformers | 17,190 | closed | Fix numpy VisibleDeprecationWarning for question answering pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17128.
`VisibleDeprecationWarning` is addressed by specifying `dtype=object` when creating the numpy array. [This post](https://forums.fast.ai/t/visibledeprecationwarning-creating-an-ndarray-from-ragged-nested-sequences-is-deprecated/81774/3) provides a bit more context on what the warning means.
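For illustration only (not the exact code in this PR), the pattern is:
```python
import numpy as np

spans = [[0, 3, 7], [2, 5]]  # ragged/nested sequences of unequal length
# without dtype=object, NumPy >= 1.20 emits a VisibleDeprecationWarning here
arr = np.array(spans, dtype=object)
```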
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Here's the [link](https://github.com/huggingface/transformers/issues/17128) .
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik, @n1t0 | 05-11-2022 21:01:32 | 05-11-2022 21:01:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,189 | closed | Fine tuning error in /models/t5/modeling_t5.py | ### System Info
```shell
I keep getting this error:
"ValueError: not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
File "run_glue_t5.py", line 591, in <module>
main()
File "run_glue_t5.py", line 509, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1400, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1984, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2016, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1149, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1635, in forward
decoder_outputs = self.decoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1149, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 933, in forward
batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
0%| | 0/3796 [00:03<?, ?it/s]
2022-05-11 19:21:02,332 sagemaker-training-toolkit ERROR Reporting training FAILURE
2022-05-11 19:21:02,333 sagemaker-training-toolkit ERROR ExecuteUserScriptError:
ExitCode 1
ErrorMessage "ValueError: not enough values to unpack (expected 2, got 1)
0%| | 0/3796 [00:03<?, ?it/s]"
Command "/opt/conda/bin/python3.8 run_glue_t5.py --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path t5-small --num_train_epochs 1 --output_dir /opt/ml/model/t5_small --per_device_train_batch_size 64 --train_file /opt/ml/input/data/train/train.csv --validation_file /opt/ml/input/data/val/val.csv"
2022-05-11 19:21:02,333 sagemaker-training-toolkit ERROR Encountered exit_code 1
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For the T5 series, like 't5-small', the model definition in run_glue.py should be:
model = MT5ForConditionalGeneration.from_pretrained(...)
instead of
model = AutoModelForSequenceClassification.from_pretrained(...) # this line doesn't work
After changing the above line, it still fails with "ValueError: not enough values to unpack (expected 2, got 1)" from the line `batch_size, seq_length = input_shape` in modeling_t5.py.
```
import sagemaker
from sagemaker.huggingface import HuggingFace
hyperparameters = {
'model_name_or_path':'t5-small',
'output_dir':'/opt/ml/model/t5_small',
'max_seq_length':128,
'per_device_train_batch_size' : 64,
'learning_rate' : 2e-5,
'num_train_epochs': 1,
'do_train': True,
'train_file': '/opt/ml/input/data/train/train.csv',
'validation_file': '/opt/ml/input/data/val/val.csv',
}
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}
huggingface_estimator = HuggingFace(
entry_point='run_glue.py',
source_dir='./examples/pytorch/text-classification',
instance_type='ml.g5.xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
hyperparameters = hyperparameters
)
```
### Expected behavior
```shell
Training should start.
```
| 05-11-2022 19:39:02 | 05-11-2022 19:39:02 | Hi,
The `run_glue.py` script is only meant to work out-of-the-box for encoder-only Transformers, such as BERT, RoBERTa, DistilBERT, DeBERTa, etc.
T5 is an encoder-decoder model, and would require several changes to the script.<|||||>>
I see.
Thank you. |
transformers | 17,188 | closed | add shift_tokens_right in mT5 | # What does this PR do?
Adds the missing `shift_tokens_right` in FlaxMT5.
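For context, the helper being wired in is roughly the following (a sketch, not the exact diff):
```python
import jax.numpy as jnp

# rough sketch of the Flax-style helper
def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    shifted = jnp.zeros_like(input_ids)
    shifted = shifted.at[:, 1:].set(input_ids[:, :-1])
    shifted = shifted.at[:, 0].set(decoder_start_token_id)
    # replace possible -100 label-padding values with the pad token id
    shifted = jnp.where(shifted == -100, jnp.full_like(shifted, pad_token_id), shifted)
    return shifted
```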
Fixes #15771 | 05-11-2022 19:14:22 | 05-11-2022 19:14:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,187 | closed | Remove columns before passing to data collator | # What does this PR do?
Removes columns before they are passed to the data collator in the non `datasets.Dataset` case.
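As a rough illustration of the idea (an illustrative helper, not the exact `Trainer` implementation):
```python
import inspect

def keep_forward_columns(features, model):
    # drop the keys that the model's forward() signature does not accept
    accepted = set(inspect.signature(model.forward).parameters)
    return [{k: v for k, v in f.items() if k in accepted} for f in features]

# usage (illustrative): batch = data_collator(keep_forward_columns(features, model))
```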
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-11-2022 19:01:14 | 05-11-2022 19:01:14 | cc @sgugger<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,186 | closed | docs for typical decoding | # What does this PR do?
Adds a description for the `typical_p` parameter introduced in #15504, as the docs for this parameter were missing.
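For reference, a small usage sketch of the parameter (illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Typical decoding keeps tokens whose information content", return_tensors="pt")
# typical_p enables locally typical sampling, so do_sample must be True
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```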
@cimeister | 05-11-2022 16:36:36 | 05-11-2022 16:36:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,185 | closed | Unable to retrieve layers from model in tensorflow | ### System Info
```shell
transformers version: 4.18.0
platform: Google Colab
python version: 3.7.13
```
### Who can help?
@Rocketknight1
I am training a Roberta-large model for a classification task and I am using a pre-trained model to start with. For my task I want to freeze the embedding layer and the first few encoder layers, so that I can fine-tune the attention weights of the last few encoder layers. However, I cannot access the layers when using TensorFlow.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import RobertaTokenizer, TFRobertaModel
import tensorflow as tf
model = TFRobertaModel.from_pretrained('roberta-large')
model.get_layer(2)
### Expected behavior
```shell
This should have returned a layer instance, but instead it throws an error:
`ValueError: Was asked to retrieve layer at index 10 but model only has 1 layers.`
```
| 05-11-2022 16:07:02 | 05-11-2022 16:07:02 | Unfortunately, this method probably won't work for our models, because we implement the core of the model as a `MainLayer` class, and so the actual `Model` generally only has one "layer". In addition, our models and layers are implemented by subclassing, which means the order of the layers is not well-defined.
If you want to access sub-layers, you'll need to use [the actual Python structure of the class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_tf_roberta.py#L912), rather than Keras methods. So `model.roberta.embeddings` will give you the embedding layer, and `model.roberta.encoder.layer` will give you a list of the other model layers. [Depending on your model](https://github.com/huggingface/transformers/blob/a42242da7c44d64c66a878cca65bc86dd3f626af/src/transformers/models/roberta/modeling_tf_roberta.py#L587-L590), there may also be a `model.roberta.pooler`. For example, freezing the embeddings and the first few encoder blocks looks roughly like the sketch below (the split point is just an illustration):
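```python
from transformers import TFRobertaModel

model = TFRobertaModel.from_pretrained("roberta-large")

# freeze the embeddings and, say, the first 12 encoder blocks
model.roberta.embeddings.trainable = False
for block in model.roberta.encoder.layer[:12]:
    block.trainable = False

print(len(model.trainable_weights))  # fewer trainable variables after freezing
```
<|||||>Ah!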
Thanks for the clarification! |
transformers | 17,184 | closed | Forward outputs on multiple sequences is wrong | ### System Info
```shell
latest version of transformers
pytorch
python 3.10
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
model.eval()
# sequences
seq1 = "summarize: Calling the model (which means the forward method) uses the labels for teacher forcing. This means inputs to the decoder are the labels shifted by one"
output1 = "calling the model uses the labels for teacher forcing. inputs to the decoder"
seq2 = "summarize: When you call the generate method, the model is used in the autoregressive fashion"
output2 = "the model is used in the auto-aggressive fashion."
seq3 = "summarize: However, selecting the token is a hard decision, and the gradient cannot be propagated through this decision"
output3 = "the token is a hard decision, and the gradient cannot be propagated through this decision"
input_sequences = [seq1, seq2, seq3]
output_seq = [output1, output2, output3]
# encoding input and attention mask
encoding = tokenizer(
input_sequences,
padding="longest",
max_length=128,
truncation=True,
return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda")
# labels
target_encoding = tokenizer(
output_seq, padding="longest", max_length=128, truncation=True
)
labels = target_encoding.input_ids
labels = torch.tensor(labels).to("cuda")
labels[labels == tokenizer.pad_token_id] = -100
# Call the models
logits = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).logits
# Apply softmax() and batch_decode()
X = logits
X = F.softmax(X, dim=-1)
ids = X.argmax(dim=-1)
y = tokenizer.batch_decode(sequences=ids, skip_special_tokens=True)
# results: batch_size=3
['call the model uses the labels for teacher forcing inputs to the decoder are',
'the model is used in the auto-aggressive fashion the the the',
'the token is a hard decision, and the gradient cannot be propagated through this decision ']
# results: batch_size =1 i.e. consider 1 seq each time
['call the model uses the labels for teacher forcing inputs to the decoder are']
['the model is used in the auto-aggressive fashion ']
['the token is a hard decision, and the gradient cannot be propagated through this decision ']
```
### Expected behavior
```shell
running model on a batch should give the same result as running on a single sequence
```
| 05-11-2022 15:37:39 | 05-11-2022 15:37:39 | |
transformers | 17,183 | closed | Add onnx export cuda support | # What does this PR do?
* Add CUDA support for `transformers.onnx.export_pytorch`.
* Add test for `transformers.onnx.export_pytorch` on CUDA.
# Context
While executing `optimum.ORTTrainer` with `--deepspeed` and `--fp16` enabled, the export to ONNX will fail, since not all layers of the models are implemented for half-precision on CPU. Tracing on CUDA is needed as a workaround.
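For illustration, the intended usage looks roughly like this (argument order and the new `device` keyword are a sketch, not the final API):
```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.models.bert import BertOnnxConfig
from transformers.onnx import export

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
onnx_config = BertOnnxConfig(model.config)

# `device="cuda"` is the knob this PR adds; tracing then happens on GPU
export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx"), device="cuda")
```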
## Who can review?
@michaelbenayoun @lewtun
| 05-11-2022 15:32:45 | 05-11-2022 15:32:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Great, thanks for adding this @JingyaHuang!
>
> If I understand correctly, this enables tracing half-precision models?
Hi @michaelbenayoun ,
Yes, but only for PyTorch since `tf2onnx` has [specified the device to be CPU](https://github.com/onnx/tensorflow-onnx/blob/main/tf2onnx/convert.py#L609).<|||||>> Thanks for iterating on this @JingyaHuang !
>
> I've left a few final nits, but this is looking really nice :)
>
> Could you please confirm that the slow tests pass on both CPU and GPU devices?
>
> ```
> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py
> ```
Hi @lewtun , by running the slow tests on CPU and GPU, I got the following results. It seems that some models and tasks failed. Trying to find out the root of the problems now.
```
======================================================================================== short test summary info ========================================================================================
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_12_bert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks: dict_ke...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_71_mobilebert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks: d...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_12_bert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported tasks:...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_20_big_bird_question_answering - AssertionError: big-bird, question-answering -> Expected all tensors to be on th...
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_71_mobilebert_next_sentence_prediction - ValueError: next-sentence-prediction is not a supported task, supported ...
========================================================== 5 failed, 177 passed, 77 skipped, 43 deselected, 158 warnings in 2478.21s (0:41:18) ==========================================================
```<|||||>Oh yes, we recently reverted the next-sentence-prediction feature in #17276, so rebasing on `main` should fix those. The BigBird error looks more related to your PR, so let me know if you need some help debugging it :)<|||||>> Oh yes, we recently reverted the next-sentence-prediction feature in #17276, so rebasing on `main` should fix those. The BigBird error looks more related to your PR, so let me know if you need some help debugging it :)
Hi @lewtun , thanks for the details. After rebasing, all checks for bert passed. The problem with big bird comes from a [bug in the modeling](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py#L3102): when creating the `token_type_ids`, the device is not specified, which leads to a mismatch of devices. I just fixed that. Now all `pytorch_export` checks pass, both on CPU and on CUDA.<|||||>> After rebasing, all checks for bert passed.
Cool! Just to double-check, did you run the tests:
* On a CPU machine (no GPU, CUDA installed)
* On a GPU machine
I'd like to be sure we don't accidentally break the test suite for developers coding on CPU machines :) |
transformers | 17,182 | closed | ViT and Swin symbolic tracing with torch.fx | # What does this PR do?
This PR adds support for ViT and Swin symbolic tracing with torch.fx.
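A minimal usage sketch (checkpoint chosen arbitrarily):
```python
from transformers import ViTModel
from transformers.utils.fx import symbolic_trace

model = ViTModel.from_pretrained("google/vit-base-patch16-224")
# vision models are traced with `pixel_values` instead of `input_ids`
traced = symbolic_trace(model, input_names=["pixel_values"])
```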
Fixes #16320
| 05-11-2022 14:31:50 | 05-11-2022 14:31:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,181 | closed | Fix LED documentation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes several typos and formatting issues in docstrings.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR, but @sgugger is probably well-suited.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-11-2022 14:21:30 | 05-11-2022 14:21:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,180 | closed | ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet. | ### System Info
```shell
Hi @patrickvonplaten I got the same error as you mentioned above. I did what you said but I still get an error like below. Can you please help?
ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet.
My alphabet.json:
{"labels": ["", "", "", "\u2047", " ", "'", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "\u00e7", "\u00f6", "\u00fc", "\u011f", "\u0131", "\u015f"], "is_bpe": false}
my vocab.json:
{"[PAD]": 0, "": 1, "": 2, "[UNK]": 3, "|": 4, "'": 5, "a": 6, "b": 7, "c": 8, "d": 9, "e": 10, "f": 11, "g": 12, "h": 13, "i": 14, "j": 15, "k": 16, "l": 17, "m": 18, "n": 19, "o": 20, "p": 21, "q": 22, "r": 23, "s": 24, "t": 25, "u": 26, "v": 27, "w": 28, "x": 29, "y": 30, "z": 31, "ç": 32, "ö": 33, "ü": 34, "ğ": 35, "ı": 36, "ş": 37}
my added_tokens.json:
{}
my special_tokens_map.json:
{"bos_token": "null", "eos_token": "null", "unk_token": "[UNK]", "pad_token": "[PAD]"}
my tokenizer_config.json:
{"unk_token": "[UNK]", "bos_token": "null", "eos_token": "null", "pad_token": "[PAD]", "do_lower_case": false, "word_delimiter_token": "|", "replace_word_delimiter_char": " ", "special_tokens_map_file": null, "name_or_path": "model/checkpoint-6000", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2ProcessorWithLM"}
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Same error and configuration files as in the System Info section above.
### Expected behavior
```shell
The `Wav2Vec2ProcessorWithLM` should load with these tokenizer files without raising the alphabet/vocabulary mismatch error.
```
| 05-11-2022 13:24:33 | 05-11-2022 13:24:33 | Hey @erdoganensar,
Could you provide a code snippet that shows how I can reproduce the error?<|||||>I have a very similar problem related to this issue using HuBERT large on English corpus. Here is my code snippet:
```
processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="some_3gram_correct.arpa",
)
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
```
Here is the first 20 lines of the `some_3gram_correct.arpa`:
```
\data\
ngram 1=759
ngram 2=3580
ngram 3=5747
\1-grams:
-0.77224493 <unk>
-inf <s> -0.96890455
-inf </s> -0.96890455
-1.0275165 </s>
-1.8815907 it -0.576264
-2.4739406 looks -0.47474432
-2.598626 like -0.19285807
-1.7276717 a -0.42839557
-2.9495435 nice -0.13558547
-2.9495435 day -0.3788458
-2.122181 outside -0.6023683
-1.761344 that -0.4610282
-2.0622265 's -0.32115284
-2.6496518 about -0.41595408
```
Here is the error message I got:
`ValueError: The tokens {'H', 'Y', 'Q', 'M', 'D', 'I', 'F', 'P', 'J', 'V', 'X', 'B', 'C', 'U', 'E', 'S', 'N', 'R', 'Z', 'L', 'T', 'K', 'A', 'G', 'O', 'W'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'H', 'Y', 'Q', 'M', 'D', 'I', 'F', 'P', 'J', 'V', 'X', 'B', 'C', 'U', 'E', 'S', 'N', 'R', 'Z', 'L', 'T', 'K', 'A', 'G', 'O', 'W'} in the decoder's alphabet.`
How should I proceed? Thanks in advance.<|||||>@erdoganensar note that the problem here is that the tokenizer's vocab has upper-case letters but the decoder has lowercase letters. Now, from your 3-gram it looks like the decoder should indeed have lowercase letters. So what you should do here is the following, before running the above code snippet:
```python
import json

from transformers import Wav2Vec2Processor, Wav2Vec2CTCTokenizer

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
tokenizer_vocab_dict = processor.tokenizer.get_vocab()
tokenizer_vocab_lowercase = {k.lower(): v for k,v in tokenizer_vocab_dict.items()}
vocab_file = "vocab.json"
with open(vocab_file, "w", encoding="utf-8") as f:
f.write(json.dumps(tokenizer_vocab_lowercase, ensure_ascii=False))
processor.tokenizer = Wav2Vec2CTCTokenizer(vocab_file)
processor.save_pretrained("path/to/processor")
```
Having done this you can execute the following code which should then work correctly:
```python
processor = Wav2Vec2Processor.from_pretrained("path/to/processor")
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="some_3gram_correct.arpa",
)
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
```<|||||>@patrickvonplaten Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,179 | closed | Ensure tensors are at least 1d for pad and concat | # What does this PR do?
Ensures that tensors are at least 1d in `pad_and_concatenate` utility functions, and uses `atleast_1d` methods uniformly in the entire file.
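A small illustration of why this matters (not code from the PR itself):
```python
import torch

scalar = torch.tensor(1.5)                    # 0-d tensor, e.g. a gathered scalar metric
vector = torch.atleast_1d(scalar)             # shape (1,), safe to concatenate
merged = torch.cat([vector, torch.atleast_1d(torch.tensor(2.5))], dim=0)
print(merged.shape)  # torch.Size([2])
```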
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-11-2022 12:58:23 | 05-11-2022 12:58:23 | cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>let me add a quick unit test<|||||>@sgugger I am not sure what's up with code quality CI. I cannot reformat the file it complains about locally. Any idea what could be causing this?
I have removed the changes to the offending file, let's see if that fixes it.<|||||>Ok, CI is green now :D<|||||>Thanks again! |
transformers | 17,178 | closed | Fix typo in bug report template | # What does this PR do?
Fix a typo in issue templates.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? | 05-11-2022 12:28:55 | 05-11-2022 12:28:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,177 | closed | Update self-push workflow | # What does this PR do?
Update self-push CI workflow file:
- `tests_fetcher.py` is updated to output a json file, containing a dictionary mapping test categories to the identified test files (which is used by the updated push CI below)
- Reorganize the tests into models (e.g. `models/bert`, `models/gpt2`, etc.) and modeling categories (`pipeline`, `tokenization`), same as in scheduled CI
- `notification_service.py` and `self-scheduled.yml` are updated to use `[single/multi]-gpu` as artifact name prefixes (i.e. no more `-docker` at the end): with this minimal change, `notification_service.py` could be reused
Some workflow runs:
- [push CI](https://github.com/huggingface/transformers/actions/runs/2306332297)
- [scheduled CI](https://github.com/huggingface/transformers/actions/runs/2306421236)
Some tests failed intentionally (to verify their reports). The reports can be found on the `transformers-ci-feedback-tests` channel.
### TODO:
- create new report channel and add the channel ID to the workflow file
**I added some reviews that contain some of my questions.**
@sgugger Maybe you could have a look for the changes in `test_fetcher.py`?
@stas00 Maybe for the changes regarding DeepSpeed and multi-gpu configurations? | 05-11-2022 12:09:21 | 05-11-2022 12:09:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I find it difficult to make sense of some of the changes, due to diff being hard to follow
OK @stas00 , the big diff is probably because I removed unused blocks. No real big change - I just copied from scheduled CI.
See below if you would like to have a quick look.
My only questions are:
- why did we previously use `options: --gpus 0` in `run_tests_torch_cuda_extensions_multi_gpu`?
- could we use `options: --gpus all` for the single-GPU case as well as multi-GPU?
prev.
```
image: nvcr.io/nvidia/pytorch:21.03-py3
options: --gpus 0
```
now.
```
huggingface/transformers-pytorch-deepspeed-latest-gpu
options: --gpus all
```
and
prev.
```
- name: Install dependencies
run: |
apt -y update && apt install -y libaio-dev
pip install --upgrade pip
pip install .[deepspeed-testing]
```
now
```
- name: Re-compile DeepSpeed
working-directory: /workspace
run: |
pip install deepspeed # installs the deps correctly
rm -rf DeepSpeed
git clone https://github.com/microsoft/DeepSpeed && cd DeepSpeed && rm -rf build
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install -e . --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check
```
<|||||>thank you for highlighting the changes @ydshieh - that's super helpful.
1. So for gpus:
- multi-gpu should be `--gpus all` (needs 2 gpus)
- single-gpu should be `--gpus 0` (must have only one gpu)
so looking at the diff the original seems to be correct. perhaps not everywhere?
2. For dependencies the original is correct.
Have a look at what it signifies:
```
extras["deepspeed-testing"] = extras["deepspeed"] + extras["testing"] + extras["optuna"]
```
so the change is missing important dependencies install.
and the new instructions aren't correct.
We only want the bleed edge (your now) install only for nightly build. self-push should use the released `deepspeed` version, that `pip install .[deepspeed-testing]` takes care of (but which of course can be moved into the docker if it's running via the docker image). If it's already there, then there is no need for that last pip call either.
Bottom line - no change from the original in either case logically.
If I missed something please let me know.
<|||||>> so looking at the diff the original seems to be correct. perhaps not everywhere?
The current main branch has a job `run_tests_torch_cuda_extensions_multi_gpu` in `self-push.yml` which has `--gpus 0`.
In the latest commit in this PR, I reverted to the original version regarding DeepSpeed parts, but set `--gpus all` for multi-gpu job.
(Remark 1: some places in `self-scheduled.yml` have to be fixed.)
Remarks 2: I checked this doc [expose-gpus-for-use](https://docs.docker.com/config/containers/resource_constraints/#expose-gpus-for-use), and think we can still use `--gpus all` even if the host machine has only 1 GPU. `--gpus 0` is necessary only if the host has multiple GPUs but we want to use only 1 of them.
> 2. For dependencies the original is correct.
> self-push should use the released `deepspeed` version, that `pip install .[deepspeed-testing]`
~~I will change back to the original version for this part~~ (Done), thank you.
<|||||>As long as the tests are run with `CUDA_VISIBLE_DEVICES=0` for `run_tests_single_gpu` jobs it indeed doesn't matter if more than 1 gpu is available.
But it's critical we ensure that it is set correctly, otherwise tests requiring a single gpu will get skipped.
Thank you for fixing where the setting are incorrect, @ydshieh!<|||||>> Before merging it, could you do a test run when modifying the `setup.py` to ensure that all tests are run correctly? Thank you!
I had to fix a bug (i.e. when the test list is `tests`, i.e. when `setup.py` is changed).
A full test workflow run is [here](https://github.com/huggingface/transformers/actions/runs/2318670278).
After looking some failures, I am convinced that this PR is ready to be merged (the failures are the same as in scheduled CI runs).
Thank you for the reviews! |
transformers | 17,176 | closed | Add ONNX support for Longformer | # What does this PR do?
This PR contributes to #16308 and addresses #16463 by adding support for exporting [Longformer](https://arxiv.org/abs/2004.05150) to ONNX.
The following necessary changes were already made:
- [x] `LongformerOnnxConfig` implemented
- [x] ONNX opset version >= 12
- [x] fix in model definition with `nn.functional.pad` (see https://github.com/huggingface/transformers/issues/13126#issuecomment-993645323)
However, there are still some open issues I'd need help with:
- [x] ~The conversion to ONNX fails when a `global_attention_mask` is provided that contains at least one `1`. It raises the following error: `Only consecutive 1-d tensor indices are supported in exporting aten::index_put to ONNX.`. So far, I have been unable to track down which line triggers this error. If we find it, we can probably rewrite the model implementation using this workaround: https://pytorch.org/docs/stable/onnx.html#writes-sets~ → issue resolved by rewriting accesses
- [x] ~The validation check currently fails with a high value difference (3.77). The JIT conversion raises the following warnings. Maybe some of them are the reasons for it:~ → tracked down and fixed
```
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1569: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1256: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
is_global_attn = is_index_global_attn.flatten().any().item()
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:569: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:805: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:808: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert query.size() == key.size()
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:598: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(attn_scores.size()) == [
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:873: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert seq_len % (window_overlap * 2) == 0
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:874: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size()[:3] == value.size()[:3]
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:875: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size(3) == 2 * window_overlap + 1
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:669: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), "Unexpected size"
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1312: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: #16308, #16463
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? → default Longformer and ONNX tests
## Who can review?
Maybe @ChainYo and/or @lewtun can help with this? 😊 | 05-11-2022 11:53:09 | 05-11-2022 11:53:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17176). All of your documentation changes will be reflected on that endpoint.<|||||>Hey :hand: excellent PR, the code looks just fine!
I wonder if you tried to specify the right `--feature` while converting your `LongFormer` model?
Which model did you try and what `--feature` did you choose?<|||||>> Hey ✋ excellent PR, the code looks just fine!
Thanks!
> I wonder if you tried to specify the right `--feature` while converting your `LongFormer` model? Which model did you try and what `--feature` did you choose?
I'm currently experimenting with [longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096). The reported difference of 3.77 is with `--feature=default`, but there are large differences with all other features as well (`masked-lm`: 14.1, `sequence-classification`: 0.04, `question-answering`: 0.25, `token-classification`: 0.19, `multiple-choice`: 0.1).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @deutschmn, did you finally get good results with `Longformer`?<|||||>@ChainYo Unfortunately, I didn't get a chance to dive in further yet. I'll try to find some time, but if someone else has any ideas, please let me know.<|||||>Hey @ChainYo! I found some time and fixed the issues. Can we reopen? 😊
Adding support for the `global_attention_mask` was pretty easy after I tracked down the unsupported indexing lines, but it took quite a deep dive to find out where the value difference came from. There were two main issues:
1. `masked_fill_` produces different results when converting to ONNX. I replaced it with a simple `where`.
2. `as_strided` for chunking doesn't work either, presumably because it relies on the underlying memory layout, which is different in ONNX. The perfect solution would be to use `unfold`, but unfortunately that op is not supported, so I added a slow fallback that works in every case. Once there's support for `unfold`, we can get rid of it.
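To illustrate the first point, the change is essentially this kind of before/after (a simplified sketch, not the actual Longformer code):
```python
import torch

scores = torch.randn(2, 4)
mask = torch.tensor([[True, False, False, True],
                     [False, False, True, False]])

# in-place variant that diverged after export in my experiments
filled = scores.clone()
filled.masked_fill_(mask, float("-inf"))

# out-of-place equivalent that exports cleanly
where_version = torch.where(mask, torch.full_like(scores, float("-inf")), scores)
assert torch.equal(filled, where_version)
```
<|||||>> Hey @ChainYo! I found some time and fixed the issues. Can we reopen?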
Hey, thanks for iterating on this. I will ping @lewtun to open this again.<|||||>Thanks a lot for re-working on this @deutschmn ❤️ ! Ping me when you'd like a review :)<|||||>Thanks for reopening, @lewtun. Would be brilliant if you could review now 😊 <|||||>Thanks for your reviews, @lewtun and @patrickvonplaten 😊 I worked in all your feedback and added Longformer to the ONNX tests. Slow ONNX + Longformer tests seem to work fine:
<details>
<summary><code>RUN_SLOW=1 pytest tests/models/longformer/test_modeling_longformer.py</code> → 55 passed, 10 skipped, 14 warnings</summary>
```
=================================================================== test session starts ===================================================================
platform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/patrick/Projects/open-source/transformers, configfile: setup.cfg
plugins: xdist-2.5.0, hypothesis-6.46.3, forked-1.4.0, timeout-2.1.0, dash-2.4.1
collected 65 items
tests/models/longformer/test_modeling_longformer.py ...s.sss..................... [100%]
============================= warnings summary =============================
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/Projects/open-source/transformers/src/transformers/image_utils.py:222: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None):
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:228: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
interpolation: int = Image.BILINEAR,
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:295: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
interpolation: int = Image.NEAREST,
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:311: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
interpolation: int = Image.NEAREST,
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py:328: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
interpolation: int = Image.BICUBIC,
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/auto_augment.py:39: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/auto_augment.py:39: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:39: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
Image.NEAREST: 'nearest',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:40: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
Image.BILINEAR: 'bilinear',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:41: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
Image.BICUBIC: 'bicubic',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:42: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
Image.BOX: 'box',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:43: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
Image.HAMMING: 'hamming',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/timm/data/transforms.py:44: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
Image.LANCZOS: 'lanczos',
tests/models/longformer/test_modeling_longformer.py::LongformerModelTest::test_training_gradient_checkpointing
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========== 55 passed, 10 skipped, 14 warnings in 86.62s (0:01:26) ==========
```
</details>
<details>
<summary><code>RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "longformer"</code> → 12 passed, 377 deselected, 228 warnings</summary>
```
=========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/patrick/Projects/open-source/transformers, configfile: setup.cfg
plugins: xdist-2.5.0, hypothesis-6.46.3, forked-1.4.0, timeout-2.1.0, dash-2.4.1
collected 389 items / 377 deselected / 12 selected
tests/onnx/test_onnx_v2.py ............ [100%]
============================================================================================ warnings summary =============================================================================================
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1610: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/_tensor.py:627: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
return self.item().__format__(format_spec)
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/nn/functional.py:2165: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1297: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
is_global_attn = is_index_global_attn.flatten().any().item()
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:565: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:832: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert (
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:835: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert query.size() == key.size()
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:785: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if hidden_states.size(1) == window_overlap * 2:
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:594: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(attn_scores.size()) == [
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:900: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert seq_len % (window_overlap * 2) == 0
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:901: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size()[:3] == value.size()[:3]
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:902: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_probs.size(3) == 2 * window_overlap + 1
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:668: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), "Unexpected size"
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1072: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(global_attn_scores.size()) == [
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert list(global_attn_output.size()) == [
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:691: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
len(is_local_index_global_attn_nonzero[0]), -1
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/Projects/open-source/transformers/src/transformers/models/longformer/modeling_longformer.py:1353: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if padding_len > 0:
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/onnx/symbolic_helper.py:719: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1
warnings.warn("allowzero=0 by default. In order to honor zero value in shape use allowzero=1")
tests/onnx/test_onnx_v2.py: 12 warnings
/Users/patrick/.pyenv-x86/versions/3.9.10/envs/transformers-x86_64/lib/python3.9/site-packages/torch/onnx/symbolic_opset9.py:2905: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
warnings.warn("Exporting aten::index operator of advanced indexing in opset " +
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
====================================================================== 12 passed, 377 deselected, 228 warnings in 3599.78s (0:59:59) ======================================================================
```
</details><|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I merged `main` into this branch to resolve conflicts. Gently pinging @lewtun and @patrickvonplaten for a re-review 😊 <|||||>
> Hey @ChainYo! I found some time and fixed the issues. Can we reopen? 😊
>
> Adding support for the `global_attention_mask` was pretty easy after I tracked down the unsupported indexing lines, but it took quite a deep dive to find out where the value difference came from. There were two main issues:
>
> 1. `masked_fill_` produces different results when converting to ONNX. I replaced it with a simple `where`.
> 2. `as_strided` for chunking doesn't work either, presumably because it relies on the underlying memory layout that's different in ONNX. The perfect solution would be to use `unfold`, but unfortunately, that op is not supported. So I added a slow fallback that works in every case. Once there's support for `unfold`, we can get rid of that.
Hi @deutschmn, thanks for contributing! As for the tracing problem of `masked_fill_` and `as_strided`, they are both supported in [`torch.onnx.symbolic_opset9`](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_opset9.py). Have you tried interpreting the forward pass of `LongformerSelfAttention` with a `symbolic` method to apply the symbolic tracing?
__REF__
* [Symbolic doc in PyTorch](https://pytorch.org/docs/stable/onnx.html#static-symbolic-method)
* An example: how it was done for DeBERTa
https://github.com/huggingface/transformers/blob/df28de0581aaf6d8742c4988137caac2b6602ca8/src/transformers/models/deberta/modeling_deberta.py#L122-L137<|||||>Hey @JingyaHuang, thanks for your feedback 😊 I haven't looked into symbolic tracing yet. I'm travelling right now, but I'll have another look when I'm back in a couple of weeks. |
transformers | 17,175 | closed | [M2M100 doc] remove duplicate example | # What does this PR do?
Removes duplicate translation example from doc. | 05-11-2022 10:32:12 | 05-11-2022 10:32:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,174 | closed | logging documentation update | See https://github.com/huggingface/transformers/issues/17094
@LysandreJik
| 05-11-2022 09:09:42 | 05-11-2022 09:09:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,173 | closed | model google/muril-base-cased has effectively infinite model_max_length | ### System Info
```shell
>>> transformers.__version__
'4.18.0'
>>> tokenizers.__version__
'0.12.1'
```
### Who can help?
@LysandreJik
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The muril-base-cased model has the model_max_length filled out incorrectly. For example:
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
>>> tokenizer.model_max_length
1000000000000000019884624838656
```
### Expected behavior
I am fairly certain the correct size is 512
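In the meantime, overriding the value at load time works as a stopgap (assuming 512 really is the right limit for this checkpoint):
```python
from transformers import AutoTokenizer

# 512 is an assumption based on the underlying BERT architecture, not a value read from the repo config
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased", model_max_length=512)
print(tokenizer.model_max_length)  # 512
```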
| 05-11-2022 07:45:01 | 05-11-2022 07:45:01 | Thank you for the issue @AngledLuffa ! It's interesting to know that this is an important feature for you.
Unfortunately we can't do much from `transformers` as `transformers` only retrieves the `model_max_length` key from the [`tokenizer_config.json` file](https://huggingface.co/google/muril-base-cased/blob/main/tokenizer_config.json). When the `model_max_length` key is not filled in, a default "infinite" value is filled in.<|||||>I tried making a topic here:
https://discuss.huggingface.co/t/muril-base-cased-has-infinity-for-model-max-length/17838<|||||>Hi @AngledLuffa,
I think the [new feature](https://huggingface.co/blog/community-update) that has just been deployed on the Hub might solve your problem!
It is now possible to open discussions on a hub model (or even propose a change in the tokenizer configuration!). Would you be interested in trying this out? It should ping the authors and invite them to make the change on the hub!<|||||>Thanks, I'll give that a try<|||||>Closing as the issue seems to be solved thanks to your message on the hub :confetti_ball: |
transformers | 17,172 | closed | T5 zero-shot classification pipeline | ### Feature request
The current zero-shot classification pipeline supports models like BERT and BART, but there does not seem to be any support for T5.
I notice that the new [mT5-mnli from Alan Turing Institute](https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli) has code for extracting output probabilities and mapping them to the MNLI entailment/contradiction labels, but they are not able to integrate it into the pipeline.
I am not sure what is exactly missing for this to be in place. What needs to be done in Transformers and how should the output from the model be adapted?
@patrickvonplaten
@anton-l
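To make the request concrete, this is roughly the kind of label scoring the mT5-mnli model card describes, done by hand (a sketch — the prompt format and the label verbalizations/ids below are my assumptions and depend on how the checkpoint was finetuned):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

premise = "The new offices open next week."
hypothesis = "The offices are opening soon."
inputs = tokenizer(f"xnli: premise: {premise} hypothesis: {hypothesis}", return_tensors="pt")

# Assumed label verbalizations; take the first sub-token id of each
label_ids = [tokenizer(label, add_special_tokens=False).input_ids[0] for label in ["0", "1", "2"]]

decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
with torch.no_grad():
    first_step_logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

probs = torch.softmax(first_step_logits[label_ids], dim=-1)
print(dict(zip(["entailment", "neutral", "contradiction"], probs.tolist())))
```
A zero-shot pipeline for T5 would essentially have to wrap this kind of logic instead of a classification head.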
### Motivation
The medium/large T5 models have very impressive zero-shot abilities, and the combination of finetuning on NLI tasks ([Yin et al.](https://arxiv.org/abs/1909.00161)) and the classification pipeline is a great way to use NLP for easy classification tasks.
### Your contribution
My contribution will depend on what needs to be done. I am finetuning several models on MNLI, and can in any case contribute active in testing. | 05-11-2022 06:27:50 | 05-11-2022 06:27:50 | Hi @peregilk
T5 is not supported in zero-shot classification pipeline because it does not have a sequence classification head. With T5 the seq classification problem is formulated as text-to-text generation problem which is not possible to support in this zero-shot pipeline. <|||||>@patil-suraj
Thanks for the answer!
Just because I am trying to get a better understanding of this: Is there a fundamental difference between the output probabilities from a seq classification head and the output probabilites generated by the code at the bottom of this page: https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli ? Or is this simply related to the way the pipelines are made?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi @peregilk
>
> T5 is not supported in zero-shot classification pipeline because it does not have a sequence classification head. With T5 the seq classification problem is formulated as text-to-text generation problem which is not possible to support in this zero-shot pipeline.
Hi, thank you for the answer. I am interested in using t5 for classification. Are there any examples on fine-tuning? How should we interpret class tokens in the decoder output? If there's documentation on token IDs for each classification task it would be very helpful. |
transformers | 17,171 | closed | Mlflowcallback fix nonetype error | # What does this PR do?
This PR fixes some edge-case errors / race conditions with the garbage collector caused by the use of a `__del__` finalizer in the MLflowCallback.
In some cases, the following error may occur at the end of a script execution:
```
Exception ignored in: <function MLflowCallback.__del__ at 0x7f5ad8d73c10>
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/transformers/integrations.py", line 879, in __del__
TypeError: 'NoneType' object is not callable
```
This PR checks that `self._ml_flow` contains the `active_run()` method. Since it's in a `__del__` finalizer, `self._ml_flow` may still be of type 'module' rather than `None`, despite no longer referencing a usable `mlflow` module.
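In short, the guard looks roughly like this (a simplified sketch, not the exact diff):
```python
def __del__(self):
    # Only touch mlflow if the module reference still exposes a callable active_run();
    # during interpreter shutdown the attribute may already have been torn down.
    if callable(getattr(self._ml_flow, "active_run", None)) and self._ml_flow.active_run() is not None:
        self._ml_flow.end_run()
```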
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger I'm wondering if a unit test should be added for integrations callback. I don't see any for other integrations tho. | 05-11-2022 04:25:15 | 05-11-2022 04:25:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger can this be merged or is anything else needed?<|||||>I probably was waiting for the tests to pass and it disappeared from my notifications. Thanks for the ping! |
transformers | 17,170 | closed | Update training.mdx | Addition of a progress bar to the evaluation loop
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds a progress bar to the evaluation loop so that people can see how their evaluation process is going when they run their models
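A sketch of what the updated snippet looks like, reusing the variables already defined in training.mdx (`eval_dataloader`, `device`, `metric`):
```python
from tqdm.auto import tqdm

model.eval()
for batch in tqdm(eval_dataloader):
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        outputs = model(**batch)
    predictions = torch.argmax(outputs.logits, dim=-1)
    metric.add_batch(predictions=predictions, references=batch["labels"])
metric.compute()
```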
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-11-2022 04:00:57 | 05-11-2022 04:00:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17170). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,169 | closed | ImportError: cannot import name 'ESMForMaskedLM' from 'transformers' | ### System Info
```shell
Hi, I am trying to use the esm model for protein sequence embeddings on Colab notebook:
1) I installed transformers with torch
!pip install transformers[torch]
2) Follow the example here: https://huggingface.co/facebook/esm-1b
from transformers import ESMForMaskedLM, ESMTokenizer
tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False )
model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
sequence_Example = "QERLKSIVRILE"
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
3) And got the import error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-7-8f12e906d421> in <module>()
----> 1 from transformers import ESMForMaskedLM, ESMTokenizer
2 tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False )
3 model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
4 sequence_Example = "QERLKSIVRILE"
5 encoded_input = tokenizer(sequence_Example, return_tensors='pt')
ImportError: cannot import name 'ESMForMaskedLM' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
Any idea how to solve this? Thanks in advance.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(as described)
### Expected behavior
```shell
(as described)
```
| 05-11-2022 02:56:08 | 05-11-2022 02:56:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, did you manage to solve it? I got the same issue<|||||>No... No one helped...<|||||>Hi, I found this over on hugging face
https://discuss.huggingface.co/t/solved-model-esm-1b-is-not-defined/17104
<|||||>> Hi, I found this over on hugging face https://discuss.huggingface.co/t/solved-model-esm-1b-is-not-defined/17104
This doesn't work either.
So why ESM can't be used with transformers even ESM2 has been published and ESM is just on the main page of this github. |
transformers | 17,168 | closed | Correct & Improve Doctests for LayoutLMv2 | # What does this PR do?
Ready for Review now.
Issue #16292
Add correct doctests for LayoutLMv2
Note: LayoutLMv2 depends on `detectron2`, `torchvision` and `tesseract`; passing doctests requires their installation. I documented this in the docstring.
## Who can review?
@patrickvonplaten @ydshieh @sgugger | 05-11-2022 01:01:33 | 05-11-2022 01:01:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @ydshieh Sorry for the late response and thank you for reviewing! Just made the changes addressing all the comments here.<|||||>Thanks for this PR - although the update in the docs doesn't seem correct to me.<|||||>@NielsRogge just started a follow-up PR #17409. Let me know if there's anything else I can do. |
transformers | 17,167 | closed | How to change the Text embedder(Layoutlmv2Tokenizer) in huggingface LayoutLMv2 model? | Hi, I’m a beginner in Transformers huggingface.
I wonder how to change the text embedding model in LayoutLMv2 (from the original one to KoBERT).
I use these models through Hugging Face, so I use the corresponding processor for LayoutLMv2 / LayoutXLM.
I think I need to change the text tokenizer for data loading, and swap the text-embedding weights of the original LayoutLMv2 model for KoBERT's, as in the code below.
```
kobert_name = "monologg/kobert"
bert_model = BertModel.from_pretrained(kobert_name)
kobert_tokenizer = KoBertTokenizer.from_pretrained(kobert_name)
feature_extractor = LayoutLMv2FeatureExtractor(apply_ocr=False)
processor = LayoutLMv2Processor(feature_extractor, kobert_tokenizer) => Returned below error
# ValueError: Received a KoBertTokenizer for argument tokenizer, but a ('LayoutLMv2Tokenizer', 'LayoutLMv2TokenizerFast') was expected.
model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutxlm-base", num_labels=len(labels))
# Need to exchange the layoutlmv2.embeddings. as kobert parameters(weights)
```
But I got an error in the part where the processor is defined… I think `LayoutLMv2Processor` only accepts the original tokenizer.
I’m used this [code](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2/CORD)
What should I modify to change the text embedding?
Please share any tips for beginner. 😂 | 05-11-2022 00:28:22 | 05-11-2022 00:28:22 | Hi,
It's not possible to plug in a custom tokenizer into `LayoutLMv2Processor`, as `LayoutLMv2Tokenizer` was designed specifically to handle bounding boxes.
If you plan to use a custom tokenizer, then it means that you need to handle addition of special tokens, padding/truncation yourself, which is also what the LayoutLMv2 authors did, as seen [here](https://github.com/microsoft/unilm/blob/ca82fd4e5c7b0e4594eee33f8dd08388183787cd/layoutlmft/examples/run_funsd.py#L165) when tokenizing the words, then defining a custom data collator (to batch examples together) [here](https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/data_collator.py). <|||||>Thank you for kindly reply. It's really helpful. |
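Roughly, the manual alignment looks like this (a simplified sketch, not the exact code from the linked script):
```python
def encode_example(words, boxes, tokenizer, max_length=512):
    # repeat each word-level bounding box for every subword the tokenizer produces
    input_ids, token_boxes = [tokenizer.cls_token_id], [[0, 0, 0, 0]]
    for word, box in zip(words, boxes):
        word_ids = tokenizer(word, add_special_tokens=False).input_ids
        input_ids.extend(word_ids)
        token_boxes.extend([box] * len(word_ids))
    input_ids = input_ids[: max_length - 1] + [tokenizer.sep_token_id]
    token_boxes = token_boxes[: max_length - 1] + [[1000, 1000, 1000, 1000]]
    attention_mask = [1] * len(input_ids)
    # padding to max_length (and aligning labels the same way) follows the same pattern
    return {"input_ids": input_ids, "bbox": token_boxes, "attention_mask": attention_mask}
```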
transformers | 17,166 | closed | Remove unnecessary columns for all dataset types in `Trainer` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
As discussed with @sgugger offline, this PR adds automatic removal of unnecessary columns for arbitrary dataset types. This mimics the already present logic for removing unnecessary columns from `datasets.Dataset`s, but uses a wrapper around the data collation function to enable it for other datasets. The end result is intended to be the same for both cases.
The implementation is a bit more complex than it really has to be, in order to facilitate the same logging UX as with the `datasets.Dataset` case.
Tests have been added/updated where applicable.
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] ~Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.~ Discussed on Slack!
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-10-2022 19:47:23 | 05-10-2022 19:47:23 | cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah, yeah, we don't need to check whether it's an iterable dataset! Let me fix that |
transformers | 17,165 | open | Set Transformer | ### Model description
This issue proposes addition of the Set Transformer, a set2seq transformer for learning to order sets of items.
## Short description of the model and link to the paper
The transformer is one of a family of models that implements permutation invariance in order to learn ordering relations. This particular implementation uses stacked attention blocks to achieve the invariance. Set transformers are good for a multitude of problems - the toy problem is the TSP, where vertices are ordered optimally, though the framing can also be applied to any sequence generation tasks where the sequence items are known ahead of time. See [this review](https://jair.org/index.php/jair/article/view/12839) for a description of the family of problems.
This particular transformer is the Set Transformer, presented in [Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks](http://proceedings.mlr.press/v97/lee19d.html).
This isn't immediately designed for text or images or speech, but is a distinct transformer architecture that has been applied to text and image data.
## Link to the implementation if it is open-source
There's an official PyTorch implementation at [https://github.com/juho-lee/set_transformer](https://github.com/juho-lee/set_transformer)
We've already got this up & running as a baseline in an upcoming IJCAI paper
## Link to the model weights if they are available.
not immediately available, but we could work something out
### Open source status
- [x] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
Me & a colleague can get this up onto HF, we have a running implementation and the reference implementation is both on github and licensed MIT. Reference implementation by @juho-lee (author) and @yoonholee. | 05-10-2022 18:44:53 | 05-10-2022 18:44:53 | @leondz can I work on this issue?
|
transformers | 17,164 | closed | [Deepspeed tests] missing file | Forgot to commit a new config file in https://github.com/huggingface/transformers/pull/12695
This PR is fixing it | 05-10-2022 17:07:51 | 05-10-2022 17:07:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,163 | closed | Fix template init | # What does this PR do?
During the revamp of all model inits, a typo stayed in the model templates init, which results in the model templates test failing. This PR fixes that. | 05-10-2022 15:22:37 | 05-10-2022 15:22:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,162 | closed | Fixing the output of code examples in the preprocessing chapter | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17161 (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-10-2022 15:19:17 | 05-10-2022 15:19:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,161 | closed | Inconsistent output of code example in Proprocessing Chapter in Docu | The following [docu](https://huggingface.co/docs/transformers/preprocessing) contains some inconsistent/wrong output examples for the `Build Tensor` subsection:
```python
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded_input)
```
Output:
```txt
{'input_ids': tensor([[ 101, 153, 7719, 21490, 1122, 1114, 9582, 1623, 102],
[ 101, 5226, 1122, 9649, 1199, 2610, 1236, 102, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 0]])}
```
The output does not contain the applied truncation and padding and also misses the last sentence of `batch_sentences`. This also applies to the TensorFlow example.
| 05-10-2022 15:16:48 | 05-10-2022 15:16:48 | |
transformers | 17,160 | closed | Add magic method to our TF models to convert datasets with column inference | Left to do:
- [x] Add docstring
- [x] Figure out what type to set for `dataset` (since `datasets` isn't imported)
- [x] Set default values for `batch_size` / `shuffle`?
- [x] Do we need to document this anywhere besides the docstring?
- [x] Anything else I forgot? (Reviewers please yell at me) | 05-10-2022 14:09:02 | 05-10-2022 14:09:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey all, this is the method in `transformers` that I moved all the column inference code to! I also need some advice:
- How do we handle an input type of `Dataset` when `datasets` may not be installed?
- Should this be moved to a utility function that takes `model` as an argument rather than a method on `TFPreTrainedModel`?
- Should I set default values for any arguments to reduce the amount of typing users have to do?
- How should I make sure users find out about this (besides revamping the examples and notebooks once it's merged)?<|||||>> Should this be moved to a utility function that takes model as an argument rather than a method on TFPreTrainedModel?
All our models inherit from `TFPreTrainedModel`, so it should be fine
> Should I set default values for any arguments to reduce the amount of typing users have to do?
I'd keep the same defaults as in `datasets`, for a consistent experience. But I'm not the most experienced in this domain :D
> How should I make sure users find out about this (besides revamping the examples and notebooks once it's merged)?
We are getting to the point where we have a lot of content to announce (these changes, the metrics working correctly, generate and its updates, new models, ...), maybe we can start a once-a-week TF communication of some sort! <|||||>Quick update: I think this is ready to merge, but I've only really tested it with the updated `to_tf_dataset()` method in datasets, which hasn't been merged yet (but is due very soon!). As such, I don't want to merge it until that's in, because there could be edge case issues with the old method that I haven't seen.<|||||>I've merged the `to_tf_dataset` update so I'm going to merge this one too - though I think it will be a silent 'soft launch' until there's a new release of `datasets`, to avoid any unforeseen problems. Since this code only adds the new method, it shouldn't disrupt any existing workflows before it's ready to be used. |
transformers | 17,159 | closed | Mismatching between sequences and scores in beam_search | https://github.com/huggingface/transformers/blob/6d80c92c77593dc674052b5a46431902e6adfe88/src/transformers/generation_beam_search.py#L88
Here are two problems:
1. Output of beam_search() and generate() are not the same.
2. Mismatch between sequences and scores in beam_search.
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import LogitsProcessorList, MinLengthLogitsProcessor, BeamSearchScorer,MaxLengthCriteria, StoppingCriteriaList
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.resize_token_embeddings(len(tokenizer))
model.to("cuda")
seq1 = "summarize: beamsearch and generate does not give the same result"
encoding = tokenizer(
[seq1],
padding="longest",
max_length=128,
truncation=True,
return_tensors="pt",
)
encoder_input_ids, attention_mask = encoding.input_ids.to("cuda"), encoding.attention_mask.to("cuda")
num_beams = 2
input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
model_kwargs = {
"encoder_outputs": model.get_encoder()(
encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
)
}
beam_scorer = BeamSearchScorer(
batch_size=1,
do_early_stopping=True,
num_beams=num_beams,
device=model.device,
)
outputs = model.beam_search(input_ids, beam_scorer,
logits_processor=None,
early_stopping=True,
no_repeat_ngram_size=4,
max_length=64,
**model_kwargs,
output_scores=True,
return_dict_in_generate=True)
# beam_search result":
out = tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)
print(" ".join(out))
>> beamsearch() and generate() does not give the same result. beamsearch does not give the same result
#generate results:
out = model.generate(encoder_input_ids,
max_length=64,
early_stopping=True,
num_beams=2,
do_sample=False,
num_return_sequences=1)
tokenizer.batch_decode(out, skip_special_tokens=True)
>> ['beamsearch and generate does not give the same result. beamsearch does not provide the same result as beamsearch.']
# Remark1: Generate() and beam_search() does not give the same result.
# Remark2: If I understand correctly, outputs.sequences can be calculated from outputs.scores:
idx = []
for x in outputs.scores:
i = x[0].exp().argmax().item() # here I take the first beam as I think beams are sorted.
idx.append(i)
idx = torch.tensor([idx]).to("cuda")
print(idx)
# idx derived from outputs.scores:
# tensor([[11638, 13173, 11, 3806, 405, 59, 428, 8, 337, 741,
#          3, 5, 11638, 13173, 11, 3806, 405, 59, 428, 8,
#          337, 741, 3, 5]])
# outputs.sequences:
# tensor([[ 0, 11638, 13173, 11, 3806, 405, 59, 428, 8, 337,
#          741, 3, 5, 11638, 13173, 405, 59, 428, 8, 337,
#          741, 3, 5, 1]])
# What have I missed here? My end goal is to get the log-probs of the tokens in outputs.sequences.
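# One way to get per-token log-probs for outputs.sequences without relying on
# outputs.scores: re-score the returned sequences with a single forward pass.
# (A sketch: these are the raw model log-probs, i.e. before logits processors
# such as no_repeat_ngram_size are applied.)
with torch.no_grad():
    rescored = model(
        input_ids=encoder_input_ids,
        attention_mask=attention_mask,
        decoder_input_ids=outputs.sequences[:, :-1],
    )
log_probs = torch.log_softmax(rescored.logits, dim=-1)
token_log_probs = log_probs.gather(-1, outputs.sequences[:, 1:].unsqueeze(-1)).squeeze(-1)
print(token_log_probs)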
``` | 05-10-2022 14:00:31 | 05-10-2022 14:00:31 | @rafikg The same problem. Take a look at the example for the function beam_search: it seems that the shape of input_ids is (num_beams, sequence_length) rather than the documented (batch_size, sequence_length).
Moreover, traces beam_search -> BeamSearchScorer -> BeamHypotheses(https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/generation_beam_search.py#L808-L812), input to beam_search function is actually **n-best list of hypotheses**.<|||||>@czy-orange I follow the documentation to write this script, so I am not sure how to modify it ?<|||||>@rafikg Not quite sure but have a try please. The first argument is `generate()` is `input_ids` and `encoder_input_ids` should be passed to `generate()` in `model_kwargs`.<|||||>@czy-orange Not really, generate accepts the input_ids (output of the tokenizer)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,158 | closed | Guide to create custom models in Spanish | # What does this PR do?
<!--
This PR translates the guide on [how to create custom models](https://github.com/huggingface/transformers/blob/main/docs/source/en/create_a_model.mdx) to Spanish, as discussed in #15947.
-->
Linked to #15947
## Before submitting
- [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@omarespejel said he will review, otherwise anyone 😁.
| 05-10-2022 13:18:21 | 05-10-2022 13:18:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the suggestions, I've committed all the changes you proposed.<|||||>@ignacioct muchas gracias por tu PR! 🤗 Vamos a hacerle merge. If there is any other doc that you would like to translate you can tell me.
@sgugger LGTM 🔥 |
transformers | 17,157 | closed | [Trainer]: Resume training not consistent with large effective batch size | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.15.37-1-lts-x86_64-with-glibc2.33
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provide a MWE for this issue by forking `transformers` and writing a failing test case. This can be reproduced via the steps below:
1. Run `git clone https://github.com/atreyasha/transformers`
2. Create a virtual environment and install the `[dev-torch]` extras
3. Run `pytest tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_randomness_large_accumulation`
Following is a snippet of the failing test:
```python
def test_resume_training_with_randomness_large_accumulation(self):
# For more than 1 GPUs, since the randomness is introduced in the model and with DataParallel (which is used
# in this test for more than 2 GPUs), the calls to the torch RNG will happen in a random order (sometimes
# GPU 0 will call first and sometimes GPU 1).
random_torch = not torch.cuda.is_available() or torch.cuda.device_count() <= 1
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
train_dataset = RegressionDataset(length=10)
eval_dataset = RegressionDataset()
config = RegressionModelConfig(a=0, b=2, random_torch=random_torch)
model = RegressionRandomPreTrainedModel(config)
tmp_dir = self.get_auto_remove_tmp_dir()
args = RegressionTrainingArguments(tmp_dir,
save_steps=1,
per_device_train_batch_size=1,
gradient_accumulation_steps=6,
learning_rate=0.1,
num_train_epochs=2)
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
(a, b) = trainer.model.a.item(), trainer.model.b.item()
model = RegressionRandomPreTrainedModel(config)
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train(resume_from_checkpoint=os.path.join(tmp_dir, "checkpoint-1"))
(a1, b1) = trainer.model.a.item(), trainer.model.b.item()
self.assertAlmostEqual(a, a1, delta=1e-8)
self.assertAlmostEqual(b, b1, delta=1e-8)
```
This should produce an error because the regression variables are not the same or similar:
```console
> self.assertAlmostEqual(a, a1, delta=1e-8)
E AssertionError: 0.1493121236562729 != 0.1493326723575592 within 1e-08 delta (2.0548701286315918e-05 difference)
```
I observed that this issue occurs when the effective batch size (`per_device_train_batch_size` * `gradient_accumulation_steps`) is strictly greater than half of the training set size. In practice, we normally don't encounter such scenarios since effective batch sizes tend to be much smaller than even half the size of the training set.
But in any case, I came across this bug while playing with some settings and thought it to be worth reporting; especially since resuming training consistently should (AFAICT) not be affected by the effective batch size.
### Cause
I did some debugging and found the cause. If the gradient accumulation steps are greater than half of the training set size, `num_update_steps_per_epoch` evaluates to `1` in the line below:
https://github.com/huggingface/transformers/blob/1766fa21599fc99442027cab15139f146b2301a3/src/transformers/trainer.py#L1300
Correspondingly, the partially completed epoch in which the checkpoint was saved is treated as complete because of the exact division below:
https://github.com/huggingface/transformers/blob/1766fa21599fc99442027cab15139f146b2301a3/src/transformers/trainer.py#L1398
As a result, when training is resumed from the checkpoint, it is treated as starting a fresh epoch when it should actually resume partway through the previously incomplete epoch.
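A standalone sketch of the resume bookkeeping described above (the variable names mirror those used in the trainer, but this is illustrative arithmetic, not the actual implementation):
```python
# Values for the failing test: checkpoint-1 is written after the first optimizer
# update, which happens midway through the first epoch.
gradient_accumulation_steps = 6
num_update_steps_per_epoch = 1  # 10 // 6, from the floor division above
global_step_at_checkpoint = 1

# Because the partially finished epoch divides evenly into "one update per epoch",
# the resumed run believes a full epoch was already completed ...
epochs_trained = global_step_at_checkpoint // num_update_steps_per_epoch
# ... and therefore skips zero micro-batches inside the current epoch on resume.
steps_to_skip_in_current_epoch = (
    global_step_at_checkpoint % num_update_steps_per_epoch
) * gradient_accumulation_steps

print(epochs_trained, steps_to_skip_in_current_epoch)  # 1 0
# The uninterrupted run, by contrast, still processes micro-batches 7-10 of the
# first epoch (with their accumulated gradients), so the two runs diverge.
```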
### Possible fix
The following diff fixes this issue:
```diff
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index aa54f2af1..6fad35ff0 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1297,7 +1297,7 @@ class Trainer:
         len_dataloader = None
         if has_length(train_dataloader):
             len_dataloader = len(train_dataloader)
-            num_update_steps_per_epoch = len_dataloader // args.gradient_accumulation_steps
+            num_update_steps_per_epoch = math.ceil(len_dataloader / args.gradient_accumulation_steps)
             num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1)
             num_examples = self.num_examples(train_dataloader)
             if args.max_steps > 0:
@@ -1531,8 +1531,7 @@ class Trainer:
 
                 if (step + 1) % args.gradient_accumulation_steps == 0 or (
                     # last step in epoch but step is always smaller than gradient_accumulation_steps
-                    steps_in_epoch <= args.gradient_accumulation_steps
-                    and (step + 1) == steps_in_epoch
+                    (step + 1) == steps_in_epoch
                 ):
                     # Gradient clipping
                     if args.max_grad_norm is not None and args.max_grad_norm > 0 and not self.deepspeed:
```
However, this also changes the behaviour of gradient accumulation: any leftover steps at the end of an epoch now trigger an optimizer update, even when they comprise fewer micro-batches than the accumulation steps. Maybe there is a better way that I can't see.
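To make that behavioural change concrete, the sketch below compares the per-epoch update schedule for 10 micro-batches with `gradient_accumulation_steps=6`; it only models the boundary condition discussed here, not the full training loop:
```python
def optimizer_update_steps(steps_in_epoch, accumulation_steps, flush_at_epoch_end):
    """Return the 1-based micro-batch indices after which an optimizer step runs."""
    updates = []
    for step in range(steps_in_epoch):
        at_accumulation_boundary = (step + 1) % accumulation_steps == 0
        at_epoch_end = (step + 1) == steps_in_epoch
        if at_accumulation_boundary or (flush_at_epoch_end and at_epoch_end):
            updates.append(step + 1)
    return updates

# Current behaviour for this scenario (steps_in_epoch > accumulation_steps): the
# trailing 4 micro-batches only accumulate gradients, which carry over into the
# next epoch's first update.
print(optimizer_update_steps(10, 6, flush_at_epoch_end=False))  # [6]

# With the proposed diff: the leftover micro-batches trigger a smaller update at
# the epoch boundary, so num_update_steps_per_epoch becomes ceil(10 / 6) == 2.
print(optimizer_update_steps(10, 6, flush_at_epoch_end=True))   # [6, 10]
```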
### Expected behavior
The test case above should pass, meaning the regression parameters from the two runs should match within the delta.
| 05-10-2022 10:25:56 | 05-10-2022 10:25:56 | I don't see exactly what fails off the top of my head for this degenerate case, but I won't investigate this anytime soon as I have more pressing items on my TODO (like fixing bugs that happen in real life ;-) ).<|||||>Thanks @sgugger for the prompt response. I did some debugging and found a possible cause and fix. Updated the description above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,156 | closed | Train args defaulting None marked as Optional | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/16701 by marking every `TrainingArguments` argument as `Optional` if it currently defaults to `None`. Applications that depend on static type hints will no longer be confused by arguments that default to `None` even though their declared type is `bool`.
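A hedged before/after sketch of the kind of change involved; the field name below is illustrative, not necessarily one of the arguments the PR actually touches:
```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ExampleArguments:
    # Before: annotated as `str` while defaulting to None, which static type
    # checkers and downstream tooling flag or mis-infer.
    #   run_name: str = field(default=None, metadata={"help": "A descriptor for the run."})

    # After: the None default is reflected in the annotation.
    run_name: Optional[str] = field(default=None, metadata={"help": "A descriptor for the run."})
```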
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 05-10-2022 09:32:30 | 05-10-2022 09:32:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,155 | closed | Fix typo of variable names for key and query projection layer | `self.pos_proj` and `self.pos_q_proj` should be renamed to `self.pos_key_proj` and `self.pos_query_proj`, matching the PyTorch implementation.
# What does this PR do?
Fix typos in the variable names in DeBERTaV2.
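An illustrative sketch of the rename; the layer shapes here are assumptions for readability and are not copied from the modeling file:
```python
import torch.nn as nn

hidden_size, all_head_size = 768, 768

# Before: terse names that hide which projection serves keys and which serves queries.
pos_proj = nn.Linear(hidden_size, all_head_size)    # relative-position projection for keys
pos_q_proj = nn.Linear(hidden_size, all_head_size)  # relative-position projection for queries

# After: names spell out their role, matching the original PyTorch implementation.
pos_key_proj = nn.Linear(hidden_size, all_head_size)
pos_query_proj = nn.Linear(hidden_size, all_head_size)
```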
| 05-10-2022 07:36:35 | 05-10-2022 07:36:35 | _The documentation is not available anymore as the PR was closed or merged._ |